A student might ask "why is all this embedded in what I need to use?", but if what we need now more than ever is critical thinking, the answer is not obviously "because it's useful, because we endorse your use of it, because it's inevitable or inescapable". In purely empirical terms, it would be false to say that we embedded it in those tools with a pedagogical purpose in mind: it's not our professional choice, even for those of us who think well of AI. It would be false to say that the companies embedding it have a clear use case for that embedding in mind, or a clear understanding of what users will do with it. Nor are they even thinking of user preference in embedding it, any more than Mark Zuckerberg cares whether Facebook users want his vision of virtual reality to replace the current Facebook interface. If you have an answer better than "we have no choice but to accept what is being done to the infrastructure of the suite of instructional technology tools that undergraduates have to use", I think that answer is what is called for in this situation, rather than surrender to the inevitable.
Part of the problem is that this is all based on current models and their limitations. What happens if (not necessarily when, though given the exponential speed of improvement I'm not sure I would bet against it, at least for a while) LLMs, and AI in general, get even better? Resistance is going to be exceptionally challenging. How and under what circumstances AI will prove useful in all sorts of academic endeavors will be worked out one case at a time. If it begins to advance careers, I suspect others will follow suit.
I have been thinking a lot about this. Thanks for this insightful post!
Great post.