6 Comments
Chitownchill

I have been thinking a lot about this. Thanks for this insightful post!

Timothy Burke

A student might ask "why is all this embedded in what I need to use?", but if what we need now more than ever is critical thinking, the answer is not obviously "because it's useful, because we endorse your use of it, because it's inevitable or inescapable". In purely empirical terms, it would be false to say that we embedded it in those tools with a pedagogical purpose in mind: it's not our professional choice, even for those of us who think well of AI. It would be false to say that the companies embedding it have a clear use case in mind, or a clear understanding of what users will do with it, or that they are even thinking about user preference, any more than Mark Zuckerberg cares whether Facebook users want his vision of virtual reality to replace the current Facebook interface. I think if you have an answer better than "we have no choice but to accept what is being done to the infrastructure of the suite of instructional technology tools that undergraduates have to use", that's what is called for in this situation rather than "surrender to the inevitable".

Nicole Dyer

Great post.

Liam Kelley

Thank you for writing this!!

I completely agree with you that resisting AI or pretending it doesn't exist is not an option.

However, I can't see that a middle ground is an option either, as ultimately it depends on a certain type of morality that is unidentifiable and unenforceable.

You probably saw that the American Historical Association recently came out with some "guiding principles for artificial intelligence in history education."

https://www.historians.org/resource/guiding-principles-for-artificial-intelligence-in-history-education/

They are also promoting a middle ground approach, but boy oh boy, I don't see how on earth it will work. There is a matrix at the end of that page which is fascinating. This is essentially what it says:

We can use generative AI (I'll use "LLM" as it's shorter) to serve as a writing partner to help generate ideas, but we can't get the LLM to do the actual writing, even though, in helping us generate ideas, it can and will produce a lot of writing that could easily fit into our paper.

We can also get an LLM to produce a “starter bibliography,” but somehow we have to decide where this starter bibliography ends and the full bibliography begins.

As an LLM identifies works in our “starter bibliography,” we can use the LLM to summarize them, but then we have to be sure to go read them as well. Do we have to read every word? If not, is there a percentage that we have to read? This is not clear.

Then, with our ideas generated in dialogue with an LLM, and with our knowledge of the articles the LLM has summarized, we write the paper. After doing so, or in the process, we can show the paper to an LLM and ask it to add additional points.

Finally, we can get an LLM to sharpen the language of our writing and to format our footnotes.

Interestingly, the AHA doesn’t say anything about primary sources. Can we upload primary sources to an LLM and get it to analyze them for us? I am not sure what the AHA thinks about that.

-- OK, so that's the middle ground that the AHA calls for, but what's left??? And how would we ever know that students actually did the rest on their own?

I think this idea of a middle ground requires a certain morality that those of us who came of age before AI can understand, but I am not convinced that those coming up now will share it, or at least not for long.

One could argue that this plan by the AHA is not the best middle ground, but I really don't see any middle-ground approach working. It's going to be too forced and mechanical to say "OK, stop there! Don't go any further with AI. Now sit down and write on your own..."

To be clear, I don't have a solution. I'm looking for one, but I have yet to see one that I think will work. I don't like disagreeing with what you wrote, as I really appreciate that you are out there actually thinking through these issues, but I've been thinking about this one, and just can't see how it would work (at least not for long).

Mark Humphries

Hey Liam, thanks for the comment. Don't feel bad about disagreeing; that is a good thing: it's how discussions should happen and it's how we learn to move forward. I don't have a solution either, to be clear. Nor does anyone at the moment. I take your point that rigid guidelines fall apart too quickly, and that is because LLMs challenge a lot of the fundamental assumptions we make about authorship, originality, intelligence, analysis, and our own uniqueness as a species. What we are really asking is something like: to what degree do we accept that machines can co-author or co-create research? How do we attribute authorship and responsibility when using LLMs? My own take is that I will leave the philosophical work to someone better qualified than I. From a practical point of view, I would say that there should be two guiding principles:

1. The human is responsible. Whatever degree of co-authorship is involved, it is the human who will ultimately be associated with the content, its errors, omissions, problems, and successes. So in this context, sloppy LLM use should raise a major red flag, as it calls the worth of whatever the "thing" is into question. That is a cultural shift where we attach a different meaning to sloppy errors going forward.

2. LLM use must be acknowledged and clearly stated. We need to foster a culture where we are honest about using LLMs and the purpose for which they were used. This is essential for evaluating outputs.

In my own mind, these seem to be the two essential things. Yes, it does rely on a certain level of morality, ethics, and honesty on the part of the user, but that is true of any type of academic work. We have to assume that when faculty hire RAs, they credit them appropriately, that they are meticulous in their citations, and honest about their results. The replication crisis in science started long before LLMs. So I am not sure we need to reinvent the wheel, and I suspect that is where the AHA was going. I think their position was actually pretty moderate, all things considered.

Again, though, I would love to have my ideas completely reshaped by strong counterarguments. I feel that I can be convinced to go in any direction on these issues so long as we concede that LLMs are a thing that exists in the world and that we must co-exist with them in some way. So many of the counter-positions I have seen seem to imply that it is possible to live in a world in which LLMs don't exist, or that we can refuse to co-exist with them while also claiming to be authentic and engaged researchers, teaching students to think critically in the world. Pretending students can leave university unprepared to co-exist with AI and that all will be well for them on the job market and in their inner intellectual life is simply factually incorrect. Full stop.

So keep it coming, love the comments!

Stephen Fitzpatrick

Part of the problem is that this is all based on current models and their limitations. What happens if (not necessarily when, but given the exponential speed of improvement, I'm not sure I would bet against it, at least for a while) LLMs and AI in general get even better? Resistance is going to be exceptionally challenging. How and under what circumstances AI will be useful in all sorts of academic endeavors will be worked out one case at a time. If it begins to advance careers, I suspect others will follow suit.
