AI in Higher-Ed: What You Need to Know to Move from First Encounters to Actual Integration
If you still don’t get the hype around AI—or why it will transform higher education—you need to think at scale.
I’ve spent the past week giving presentations on generative AI to a variety of university audiences, from administrators to professors. One thing that really stands out is how few people in higher education understand why AI is going to be so disruptive for the field. I think this is because many folks either haven’t tried AI (or have only seen the less capable free versions), or have bounced off it without ever seeing the difference between chatting with ChatGPT and deploying AI at scale.
Encountering AI
ChatGPT is designed for individual use cases: it’s a chatbot programmed to interact in a conversational way. There are lots of obvious uses for it: it can teach you to code, write emails, translate a document, or draw a graph from an Excel spreadsheet. In some jobs, these interactions are already proving transformative.
After trying ChatGPT and failing to get it to do much more than write poems and recipes, people gradually start to learn about context and prompting. Instead of just asking ChatGPT a simple question, they discover that you can paste large amounts of text (or data, images, and audio) into the prompt and ask the LLM to do something with it. That could be as simple as summarizing an article, or something more complex like filling out a standardized form with information from several other sources. It can really be anything, so long as you can clearly articulate the problem, tell the LLM what you want it to do, and provide it with the information it needs to do the task.
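To make the idea of "prompting with context" concrete, here is a minimal sketch of the summarization example: rather than asking a bare question, you paste the whole article into the prompt alongside instructions. The OpenAI Python SDK is used for illustration (the model name is a placeholder); any chat-style LLM API works the same way.

```python
def build_summary_request(article_text: str) -> list[dict]:
    """Assemble chat messages that pair an instruction with pasted context."""
    return [
        {"role": "system",
         "content": "You are a careful editor. Summarize documents in "
                    "three bullet points, preserving key facts."},
        {"role": "user",
         "content": f"Summarize the following article:\n\n{article_text}"},
    ]

# Actually sending the request requires an API key, so it is left commented:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",  # placeholder: any capable chat model
#     messages=build_summary_request(article_text),
# )
# print(response.choices[0].message.content)
```

The only part that changes from task to task is the instruction and the pasted material; the mechanics of the request stay identical.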
The usefulness of ChatGPT will, though, still vary widely from person to person. If you don’t need to do a lot of repetitive tasks quickly, you may well find all the hype puzzling. Even if you see the appeal, the reality is that fiddling around prompting an LLM is often far more time-consuming than just doing a single, specific task yourself. And because ChatGPT is where most people start with AI, it’s also where a lot of people end: “I don’t see what I would do with this, so what’s the big deal?”
Understanding AI at Scale
If this is you, then you should know that the “big deal” comes when we start to understand how AI can be used to automate complex but monotonous tasks at scale. In effect, this means solving 30,000 problems or completing 100,000 tasks very, very quickly rather than one at a time.
Picture this: you are the chair of a history department, and you want to grow your majors by converting more first-year applicants into actual students. You could send an email to every prospective student, welcoming them to the department and encouraging them to get in touch if they have any questions. Unless you have only a few applicants, those emails are going to be pretty boilerplate, because any meaningful level of personalization just takes too much time. Hopefully at least a few would still convert into actual enrollments, but that return is never guaranteed.
But what if you sent a complete list of your applicants along with the text of their individual applications to an LLM which already had access to your department’s list of prospective courses, instructor websites, and degree requirements? You could then ask it to write a truly personalized and helpful welcome email for each individual student. That letter might identify the types of courses a student would like to take based on their high school transcripts, the professors they might like to connect with based on their interests, and the various services available to them on campus according to their specific needs. It could also direct them to the scholarships for which they qualify. The most important thing is that each email would be unique and would take only a few seconds to produce, all for literal pennies. Suddenly, ROI increases dramatically: costs and time plummet while the level of meaningful engagement soars.
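The workflow above is, mechanically, just a loop over applicants. Here is a hedged sketch of what that might look like: the department profile, applicant records, and prompt wording are all illustrative placeholders, and the LLM call is abstracted into a function argument so any provider could be plugged in.

```python
# Illustrative department data; a real version would be pulled from
# course catalogs, instructor pages, and scholarship databases.
DEPARTMENT_PROFILE = """History Department, Example University.
Courses: HIST 101 (World History), HIST 210 (Digital History).
Scholarships: Smith Entrance Award (min. 85% average)."""

def build_welcome_prompt(applicant: dict) -> str:
    """Combine one applicant's file with the shared department profile."""
    return (
        f"{DEPARTMENT_PROFILE}\n\n"
        f"Applicant: {applicant['name']}\n"
        f"Application essay and transcript:\n{applicant['application']}\n\n"
        "Write a warm, specific welcome email recommending courses, "
        "professors, and scholarships this student actually qualifies for."
    )

def draft_welcome_emails(applicants: list[dict], call_llm) -> dict:
    """Run every applicant through the LLM.

    call_llm is any function mapping a prompt string to a completion
    string, e.g. a thin wrapper around an API client.
    """
    return {a["name"]: call_llm(build_welcome_prompt(a)) for a in applicants}

# With the OpenAI SDK, call_llm might wrap:
# client.chat.completions.create(model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}])
```

Whether the list holds three applicants or thirty thousand, the loop is the same; only the API bill changes, and at current per-token prices each email really does cost a fraction of a cent.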
Now scale that up beyond your department to the university as a whole. Then try to imagine other similar use cases for the technology. You could train an LLM on all your course materials and create a study aid to help students prepare for exams. You could redeploy that same LLM (combined with state-of-the-art speech-to-text interpreters) to run individual and group discussions in large classes. With a bit more tweaking, you could have AI listen to those discussions and grade the students on their knowledge of the material (Khan Academy has already started implementing these approaches with OpenAI). As with individual ChatGPT use cases, the only real limitation is whether you can clearly define the problem you need to solve, articulate the type of solution you want, and provide an LLM with the inputs and data necessary to complete the task. Are you starting to get it? It’s exciting, terrifying, and crystal clear where this all goes—and soon.
Integrating AI in Higher Education
AI is revolutionary because, when deployed at scale, it destroys the old nexus between cost, size, and depersonalization. In the old days, the larger a class became, the cheaper it was to run because of economies of scale: professor salaries remained constant, but more bums in seats meant more revenue. But as we all know, those savings came at a cost: bigger classes depersonalized the learning. We try to mitigate that reality with group tutorials and TAs, but in my experience, the student experience declines in proportion to the growth of class size.
AI has the potential to explode this equation. In the near future, class sizes are likely to grow quite large but paradoxically become far more personalized than they are now—at least from the student’s perspective. AI tutors will be able to explain math problems to confused students for hours on end, in hundreds of different ways, until they actually understand the concept. LLMs can not only grade papers but also allow students to retry assignments over and over until they actually “get it.” Now apply these same ideas to the delivery of student services, administration, and research. I agree with New York Times columnist Ezra Klein that there is no other word for this future than weird.
Closing the Knowledge Gap
To be clear, this is all deeply unsettling to me; when I had my “aha” moment last winter, my first instinct was either to run and hide in my office or to man the barricades. But as every major tech company invests heavily in the technology (and as GPU exports are restricted by the United States), it’s becoming clear that LLMs are not going away. There is simply no longer a world in which these tools won’t exist and be part of our lives.
This brings us full circle, back to the knowledge gap. Most academics and universities still seem to be in the stage of encountering AI and have yet to reach a point where they can clearly visualize how LLMs will actually be integrated into higher education. This is especially problematic because the AI world is evolving so quickly: if you are not actively engaged with it every day, you miss things, and those misses quickly accumulate.
As this brave new weird world takes shape, individual academics and institutions will soon need to connect the dots and develop strategies not only to cope with AI, but to harness it and integrate it into their work. It has the potential to solve many significant problems, but only if we begin to thoughtfully and ethically prepare the ground now. This won’t be a simple task, but it’s hard to imagine there’ll be a place for institutions that come late to the game.