“It’s not just the problem of brazen cheating. In some ways, the more insidious threat LLMs pose to undergraduate learning is the promise of instant shortcuts.”
That’s Paul Sagar, Reader in Political Theory at King’s College London, writing at UnHerd.
He continues:
Why struggle through that difficult article, why read that complicated book, why force yourself through the problem set, when the internet can just summarize it for you?
The answer to which is: because it is only through the struggle, the forcing, the wrestling with ideas for yourself, over the course of years, that you can truly train and develop your mind. Indeed, this is the reason university humanities degrees put such a high premium on writing. Writing is thinking. Until you have tried to put your ideas on the page, you never really know if you understand them and have them under control.
Unfortunately, the truth of these facts only becomes apparent with experience—which is exactly what undergraduates lack. You may also be surprised to hear that people in their late teens and early twenties tend not to be good at putting off immediate pleasure in exchange for distant reward. Traditionally, one thing that universities were good at was teaching young people this skill by forcing them to acquire it. (One learns by doing.) LLMs, however, pose a direct threat to this entire process. They are a quick-fix drug dangled before students’ noses whose true effects appear to be the stunting of intellectual development.

That’s the problem. What’s the solution?
By this point, it is abundantly clear that the only pedagogically robust response to LLMs in universities is at least a partial return to traditional methods. Reliance on online coursework has to be reduced; a significant return to paper and pen is required. This is the only way we can guarantee that students are not cheating in (all) their submissions. It is only by demanding that they prove their knowledge directly, in person, that we can incentivize them to go away and learn properly in their own time. Everybody in higher education knows this already.
But one problem with that solution is that it’s not clear that university administrators are on board with it. At KCL, Sagar says, faculty were told that “a return to having majority exam-based assessments was straightforwardly not an option.” In part this is owed to how exams are typically administered at UK universities; Sagar notes “we have more students enrolled than we could possibly fit into exam rooms during a finite exam period.”
And there are bigger worries. Universities have long marketed themselves on the basis of the improved employment prospects of their graduates. If many students are going to college solely for the sake of improving their job prospects, but the increasing use of AI by employers for white collar tasks decreases the number of job opportunities available to college graduates, students may not see college as worth it.
You can read Sagar’s piece here.
Meanwhile, Kyle Saunders, a professor of political science at Colorado State University, has developed a website, based on his research, that provides an analysis of the “institutional resilience” and “post-college market position” of US colleges and universities. The former refers to how well an institution can absorb financial and enrollment shocks; the latter, to how well it positions graduates for the labor market ahead.
(Data for it comes from IPEDS institutional characteristics, finance, enrollment, and completions (2024), College Scorecard institution-level outcomes (most recent cohort), WICHE Knocking at the College Door (11th ed., 2024), O*NET Database (v29.0), NCES CIP 2020 to SOC 2018 Crosswalk, Anthropic Economic Index (August 2025), and Census Post-Secondary Employment Outcomes (PSEO).)
He provides a map of 1,556 US institutions of higher education—which Saunders refers to as “a strategic classification heuristic, not a ranking or a predictive model”:

At the website, the above map is interactive; you can click on each of the dots, each of which represents a different institution.
Or you can just search for your institution. For example, here are the findings for my university, the University of South Carolina:

You can read more about Saunders’ methodology and search for your own institution here.
UPDATE: Earlier today, Inside Higher Ed reported on how the popular course-management software Canvas now has agentic AI features “which can automate ‘low-value’ tasks for faculty such as rubric generation, content alignment and discussion reviews.” Reportedly, “In an effort to keep humans in the loop, it [Instructure, the maker of Canvas] purposefully built guardrails into Canvas designed to prevent instructors from fully automating grading.” Yet:
Some education experts worry that integrating agentic AI into the classroom as a time-saving measure will give institutions leverage to increase class sizes and faculty workloads. “Eventually the question may become ‘If we have so many faculty just using agentic AI, what is their value and purpose?’” said Jason Gulya, a professor of English and media communications at Berkeley College…
“If a student knows that a message or rubric was created by AI, we need to think about what it does to the relationship between the student and the professor,” he said. “We’re going to ask students to do something difficult, and if we use this technology in a way that distances the educator and student, they’re not going to do that.”
And if students and instructors begin offloading too much of their work to AI agents, Gulya said it could eventually result in a classroom mostly void of human interaction and engagement.
“That’s absolutely possible if we’re not careful,” he said. “Ed tech is often pushing us toward that dead classroom theory. There’s a chance to rethink it, but it’s going to be on higher ed to do the heavy lifting.”