
Universities have already been transformed by generative AI

This Atlantic piece by Ian Bogost makes the argument I was trying to articulate earlier in the summer. This is how I put it at the time:

This means that universities need to treat generative AI as something that has happened. Not something that is happening or will happen. It’s not a change to prepare for or a tide we can hold back but rather a feature of our organisations that we need to understand and steer in constructive rather than destructive directions. My perception is that a surprisingly large number of academics are still locked into the sense that we’re in the early stages of a change, rather than coping with a shift that has already happened. We saw from yesterday’s deeply incremental update to GPT-5 that growth in the capacities of the frontier models is plateauing. The innovation we’ll see in the next couple of years will be at the level of software design and affordances enabled by engineering optimisation rather than a fundamental leap in what models can do.

This is how Bogost makes the argument:

Three years later, the AI transformation is just about complete. By the spring of 2024, almost two-thirds of Harvard undergrads were drawing on the tool at least once a week. In a British survey of full-time undergraduates from December, 92 percent reported using AI in some fashion. Forty percent agreed that “content created by generative AI would get a good grade in my subject,” and nearly one in five admitted that they’ve tested that idea directly, by using AI to complete their assignments. Such numbers will only rise in the year ahead.

Where I disagree with him is the claim that the transformation is complete. In fact I think this is quite a dangerous framing, for a number of reasons. Firstly, there’s a lack of clarity about what is and isn’t acceptable use of generative AI: the HEPI 2025 research found, for example, that no use of LLMs was endorsed as legitimate by more than two-thirds of students. Secondly, we’re seeing a hugely significant move from prompting-intensive to prompting-light approaches to models; the cognitive labour involved in using LLMs effectively is rapidly shrinking. Thirdly, post-training and software design are going to take over from model upgrades as the driving force of competition, which means that new functionality is going to emerge in unpredictable ways. Consider how distinct NotebookLM is from ChatGPT, and multiply that a few times over.

In this I would argue that LLMs have become ubiquitous without being normalised. What we can expect now are normalising pressures, as it becomes increasingly untenable to imagine either that we can critique models out of existence or prohibit their use in any straightforward way. I can hear some readers groan at this ‘AI realism’, but I’ve been saying for three years that, with models marketed directly to consumers by some of the most powerful companies in history, whose financial fates depend on keeping this bubble inflated, academics weren’t going to be able to stop the spread of LLMs. Yes, there’s a risk of self-fulfilling prophecy if you start from a position of defeatism, but I also think this was always an accurate empirical assessment of the balance of power involved. Honestly, I’m also not sure we should have stopped it even if we could. Imagine, for example, if Wikipedia had been invented by Meta as a commercial product selling subscriptions to universities. I would have thought the educational possibilities outweighed concerns about the model of commercialisation, though I would have felt conflicted about it. It’s far from a perfect analogy but it’s an interesting thought experiment.

The place where I entirely agree with Bogost is that we need to respond to the rapid normalisation of LLMs amongst our students. That response has not yet happened, and unless we find a way to engage with them constructively and proactively about the everyday reality of model use, we are going to lose any capacity to steer and influence this normalisation:

“I cannot think that in this day and age that there is a student who is not using it,” Vasilis Theoharakis, a strategic-marketing professor at the Cranfield School of Management who has done research on AI in the classroom, told me. That’s what I’m seeing in the classes that I teach and hearing from the students at my school: The technology is no longer just a curiosity or a way to cheat; it is a habit, as ubiquitous on campus as eating processed foods or scrolling social media. In the coming fall semester, this new reality will be undeniable. Higher education has been changed forever in the span of a single undergraduate career.

If we’re concerned about how students are using LLMs, we need to ask why they are inclined to use them in that way. What is it about their context, particularly their context as the particular kind of student they are, which inclines them towards this use? We also need to open the black box of practice, as I’ve been putting it in recent talks, in order to recognise the sheer variety of ways in which students are using LLMs. The evidence suggests that submitting entirely LLM-generated text for assignments is far from a widespread practice, but it is growing, rather inevitably, as the wider use of LLMs grows. We need to find a way to intervene in how students are thinking practically about their use of models, something which the weird combination of censoriousness and empirical incuriosity that has been dominant heretofore renders pretty much impossible. As Bogost points out, we have to help our students grapple with the temptation towards outsourcing which LLMs offer:

And like the other students I spoke with, he’s often in a rush. Wynter is a double major in educational studies and American-culture studies; he has also served as president of the Association of Black Students, and been a member of a student union and various other campus committees. Those roles sometimes feel more urgent than his classwork, he explained. If he does not attend to them, events won’t take place. “I really want to polish up all my skills and intellect during college,” he said. Even as he knows that AI can’t do the work as well, or in a way that will help him learn, “it’s always in the back of my mind: Well, AI can get this done in five seconds.”

This perfectly captures why I’m so worried about the coming year, given the evidence that use grew from a small majority to near-total in UK higher education over the last academic year. We won’t just see a continued expansion of use, we’ll see an intensification of use, as existing students find new ways of using LLMs in their work:

But my recent interviews with colleagues have led me to believe that, on the whole, faculty simply fail to grasp the immediacy of the problem. Many seem unaware of how utterly normal AI has become for students. For them, the coming year could provide a painful revelation.

There’s an obvious solution to the assessment challenges, as my colleague Drew Whitworth long ago persuaded me: switch to processual forms of assessment which decentre or dispense with the outcome-centric modes that are necessarily vulnerable to software producing outcomes in response to natural language requests. The problem is that doing processual assessment at scale is near impossible. I ran a 140-person unit with Drew last year, and at that scale process only works through digital mediation, which in turn means that the stages of the process become micro-outputs that can, in some cases, themselves be produced using LLMs. The solution isn’t available because of the scale on which we’re forced to teach and learn within the contemporary political economy of higher education. I like Bogost’s concluding argument that there’s a huge redesign exercise coming and the sooner we start, the sooner we get it over with:

The existence of these stressors puts higher ed at greater risk from AI. Now professors find themselves with even more demands than they anticipated and fewer ways to get them done. The best, and perhaps the only, way out of AI’s college takeover would be to embark on a redesign of classroom practice. But with so many other things to worry about, who has the time? In this way, professors face the same challenge as their students in the year ahead: A college education will be what they make of it too. At some point, everyone on campus will have to do the work.

But I also think it’s wrong, at least in the UK context. The problem isn’t LLMs. The problem is the chaotic way in which LLMs are diffusing, coupled with a system already being stretched towards breaking point. Redesign can only mitigate the problems because, ultimately, without a different funding model we’ll be working with staff:student ratios that can only compel automation rather than provide an occasion for human-centred design.
