In every conversation about generative AI in higher education there comes a point where someone remarks that “everything is changing so fast”. In fact, that remark usually comes within a few minutes of the conversation starting. It’s become what one says in such conversations in order to convey one’s knowledge of the subject matter. It’s not so much that the observation is wrong as that it’s banal. The technology certainly is developing at a rapid pace. It’s easy to forget that OpenAI launched ChatGPT only three and a half years ago. In that time the capabilities of models have accumulated in ways that would have sounded utterly implausible back then. ChatGPT 5.4 Pro can now produce detailed reports drawing on thousands of online sources. Claude Opus 4.6 can produce interactive web applications in response to natural language prompts. Google Gemini 3 can produce videos which are indistinguishable from the real thing. That’s before we even get to the capacity of coding agents like Codex and Claude Code to quasi-autonomously design, develop and deploy software from a minimal initial prompt. There’s been a lot of change and it doesn’t appear to be slowing down.
In my experience, though, this observation tends to have a curious effect. Rather than leading the speaker to focus on the changes so that we can better understand them, it more often leads them to throw up their hands and retreat into what we already know. Not only is the observation a platitude, it also makes it curiously difficult to think more deeply about these changes and what they mean for us. The fact that lots of things are changing doesn’t mean we’re unable to make meaningful statements about what is coming. In that spirit, I want to examine some of the claims which circulate widely in these discussions but don’t receive the critical scrutiny they deserve. Each is repeated so widely as to feel self-evident, yet all are platitudes which are uninformative at best and misleading at worst. What they have in common is a failure to think seriously about the future of generative AI and how it might differ from the present:
- These tools are here to stay. No, they’re really not. The technology is here to stay, but how it finds expression in consumer and business-facing tools will continually mutate over the coming years. The broader category of ‘generative AI’ is very unlikely to disappear, but the idea that the future will simply be ChatGPT, only more powerful, is incredibly unhelpful.
- Our students need AI skills to prepare them for the workplace. What exactly are we training them for? What are the ‘AI skills’ we can teach now which will still be relevant in five years, let alone in twenty or thirty?
- AI will free us up to focus on what actually matters. When has this actually happened? Does anyone really think that productivity dividends, which are significantly overstated to begin with, won’t be absorbed by a system already under intense financial pressure? Historically, increases in productivity drive increases in standards, which can cancel them out over time.
What these claims share is a refusal to think seriously about the future, or the past, which leaves us trapped in a perpetual present where generative AI appears as a basically neutral force to which we must simply adapt. The rapid improvements we’re seeing in coding agents, coupled with geopolitical uncertainty and its economic implications for the AI industry, suggest we might be on the verge of another period of rapid technological change like the one that followed the initial release of ChatGPT. This poses huge problems for universities at a time when the sector is already struggling for a whole range of reasons that have nothing to do with technology. It is a moment that calls for serious thinking about serious problems. Abandoning some of the comforting platitudes we’ve been clinging to isn’t enough to find answers, but it is a necessary first step.
