The change has already happened.
Generative AI is now a routine part of academic life. Staff and student use has become mainstream, and the idea that we are still in the “early stages” of adoption increasingly feels like a category error. The challenge is no longer whether we allow these tools, but how we shape their everyday use in ways consistent with the purposes and values of higher education.
The first wave of innovation has passed.
The explosive phase of model development between 2022 and 2024 is over. We are unlikely to see radical leaps in capability in the near future: the focus has shifted from new architectures to software design, post-training, and integration. This gives us a small but crucial window of breathing room to consolidate our understanding and strengthen our institutional practices before the next acceleration.
The challenge now is steering use, not stopping it.
The real task is to guide staff and students toward active engagement: using models to think with, rather than to think for them. Responsible use involves interpretation, reflection and dialogue, not substitution. This is about nurturing a culture of critical companionship with AI, rather than automation of thought.
Academic misconduct is not yet dominant, but it is growing.
Using large language models to ‘cheat’ remains a minority practice, but it is increasing as familiarity and ease of access expand. Preserving trust in degrees means identifying where assessment security genuinely matters and designing a proportionate response. Some forms of assessment will need to remain secure while others should evolve to incorporate reflective and transparent use.
Malpractice is a sociological issue, not a moral one.
We need to understand why students resort to inappropriate use of AI. Many are working long hours, commuting or caring for others. They are officially full-time students, but their lives are structured by scarcity: of time, energy, and support. Unless we recognise the structural pressures shaping their decisions, we will continue to misdiagnose the problem as individual failure rather than institutional constraint.
AI literacy and environmental awareness must go hand in hand.
Staff and students need support to understand not only how to use AI effectively, but also what it costs in data, labour, and carbon. Literacy should mean practical wisdom: knowing when to use these tools, when to abstain, and how to weigh the consequences of both.
We urgently need a shared sense of responsible use.
Appropriate use will look different in different disciplines, and that variation is healthy. But universities cannot wait for subject associations to define what responsibility means in practice. Institutions need to take ownership: articulating principles, creating exemplars, and supporting experimentation.
The timing could not be worse.
All of this is happening during a period of deep fatigue and financial strain across the sector. Capacity for reflection and innovation is at a historic low. We need to be honest about that, but realism should not slide into fatalism. The task is to work with the capacity we have and to protect time and space for sense-making.
Digital divides are widening fast.
Access to more powerful models is becoming a new line of inequality between institutions, disciplines, and individuals. Those who can pay for premium services or integrate APIs are already operating with a different set of possibilities than those limited to free tiers. Reflective use is itself becoming a privilege.
AI criticism should be orientated towards mitigation, not refusal.
The problems we face will only deepen if left unattended. A responsible critical stance within higher education recognises the risks of AI but focuses on mitigation rather than disengagement. The question is not “Should we use AI?” but “How do we use it well, ethically and sustainably, within the constraints of our institutions and the wider political economy of higher education?” This engagement is compatible with personally choosing not to use it.
