An initial note as I prepare this one-hour briefing session for later this year:
- The capabilities of generative AI have grown significantly in just over three years since ChatGPT’s launch in November 2022. These aren’t the tentative, error-prone systems we saw initially. They’re increasingly fluent and capable.
- The uptake has been just as rapid. ChatGPT now has 800 million weekly active users globally. This isn’t a technology we’re preparing for — it’s already mainstream.
- The technology is still developing, but differently. We’re seeing less dramatic improvement in the underlying models and more emphasis on software integration and new functionality. The focus has shifted from making models smarter to making them more embedded in how we work.
- There’s strong evidence that most students in UK universities are now using generative AI at least some of the time. HEPI’s 2025 data shows 92% of undergraduates have used these tools, with 88% using them in assessment.
- There are many ways students can use AI which aren’t problematic. It’s crucial we don’t assume that any use is cheating. The spectrum runs from using LLMs to explain concepts through to wholesale outsourcing of intellectual work. These aren’t the same thing.
- When use is problematic, we need to understand why rather than simply blaming the student or the technology. Unclear guidance, impossible workloads, and the pressures of paid work commitments all shape how students engage with these tools. This is a sociological phenomenon, not just a moral one.
- There’s a need for teaching teams and disciplinary communities to guide students about what university policy means in practice within their specific context. What counts as legitimate use differs between philosophy and computing, between first-year and doctoral work. One-size-fits-all policy fails.
- This means learning to have conversations with students in which we acknowledge our own uncertainty and anxieties about these issues. We can’t perform certainty we don’t have. The method is honest dialogue, not prescription.
- There are going to be significant changes over the next couple of years as firms come under pressure to make their offerings profitable. The current pricing is a loss leader; it can't last. We should expect the tools to change, possibly to degrade, in ways that will affect how viable they are for institutional use.
- There's substantial growth in AI companions and other applications designed to foster attachment behaviours. We're likely to see this increasingly among our students. We can't assume AI tools are just tools for everyone; for some students, they're relationships. This changes the pastoral and pedagogical challenge we're facing.
This text was generated by Claude 4.5 Sonnet from initial bullet points, drawing on a knowledge base of all my published writing on generative AI.
