
I’m pleased to announce that my new book, Generative AI for Academics, is being published by Sage next week. The book maps how Large Language Models (LLMs) are reshaping scholarship and argues that we need thoughtful approaches before institutional pressures force our hand.
Drawing on a year of experimentation, I explore how LLMs like ChatGPT and Claude can support scholarly thinking – not as databases to query but as interlocutors who help us articulate and refine ideas. The core argument is that these tools can enrich scholarship if we approach them as partners in dialogue rather than mechanisms for automating academic labor.
The challenge isn’t just about individual practice. If academics embrace LLMs primarily to increase outputs, we risk intensifying problematic trends in academic publishing: the metrics-driven university meets automated writing, a combination that deserves careful scrutiny.
Many academics are already using LLMs, often without discussing their practices openly. We urgently need dialogue about emerging practices to develop professional norms before patterns of use become entrenched. While the environmental costs and equity implications are significant, simply refusing to engage isn’t viable when these tools are being integrated into academic infrastructure.
The book provides a practical framework for navigating between unreflective enthusiasm and categorical rejection. It aims to support scholars in developing practices that enrich rather than diminish academic work, with clear guidance on using LLMs in careful and scholarly ways.
