We urgently need to talk about the temptations of LLMs for academics

If we want to understand how academics use large language models (LLMs), we need to begin with the reality of the conditions most of us are working within.

This is a temptation I’ve experienced in my own work. I felt it strongly for the first time when struggling to complete a co-authored piece for an impending deadline. There was an element which I believed it was important to include in the article, but which my co-author felt much less strongly about.

I wasn’t happy with the content of the article, as it was missing discussion of a topic which I felt was hugely important, yet I was tired and distracted in exactly the way that makes writing difficult. I knew what I wanted to include but not how to include it. The words were not flowing, the deadline was approaching and I didn’t want to let my co-author down. I knew there was material on my blog which I couldn’t directly reproduce but which could easily provide inspiration for Claude to write passages matching my writing style. It was the first time I had seriously contemplated relying on machine writing to complete a formal publication. I could not see a satisfactory way of resolving my dilemma: I didn’t want to exclude this topic from the article, I didn’t want to let my co-author down, but I was also too sleep-deprived to write the required text that afternoon.

It strikes me in retrospect that I wouldn’t have contemplated including machine writing if I hadn’t been confident that Claude could match my style. In previous months I had experimented with giving it samples of my writing, asking it to characterise the style in bullet-point lists, then using these descriptions to refine a prompt to match how I write. I wasn’t certain, but it felt like Claude could match my writing in a way which others would likely find utterly plausible. This was initially an exploration of how subterranean machine writing could become, but in that moment of temptation I saw the consequences of this capacity for the first time. I wasn’t comfortable including machine writing that was declared to the reader, either explicitly in the text or tacitly by simply including passages written in a jarringly different style. Even if the publisher had been okay with this, which I hadn’t gone as far as to investigate, it would have felt like an abdication of my authoriality. I’m sure it would have made my co-author deeply uncomfortable as well. But if the machine writing wouldn’t be identifiable to anyone other than me? That was a different prospect, one which offered a way out of my dilemma. I could fill the gap with a short section, satisfying my intellectual requirement to cover the topic while also meeting the deadline for the article.
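
For anyone curious what that experiment involved in practice, here is a minimal sketch of the kind of two-step workflow I’m describing, assuming the Anthropic Python SDK. The model alias, file names and prompts are illustrative stand-ins rather than a record of the actual sessions.

```python
# A minimal sketch of the style-characterisation experiment described above,
# assuming the Anthropic Python SDK. Model alias, file names and prompts are
# illustrative placeholders, not a record of what was actually done.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("writing_sample.txt") as f:   # hypothetical sample of my own prose
    sample = f.read()
with open("blog_notes.txt") as f:       # hypothetical notes to be reworked
    notes = f.read()

def ask(prompt: str) -> str:
    """Send a single-turn prompt to Claude and return the text of the reply."""
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

# Step 1: ask for a bullet-point characterisation of the writing style.
style = ask(
    "Characterise the writing style of the following passage as a "
    f"bullet-point list of its distinctive features:\n\n{sample}"
)

# Step 2: fold that characterisation back into a prompt asking for new text
# in the same voice, drafted from existing notes.
draft = ask(
    "Here is a description of a writing style:\n"
    f"{style}\n\n"
    f"Rewrite the following notes as a short passage in that style:\n\n{notes}"
)

print(draft)
```

The striking thing is how little machinery this involves: two chained prompts are enough to make the style-matching experiment repeatable.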

What made it even more tempting was that this machine writing would have been expressing my own ideas. There was no sense of asking Claude to provide the ideas. I simply had the ideas in one form (notes on my blog) which I needed to translate into another form (a section of an article) but which I was not, in that moment, capable of acting on. In the end I couldn’t do it. I have rarely had such a vivid sense of the devil and angel on respective shoulders in a professional setting. I could see a practical case for acceding to the temptation, in that it would produce a better piece under the circumstances in which I found myself. But if I did, I felt I would inevitably do it again in similar situations in the future. Even with the best planning, a standard which none of us can consistently meet, there will always be circumstances where our writing responsibilities outstrip our present capacities. If we develop a comfort with leaning on machine writing in those situations, I suspect the category will expand and we will gradually find ourselves relying on it in situations which would once have felt merely challenging rather than impossible. It’s a retreat from the trouble of writing, one which is particularly tempting when that trouble feels insurmountable, but which has the capacity to subtly unpick the moral psychology through which writing comes to be meaningful and satisfying to us.

What’s at stake here isn’t just a question of research ethics or academic integrity in the formal sense. There’s something more fundamental about our relationship to the creative process itself. The constraints we face as writers (whether time, energy or our own cognitive limitations) create the conditions in which genuine intellectual work happens. Without that productive friction, something essential to scholarly identity may be lost.

The use of machine writing in knowledge production is still in its infancy and, even with detailed empirical investigation, there is a limit to how far we can answer these questions in relation to an issue which is developing so rapidly. In raising them I’m trying to highlight the questions rather than take a stance on the answers. The assumption that human authoriality underpins what we write in monographs, edited books and journals is so axiomatic that it is difficult at this stage to think through what knowledge production looks like when it can no longer be assumed. Explorations of the potential implications often oscillate between feeling mundane, preoccupied with minutiae around the edges of a practice which will otherwise feel unchanged, and feeling grandiose, making sweeping generalisations which tend to overstate the issues involved.

This is exactly why empirical investigation will be so crucial to stabilising our understanding of how academics are using machine writing, as well as what this use means for knowledge production. But rather than offering a comprehensive review of the fragmented, preprint-heavy literature at its current stage, what I’m trying to do is open up the conceptual issues involved, with a view to supporting academics in reflecting on their writing practice in relation to the rapidly developing possibilities which machine writing offers in their mundane working lives.

My suggestion is that difficulty is at the heart of how academics will tend to relate to the possibilities of machine writing. Conversational agents provide us with new ways of negotiating difficulties in the writing process. They can offer new perspectives on what we have written, help us elaborate upon what we are trying to say and provide detailed feedback of a form which would have previously required a human editor. The attempt to eliminate difficulty from the writing process will have downstream consequences for our own writing practice, as well as the broader systems through which (we hope) our writing makes an intellectual contribution.

The reason I’m focusing on the experience of joy in academic writing is not simply that this makes it less likely we will hear the siren song of machine writing in the first place. I will argue that if we rely on machine writing when confronted with difficulties, those experiences of joy are likely to become more elusive, perhaps even disappearing altogether from our writing lives. It is only through staying with these difficulties, even when doing so is uncomfortable and dispiriting, that we can make it through to the other side.

This isn’t to suggest we must reject these tools entirely. Rather, we might consider approaching them with the same deliberate intentionality that characterises thoughtful writing itself. Perhaps the question isn’t whether to use AI writing assistance, but when, how, and with what awareness of what we might be surrendering in the process. The most dangerous temptation may not be using these tools, but using them unconsciously, without reflecting on how they reshape not just what we produce but who we become as scholars through the process of producing it.

My concern is that the critical discourse around machine writing, while accurate in many respects, fails to create the space for these conversations about the practical reasoning of academics.