

The embodied experience of writing

In his guidance about powerful writing, Elbow (1981: 340) suggests there are “ways to connect with thoughts as though they really matter” which can be drawn upon when writing becomes a mechanical and routine process. He offers exercises of the imagination, such as framing the idea as dangerous and writing counter-arguments to it, or imagining that you are the first person to have had this innovative thought. He stresses the role of the body in this process and invites writers to discover “which part of your body the thought wants to erupt through”.

What he’s ultimately suggesting is that experiencing our own thinking is something we can be better or worse at. It’s a skill we develop which, like all skills, needs practice to sustain itself. It can be difficult to exercise this skill when circumstances get in the way. As he puts it, “when you are tired, under pressure, scared, or distracted, it takes an act of special effort and self-management to get yourself to experience fully what you are writing about”. It’s far too easy to rush through a process we might otherwise inhabit, seeking to realise our goal as quickly as possible rather than experiencing it with curiosity and care. He advocates a deliberate act of attention when we realise we are writing in this way: “stop, concentrate your energy, and focus your attention wholeheartedly on your meaning – and do it so vigorously that you don’t have any energy or attention left over for worries or distractions” (Elbow 1981: 342).

This call for embodied attention feels both more urgent and more challenging in our current technological environment. The digital landscape of notifications, emails, and instant messages that fractures our attention makes Elbow’s advice simultaneously more necessary and more difficult to implement. And now, with the integration of AI writing tools, we face a new dimension to this challenge: what happens to this embodied connection to our thinking when machines can generate text that simulates thought?

I approach the topic as someone who has used conversational agents, particularly Claude and ChatGPT, on a daily basis for much of the last two years. They have become part of how I think, as well as how I write, with implications I’ve tried to explore through thinking and writing, in turn bringing me back into dialogue with these strange interlocutors. I’m confident I’m an outlier in how integrated these agents now are into my scholarship. For example, I know that in my immediate vicinity at work only a minority of colleagues have subscriptions to conversational agents, a reasonable proxy for regular use given the capacities a subscription affords (Kahn et al 2025). For all the sound and fury provoked by these developments, the uptake of this software by academics has been slower and more piecemeal than might have been expected. There are signs of constructive use, in which academics find creative solutions to existing problems, as well as clear indications of spiralling malpractice in fields like scholarly publishing. However, at the time of writing there is little broad agreement about what constitutes appropriate or inappropriate use from a scholarly perspective (Carrigan 2025: ch 8).

It feels to me like we’ve spent an enormous amount of time agonising about the implications of LLMs for higher education, without much real progress being made. LLMs are still being framed as a disruptive force intruding from outside, whether that’s one which ought to be resisted or welcomed in (Carrigan 2018). In the process we miss the real substance of how we integrate LLMs into the work undertaken within universities, or how we resist the ersatz forms of integration which ensue when individuals pick up a new technology in a chaotic and disorganised way. The point is well made in a much broader way by the philosopher of technology Shannon Vallor (2024: loc 109):

“AI does not threaten us as a future successor to humans. It is not an external enemy encroaching upon our territory. It threatens us from within our humanity. In the words of a well-worn horror film trope: ‘the call is coming from inside the house.’ This makes a great difference to how we must respond to the threat that AI poses. For the threat is not a machine that stands apart from us. We can’t fight AI without fighting ourselves. The harms to people and to society that may follow from our relationship to ‘intelligent’ machines are not foreign assaults, but symptoms of an internal disorder: a fully human pathology that requires collective self-examination, critique, and most of all, healing.”

The technology is widely accessible. At the time of writing, a number of frontier models are freely available upon creation of an account, even if they impose rate limits which constrain the length of a conversation a user can have with them. It is difficult to imagine how awareness of the technology could be higher, given the enthusiastic project to inflate a financial bubble around LLMs to which much of the media has willingly acquiesced.

What’s missing from many of these discussions is precisely the embodied dimension of writing that Elbow describes: the way in which writing is not merely the production of text but an experience through which we connect with our thoughts. The question isn’t simply whether academics will use AI tools or not, but how these tools might transform our relationship to our own thinking process.

When we focus exclusively on outputs (whether AI-generated text is detectable, whether it meets academic standards, whether it constitutes plagiarism), we miss the more profound question of what happens to our experience of thinking when writing becomes mediated through these tools. Does something change in how we “connect with thoughts as though they really matter” when those thoughts can be partially articulated by a machine?

If we identify thinking with writing then it can appear that LLMs threaten our capacity to think to the extent that they lead us to outsource our writing. If the machine is writing for us then the machine is thinking for us. The problem with such a view is that it fails to grapple with the varied ways in which academics can engage with LLMs, as well as the role of writing in that engagement. If someone is using these models thoughtlessly, the problem is that they are writing thoughtlessly, not that they have stopped writing.