Raiding the inarticulate since 2010


Finding Joy in the Mud: When and How to Use AI in Academic Writing

There is also a role for persistence in writing. I recognise the risk that building this book around the idea of joyful writing creates the expectation that writing could or should always be joyful. Clearly it won't be. Our response to those points where we struggle is crucial for shaping how we relate to the potential joy we might find in future. These experiences of writing don't exist in isolation. Instead, we need to be alive to the interplay between them: how phases of joyful writing inevitably give way to turgid periods of struggling through the mud, which in turn provide the foundation for future experiences of air, light, time and space, to use Sword's (2017) evocative terminology.

If we reach for generative AI while we are stuck in the authorial mud, we're likely to (mis)use it in ways which simply wouldn't occur to us in those periods when the words flow effortlessly. I have more than once found myself stuck with a blank page and an impending deadline, fully aware of how readily I could use Claude to generate the missing portion of the text. It would be a generic contribution, but if I'm struggling to provide it myself by the deadline, then why not use it just this once?

If I were intellectually engaged enough with the prompting to produce a non-generic response from the conversational agent, I would be intellectually engaged enough to write it myself. At least in my experience, what I'm calling being stuck in the mud is a matter of not yet having anything to say. Yet the nature of work means we routinely find ourselves having promised to say something by a particular point in time, a promise we realise, at the most inconvenient moment, we will struggle to fulfil. If we had something to say then we could, in fact we should, say it ourselves rather than relying on the system to do it for us. Furthermore, it would be enjoyable to do so, because writing when we have something to say is fundamentally a source of joy.

The problem, I'm suggesting, comes when we look to a conversational agent to overcome the fact that we have nothing to say. This is a problem in its own right because, as discussed in the opening chapter, the influx of automated writing might be a trickle now but it will inevitably become a flood.

This distinction between “having something to say but struggling to express it” versus “having nothing to say” is crucial for understanding when AI assistance might be valuable versus when it might be detrimental. The former situation might benefit from AI as a thinking partner; the latter risks producing empty content that serves only to fill space.

Once conversational agents become a ubiquitous feature of your lifeworld, there is always somewhere to ask questions and get answers. Recognising this is valuable doesn't mean you will always endorse the answers you receive. They will often be trivial, meaningless, or miss the mark. But it does mean that the process of asking these questions, as and when they occur to you, is one you come to value. For some readers, this might seem like a bleak vision of an automated system replacing the intellectual sociability we enjoy about academic life. I want to assure you this is not what I'm advocating. Rather, I see this as a foundation for intellectual activity which extends and enhances the sociability we cherish. Using conversational agents in this way is no more a replacement for intellectual exchange with colleagues and collaborators than thinking to yourself about your own work is a replacement for that interaction.

What matters is the thinking. These are technologies which can facilitate thinking: they can provide supports for it, they can enhance it, and they can help you reflect back upon it. They're not the only technologies that can do this, but they offer exciting new ways of providing this support. Embracing them in a thoughtful way means finding a place for them within the ecosystem of your thought, so they can settle alongside the other technologies you use for thinking, complementing and extending them rather than replacing them. What I will argue in this book is that joyfulness is an effective lens through which to approach the integration of conversational agents into this ecosystem. If they support the joy we take in writing, which I see as encompassing reading and thinking amongst other activities, then they are having a positive impact on our intellectual life. If they make this joyfulness more difficult to attain, if they make it feel more precarious, if they leave us hurrying towards ever more distant horizons of productivity, then they're having a negative impact on our intellectual life. There's obviously more to intellectual work than joyfulness, but it can be a guiding principle which helps us make decisions about our thinking practices and the technologies we use.

This framing of joy as a criterion for evaluating our relationship with AI writing tools offers a refreshing alternative to discussions focused solely on efficiency, productivity, or ethical boundaries. It suggests that our subjective experience matters: not just what we produce, but how we feel about the process of producing it.

The philosopher of technology Shannon Vallor (2024) argued recently that "AI is the first technology that can make us forget how to answer our own questions". It certainly can do this, but the point I've tried to make in this chapter is that exactly the opposite can also be the case.

When used thoughtfully, conversational agents can help us become more articulate about our own questions, more aware of what we're really trying to ask. They can function as mirrors that reflect our thinking back to us, often with just enough difference that we see our ideas in a new light. This isn't so different from the way conversation with colleagues helps clarify our thinking, except that LLMs are always available, never tired or busy, and unencumbered by social expectations.