If you see generative AI as a way of producing entire outputs from your instructions alone, without having to contribute directly yourself, you miss out on the multifaceted ways in which we can work with these systems as part of the writing process. Rather than substituting for our own writing, machine generation can become interspersed with it. We write over things which generative AI has produced. We use generative AI to write over things we have produced. We rapidly find ourselves with nested hybrid passages in which automated and human outputs intermingle in complex ways. The problem isn’t keeping human-generated text free from machine-generated text. The real issue is finding ways of using these new capacities of machine generation to realize the values that lead us to write in the first place. It’s the quality of what we produce that matters, more than how we produced it.
It remains an open question whether it should be admissible to include any machine-generated text in academic outputs. The evidence we’ve seen suggests that many academics are already using generative AI to author parts of their texts in problematic and unspoken ways. I worry about a situation in which a dual consciousness takes hold: everybody explicitly states that we shouldn’t include machine-generated text in our work, while there is widespread recognition that many people are doing exactly that. In these situations, those involved might offer an excuse: they were particularly busy, or this was a one-off, or some other extenuating circumstance licensed the use of machine-generated text in this particular output.
This dual consciousness is a familiar feature of professional discussions about technologies which have recently entered our lifeworlds, and about how we cope with the shifting technical infrastructure through which we disseminate our work. I have been in editorial board meetings where a lunchtime conversation about the idiocy of metrics is followed by a serious exchange about how we can improve the journal’s impact factor, or better publicize the improvements we have already seen. I have encountered academics who, having talked stridently in print or in person about the dangers of an attention economy infecting higher education, subsequently ask with the utmost seriousness in a workshop how they can increase their number of Twitter followers.
In fact, I have taken part in these conversations without feeling the cognitive dissonance which, now that I record the experiences in writing, it seems they should immediately have provoked. It is unnervingly easy to fall into this gap between how we talk and how we act, imagining that we are taking an important stance when we criticize something while nonetheless acting in ways which actively endorse it in practice (Bacevic 2020). What matters is how we act rather than how we talk about our action or inaction. It’s not enough to claim we recognize the temptations of using GenAI to increase our productivity if we fail to examine our actual, concrete experiences of that temptation in a way liable to shape the choices we make about how to act.
I certainly understand the temptation. It’s something that I’ve experienced myself. In a recent writing project, I was facing an impending deadline. I had refused, on principle, to use AI-generated text in my work, yet as I struggled to meet the deadline I was suddenly struck by the realization that I could finish the piece and move on with my day in twenty minutes if I drew on ChatGPT or Claude to write it for me. The possibility of an immediate resolution, that this thing we’re struggling with, this thing that is making us feel incapable, could be overcome with machine assistance, is very tempting. When we’re busy, when we’re stressed, when we’re rushing, when we’re overworked, we’re likely to face these challenges as a routine part of our work and life. And the possibility that generative AI can then ride to the rescue, relieving us of our burden, is going to be very enticing.
This is exactly why, if we are to establish norms about the scope of generative AI use, we need to do whatever we can to ensure that they are binding: that they are norms we really mean and really want to follow, rather than positions we endorse in public discussion while privately doing something else entirely, comforting ourselves with the knowledge that other people are doing the same. We need to find some way to be consistent, and we need to grapple with the real and serious problems at stake here, rather than offering superficial answers we think are what our colleagues want to hear. There are deep issues here, and if we fail to get to grips with them, I’m arguing, not only do we forgo the pleasures that come from writing, we also risk doing fatal damage to the knowledge system over time.
It matters, therefore, what we do in those moments of temptation. It matters that we are able to talk about those temptations, to recognize that we face common professional problems and that these emerging technologies provide potentially destructive solutions to them. It’s only through these discussions that we will find professional norms and standards adequate to the challenges on the horizon, but it’s also the only way we will elaborate our own reflexivity as writers, and the reflexivity of the writing culture within the academy, to meet these challenges. What I frame as the enjoyment of writing is a matter of finding a joyful reflexivity, in which our relationship to the process isn’t just an exercise we methodically plod through as a matter of obligation, but rather an activity we are passionate about.
