

The urgent need to understand scholarship as a process, prioritising ideas over outputs

If we fixate on outputs within universities, whether the assignments which students produce or the papers which academics write, we immediately confront a situation in which those outputs can easily be replicated by GAI. This could be seen in early debates about the challenge to assessment integrity within universities, and in the agonising over whether the use of GAI by academic authors should be framed as co-authorship or as the citation of an external source. These are not trivial questions but they rely on a clear distinction between human-generated text and synthetic text. There is a certain comfort to this view because it enables us to contrast the careful output of human scholars with the artificial outgrowths of machinic systems we were still learning to work with. However a sustained engagement with ChatGPT or Claude reveals that the real value of these conversational agents lies in a murky middle ground which troubles this dichotomy: hybrid text. These are outputs produced through interaction with conversational agents, in which the human and synthetic contributions become entangled to the point where the distinction can no longer be cleanly drawn.

If what matters is the output itself then we face a seemingly existential challenge in which the human role in producing these outputs is under threat. Even if we conceive of human-produced outputs as inherently superior to those produced by automated systems, the former cannot compete with the ease and speed which characterise the latter. This leaves us confronting a bleak vision of an ever more automated culture, in which human-led work becomes an exception to be consumed by connoisseurs who pay for the privilege, while the rest of humanity is swamped by the outputs of GAI. Even if not fully articulated, this vision is latent within the growing paranoia that most, if not all, student essays will soon be produced by GAI systems. It remains to be seen whether students will come to share a parallel belief that teaching materials, essay feedback and correspondence are being produced by generative AI while teaching staff focus on other aspects of their role. Once you begin to explore this vision in any depth it can feel like the coordinates of our collective life are breaking down, producing a sense of vertigo in which it becomes difficult to see what the future might hold. If this incipient anxiety is as pervasive within higher education as I suspect it is becoming, building on the fact that universities have already become what Hall (2014) called ‘anxiety machines’, the prospects for a more constructive and reflective approach to generative AI appear rather bleak. This is why we need urgently to rethink the categories through which we think and talk about GAI, in order to build a professional culture which supports the creative possibilities these technological developments have opened up while leaving us equipped to address the challenges already emerging.

There are few written outputs which ChatGPT 4 cannot produce with sufficient guidance and skilled prompting. The early dismissal of these automated creations as mediocre writing failed to grasp that, as with any system, the quality of the outputs reflects the quality of the inputs. The writing produced by a couple of sentences of off-the-cuff prompting is in a different category from what can be produced through a sustained conversation driven by iterated, multi-paragraph prompts, with a focus on correcting and optimising the initial responses. My initial instinct was to be cautiously sceptical about the claims made about GAI, but my experience of what could be produced through in-depth conversation was so startling that I rapidly found my initial position untenable. It is possible to produce remarkably sophisticated writing within a domain of expertise if care is taken to prompt and refine the responses you receive. This dispels the unrealistic expectation that a GAI system could produce expert-quality outputs from non-expert inputs, though of course the more expertise which filters through generative systems, the greater the possibility that the threshold required to produce these results will lower over time.

The view that GAI risks replacing human authors rests in part on an instructional view of how we relate to these systems. It conceives of the use of a tool like ChatGPT in terms of instructing the system to produce a certain kind of output which matches the user's expectations. This is reflected in the category itself, which frames GAI platforms, tools and services as producing an output from a prompt provided by the user. While this is an accurate description of the core functionality, it vastly limits the scope of how we can incorporate these systems into creative work. For the avoidance of doubt, I should make clear the real risk that those who manage, organise and fund creative work conceive of generative AI in these terms, creating the economic incentive to replace costly, recalcitrant humans with automated systems. Exactly how such an imperative would play out within higher education is a complex question which I consider in more detail in the final chapter. It is a real threat which is already being seen in other industries, such as scriptwriting and journalism, and I do not wish to downplay it. But in terms of our own individual use of these tools as scholars, I suggest the instructional view, intuitive though it might be, constitutes a significant block to incorporating these systems into creative work. The notion of a unidirectional relationship, in which a largely passive AI system executes commands to produce outputs which replace our own labour, obscures the complex and iterative interactions it is possible to have with these systems.

I suggest we might instead think of generative AI as an interlocutor in an ongoing dialogue. As Wegerif (2013) puts it, “generative AI can be a voice in an educational dialogue”, even if the contribution it can make is rather distinct from that of a human participant. This involves a more open and ongoing relationship with a particular AI system in which it is incorporated into the creative process, within which outputs like student essays or academic papers represent nodal points or outgrowths of an ongoing trajectory of intellectual engagement. In my previous work I have made a similar argument about the role of social media within scholarship: rather than seeing a practice like tweeting or blogging as an additional output to be produced, these practices can be used to develop ideas by sharing them at a provisional stage and reflecting on the responses (Carrigan 2019). In this sense the intellectual contribution which a system like ChatGPT can make to the development of ideas parallels that of a professional network which consolidates on platforms like Twitter or LinkedIn. The extent and significance of the contribution can vary, particularly if your intellectual interests and your online connections are not well matched, but even the act of articulating a nascent idea can be helpful in its own terms by providing another pathway through which your work unfolds. Exactly what a helpful dialogue looks like will vary immensely across people, topics and disciplines. The point of this book is not to instruct you in what your dialogue should look like but rather to provide strategies and tactics to help you clarify and realise this as a routine part of your working life.

There is growing convergence around the idea that generative AI necessitates moving beyond assignment-centric models in which the assessment of students depends on the production of a written assignment at the end of a course. These artefacts by their nature will be vulnerable to being produced wholesale through generative AI, particularly if they are anonymous and disconnected from the learning context as a matter of design. While there are many ways in which the ‘learning journey’ can be incorporated into assessment to mitigate this vulnerability, it is clear that we urgently need a shift in this direction if we want to prevent the existing model of assessment from crumbling in the face of technological change. If we continue to assess learning on the basis of anonymised and free-standing outputs without a reliable mechanism to confirm these were produced by the student in question, the rise of generative AI could contribute to a collapse of trust in credentials. But if we move from an output-orientated towards a processual model of assessment this ceases to be a problem, creating the possibility of a more rounded and representative approach to assessing students. What I am arguing for is a comparable transformation in how we approach scholarship, so that we recognise and account for the role of generative AI in the scholarly process rather than fixating on whether scholarly outputs are automatically generated.

This means reclaiming the backstage aspects of scholarly work, in the manner we see across figures like Abbott (2014), Becker (1986) and Mills (2000), as part of how we conceive of scholarship, rather than merely as a matter of how we train emerging scholars. Healy (2017: 771) points out how social media enables “a distinctive field of public conversation, exchange, and engagement” with “some of the quality of informal correspondence” but “not hidden in letters”. In parallel, I suggest the uptake of conversational agents like ChatGPT represents a distinctive field of private conversation, exchange and engagement, for which we need to develop new methodological and narrative approaches to accounting in our work, such as the generative statements discussed in the final section of this chapter. Much as proper citation of online artefacts remains a problem, with the progress made in legitimating the citing of blog posts paralleled by a lack of progress in ensuring scholars account for ideas encountered (and perhaps subsequently forgotten) through tweets and podcasts, how to account for the role of generative AI in our work is a complex question which will require institutional and networked work to establish over time. But it is not an intractable problem, whereas the existential anxiety over how to purify scholarly publications in the face of the ubiquity of generative AI probably is.

Adopting a processual approach to scholarship, either at the level of individual practice or scholarly norms, is obviously not a magic bullet which can immediately resolve the complex questions which the rise of generative AI raises for knowledge production. However it can provide a more solid foundation upon which to approach these issues, both in our own practice and in wider networks of collaboration. While social media and generative AI represent distinct issues for scholarship, there is nonetheless a parallel in how they introduce a range of new outputs into the academic field. Much as scholarship has begun to adapt in order to account for the role of blogs, tweets, podcasts and videos in producing and exchanging knowledge, we must now adapt to the immediacy with which generative systems can produce a whole range of outputs. If the challenge of social media was the range of additional outputs, the challenge of generative AI is the immediacy with which these outputs can be produced. If we remain fixated on the objects of our scholarship then we are liable to be swamped under these conditions, with our established parameters for identifying, exploring and contributing to knowledge decreasingly able to cope with the reality in which we find ourselves (Carrigan and Fatsis 2021: 87-89).

This shift from an object-orientated to a processual conception of scholarship furthermore helps us cope with some of the epistemic constraints built into existing models of automation. The problem is that, as Andrejevic (2020: 4) observes, “typically, automation results in the abstraction of a task away from the motivations and intentions in which it is embedded”. It treats tasks as discrete undertakings which can be isolated from the context in which a subject is inclined to perform such actions in the first place. To approach scholarship in terms of outputs, countable and comparable within metricised systems of evaluation, involves a similar mode of abstraction; rather than seeing a publication as a formalisation of a much wider programme of activity with roots deep in the author’s concerns and motivations, it treats publications as exchangeable commodities through which academics demonstrate their economic worth to universities, which in turn bundle these up in order to illustrate their research productivity in relation to national and international competitors. If we intend to use generative AI in satisfying and sustainable ways as academics, we need to challenge this removal of context at the level of the outputs we publish and the tasks we undertake to produce them.

What I describe in this section as a processual view of scholarship does not entail rejecting outputs but merely decentering them, so that they are seen as moments in a wider process of creative labour. We cannot account for how books, articles and chapters get produced without recognising the reading, fieldwork, presentations, events, conversations, note-taking and thinking which all take place upstream of these quantifiable outputs. These are recognised to varying degrees by universities (e.g. it is generally easier to make a case for the significance of conference attendance than for intellectual conversations with peers on social media) despite all being elements in the processes through which the discrete, measurable outputs of research assessment are produced. What Andrejevic (2020: 4) describes as “the motivations and intentions” in which a task is embedded are similarly crucial for understanding how a publication can be produced in the first place. I am certain I am not alone in having had the experience of trying to write something, only to realise that I am fundamentally not interested in the topic and that this is why I am finding the writing so difficult. In contrast, when there is a lived connection to intellectual curiosity, a sense of urgency or of service, the writing experience is transformed into an expressive act, often with an experience of flow which resists being subsumed into the category of a ‘task’ (Csikszentmihalyi 1990, 1998). Far from being a humanist residue that generative AI will wash away as automation advances, my argument is that this foundation in ‘motivation and intention’ is actually a crucial epistemic resource for realising the potential of these systems to produce intellectually meaningful and socially relevant outputs.

My suggestion is that thoughtful use of GAI has the potential to enrich our scholarship by prioritising ideas over the outputs in which we express them. I can see how the notion of ‘ideas’ might seem narrow and abstract to those whose work is more directly empirical than my own tends to be. But I mean ideas in the broadest sense here: what you are trying to express, as distinct from the texts in which you express it. My claim is that the ease with which GAI enables us to work with these texts can prioritise the content itself in a way that can be creatively enriching. It is easy to forget how much of our professional socialisation is preoccupied with the writing, editing and formatting of these documents; as Agger (2000) observes, we are inducted into particular genres of writing at a formative stage of our intellectual lives and then often spend the rest of our careers pushing up against their limitations. The use of GAI loosens the hold which these genres have over us, simply by radically minimising the amount of time we spend on this activity, rather than by calling the conventions themselves into question. This might seem abstract at this point, but I hope that the coming chapters on thinking, collaborating, communicating and engaging with generative AI will leave you with a practical sense of how you can focus more on the content of your work rather than the work itself.

However the thoughtless use of GAI could be a disaster for scholarship, higher education and society more widely. This is a theme I develop throughout the book, before arguing in the final chapter that developing a professional culture around GAI is a crucial task for academics if we want to avoid intensifying the problems which already plague the sector. In this sense thoughtful use of GAI is a necessary condition for ethical use of GAI, but there is much more to ethical use than this.
In the next chapter we explore the ethics and politics of GAI, in order to support the development of a practice which minimises harms and is cognisant of the wider processes in which the use of these tools and services is unavoidably embedded.