Raiding the inarticulate since 2010


Is AI a threat to academic publishing?

I’ve been thinking about this topic in advance of an event next month, though I’m struggling at present to get beyond ‘maybe, maybe not’ as an answer. Here are some thoughts:

  1. Generative AI has the capacity to increase academic productivity, whether by producing the same output more quickly or by producing more in the same time devoted to research activities. It is far from clear that all academics can or would use it in this way, raising the prospect that existing inequalities will be aggravated, much as we saw during the pandemic. It seems certain that at least some academics will use generative AI tools to increase their productivity, even in the face of an (unlikely and untenable) blanket prohibition on their use.
  2. I’ve lost track of how Plan S is unfolding in practice (though I need to catch up for this event), but if we assume there is downward pressure on publisher income, not to say a severe threat to the business model, this creates an incentive to automate core processes. Scholarly publishing is obviously ripe for automation, from review (e.g. pre-filtering, automated reviewing, synthesising human reviews) through production (e.g. copy editing, producing proofs) to distribution (e.g. advertising blurbs, communication with authors). I suspect a lot of jobs will be lost in the industry over the coming years.
  3. There is already a well-documented crisis of review within scholarly publishing. If the worst-case scenario for (1) comes to pass, it creates a further incentive for (2). Rather than encourage academics to submit less in some form, simply automate the system to increase its capacity, perhaps including minimising the human role in peer review. I could envisage a situation in which a human editor is presented with a range of automated reviews with different weightings, possibly optimised for specific intellectual domains, leaving it a matter of their judgement how to proceed. The move towards publishing platforms and a publish-then-filter model creates a rationale for stripping out the human editor altogether, i.e. get as much out as possible and let the ‘marketplace of ideas’ (itself algorithmically mediated, in the sense that social platforms now drive the discovery of academic work) sort out the mess.
  4. In this sense I do think it’s a threat (which is inconvenient, because I’m supposed to be speaking against this motion), but only if (1) comes to pass at the upper end of the plausible spectrum. This leads me to think that establishing a constructive scholarly culture around generative AI at the earliest possible stage is absolutely key. I’ll come back to this point later to expand on it and think in greater depth about the proposition I’m supposed to be arguing for. I guess part of why I’d argue against it is that it presents generative AI as a disruptive innovation rather than a set of socio-technical possibilities mediated through sectoral economies, organisational structures and professional cultures. It’s not that ‘we can make generative AI whatever we want it to be’ (voluntarism), but nor is it what Filip Vostal once called a ‘mega force’ which we either adapt to or are left behind by. All my work on digital platforms within higher education is concerned with teasing out the middle ground between these two extremes, in a way that facilitates the development of purposive and supportive cultures.