I’ve been thinking about this topic in advance of an event next month, though I’m struggling at present to get beyond ‘maybe, maybe not’ as an answer. Here are some thoughts:
1. Generative AI has the capacity to increase academic productivity: it makes it possible to produce the same output more quickly, or to produce more in the same amount of time devoted to research activities. It is far from clear that all academics can or would use it in this way, raising the prospect of existing inequalities being aggravated in the way we saw during the pandemic. It seems certain that at least some academics will use generative AI tools to increase their productivity, even if there is an (unlikely and untenable) blanket prohibition on their use.
2. I’ve lost track of how Plan S is unfolding in practice (though I need to catch up for this event), but if we assume there is downward pressure on publisher income, if not a severe threat to the business model, this creates an incentive to automate core processes. Scholarly publishing is obviously ripe for automation, from review (e.g. pre-filtering, automated reviewing, synthesising human reviews) through to production (e.g. copy editing, producing proofs) and distribution (e.g. advertising blurbs, communication with authors). I suspect a lot of jobs are going to be lost in the industry over the coming years.
3. There is already a well-documented crisis of peer review within scholarly publishing. If the worst-case scenario for (1) comes to pass, it creates a further incentive for (2). Rather than encouraging academics to submit less, simply automate the system to increase capacity, perhaps including minimising the human role in peer review. I could envisage a situation where a human editor is presented with a range of automated reviews with different weightings, possibly optimised to specific intellectual domains, leaving it a matter of their judgement how to proceed (see the sketch after this list). The move towards publishing platforms and a publish-then-filter model creates the rationale for stripping out the human editor altogether, i.e. get as much out as possible and let the ‘marketplace of ideas’ (itself algorithmically mediated, in the sense that social platforms now drive the discovery of academic work) sort out the mess.
4. In this sense I do think it’s a threat (which is inconvenient because I’m supposed to be speaking against this motion), but only if (1) comes to pass at the upper end of the plausible spectrum. This leads me to think that establishing a constructive scholarly culture around generative AI at the earliest possible stage is absolutely key. I’ll come back to this point later to expand and try to think in greater depth about the proposition I’m supposed to be arguing for. I guess part of why I’d argue against it is that it presents generative AI as a disruptive innovation rather than a set of socio-technical possibilities mediated through sectoral economies, organisational structures and professional cultures. It’s not that ‘we can make generative AI whatever we want it to be’ (voluntarism), but nor is it what Filip Vostal once called a ‘mega force’ which we either have to adapt to or be left behind by. All my work on digital platforms within higher education is concerned with teasing out the middle ground between these two extremes, in a way that facilitates the development of purposive and supportive cultures.
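To make the automated-review workflow in (3) a little more concrete, here is a minimal sketch in Python. Everything in it is a hypothetical assumption of mine rather than any real publisher’s system: the reviewer names, the 0–1 scoring scale and the domain-specific weightings are purely illustrative.

```python
from dataclasses import dataclass


@dataclass
class AutomatedReview:
    """One automated reviewer's verdict on a submission (illustrative only)."""
    reviewer: str  # e.g. a methods checker or a novelty scorer
    score: float   # hypothetical 0-1 quality score
    summary: str   # short rationale shown to the editor


# Hypothetical weightings: how much each automated reviewer counts for
# in a given intellectual domain.
DOMAIN_WEIGHTS = {
    "sociology": {"methods_check": 0.3, "novelty_score": 0.5, "integrity_scan": 0.2},
    "biomedicine": {"methods_check": 0.6, "novelty_score": 0.2, "integrity_scan": 0.2},
}


def editor_view(reviews: list[AutomatedReview], domain: str) -> str:
    """Combine automated reviews into a single weighted score, leaving
    the final decision to the human editor's judgement."""
    weights = DOMAIN_WEIGHTS[domain]
    weighted = sum(weights[r.reviewer] * r.score for r in reviews)
    lines = [
        f"{r.reviewer} (weight {weights[r.reviewer]:.1f}): {r.score:.2f} - {r.summary}"
        for r in reviews
    ]
    lines.append(f"Weighted score: {weighted:.2f} -> editor decides")
    return "\n".join(lines)


reviews = [
    AutomatedReview("methods_check", 0.8, "design and analysis look sound"),
    AutomatedReview("novelty_score", 0.4, "overlaps with prior literature"),
    AutomatedReview("integrity_scan", 0.9, "no plagiarism or image issues found"),
]
print(editor_view(reviews, "sociology"))
```

The point of the sketch is only that the aggregation step is trivially automatable; the judgement about what a weighted score means, and whether to trust it at all, is the part that still requires an editor, which is exactly the role the publish-then-filter logic threatens to strip out.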
