I argued in a keynote last December that a wave of automation in journal publishing is all but inevitable if academic authors use GAI to increase their rate of publication. There’s a bleak but realistic prospect of AI-written papers being AI-reviewed by journals before being AI-summarised by authors.
The science fiction magazine Clarkesworld recently closed submissions because they’re struggling to cope with the volume of GAI-written material they receive. It’s particularly interesting that GAI influencers are seen as driving the trend:
Clarke says they’ve seen this problem growing for a while, but they took the time to analyze the data before talking about it publicly. “The reason we’re getting these is a lot of the side-hustle community,” he says. “‘Make money using ChatGPT.’ They’re not science fiction writers—they’re not even writers, for the most part. They’re just people who are trying to make some money on some of these things, and they’re following people who make it sound like they know what they’re doing.” He adds that having seen some of the how-to videos in question, “There’s no way what they’re hawking is going to work.”
https://www.wired.com/story/sci-fi-story-submissions-generative-ai-problem/
Is the academic equivalent people writing guidance on how to use GAI to work more productively? Generative AI for Academics genuinely isn’t this. If anything, it’s a book-length argument about why you shouldn’t do this, intended to appeal to people who might otherwise be inclined to try. But I’m still worried about the prospects for scholarly publishing over the next few years, given the incentives attached to publication.
