The ugly pathologisation of ‘AI boosters’

A trend I’m noticing in the online critical discourse about LLMs is increasingly vitriolic accounts of ‘AI boosters’. Consider this recent instance from Audrey Watters, whose work I’m otherwise a huge fan of:

Ed’s piece is titled “How to Argue with an AI Booster,” but honestly (and contrary to what some people seem to believe about me), I’m not interested in arguing with these people. Frankly I don’t think there’s anything that one can say to change their minds. It’s like arguing with addicts or cultists — what’s the point?! Boosters will hear none of it — no surprise, since they’re spending their days basking in the sycophancy and comfort of their machine-oracles.

“Addicts or cultists”… I’ll just leave that line to sit there. This is probably the most explicit example I’ve encountered, but I’ve been seeing more and more of this rhetoric. It was one of many reasons I got sick of Bluesky and deactivated my account. Ed Zitron offers a quite specific account of what constitutes a booster:

So, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I’ve found somebody to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself.

No, the AI booster is symbolically aligned with generative AI. They are fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat they can find, their Sundays living and dying by the success of the team, except even fans of the Dallas Cowboys have a tighter grasp on reality.

My fear, however, is that distinctions are getting flattened here, so that ‘AI booster’ starts to slide into meaning anyone who doesn’t entirely share my critique of LLMs, or even anyone who willingly uses them. There’s a terminally online character to the definition (many of Zitron’s points ultimately concern how people respond to him on social media) which suggests how these fault lines are inflected through the argumentative dynamics of social media. I’m sympathetic to Zitron’s post at some points, but at others it feels one step away from “how to TOTALLY DESTROY AI boosters” in the worst YouTube style. He explicitly frames ‘AI boosters’ as a specific group, but the criteria he offers for recognising a booster have a much wider scope in practice.