A trend I’m noticing in the online critical discourse about LLMs is increasingly vitriolic accounts of ‘AI boosters’. Consider this recent instance from Audrey Watters, whose work I’m otherwise a huge fan of:
Ed’s piece is titled “How to Argue with an AI Booster,” but honestly (and contrary to what some people seem to believe about me), I’m not interested in arguing with these people. Frankly I don’t think there’s anything that one can say to change their minds. It’s like arguing with addicts or cultists — what’s the point?! Boosters will hear none of it — no surprise, since they’re spending their days basking in the sycophancy and comfort of their machine-oracles.
“Addicts or cultists”… I’ll just leave that line to sit there. This is probably the most explicit example I’ve encountered, but I’ve seen increasing amounts of this. It was one of many reasons I got sick of Bluesky and deactivated my account. Ed Zitron offers a quite specific account of what constitutes a booster:
So, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I’ve found somebody to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself.
No, the AI booster is symbolically aligned with generative AI. They are fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat they can find, their Sundays living and dying by the success of the team, except even fans of the Dallas Cowboys have a tighter grasp on reality.
However, my fear is that distinctions are getting flattened here, so that ‘AI booster’ will start to slide into covering anyone who doesn’t entirely share my critique of LLMs, or even anyone who willingly uses LLMs. There’s a terminally online character to the definition (many of Zitron’s points ultimately concern how people treat him on social media) which suggests how these fault lines are inflected through the argumentative dynamics of social media. I’m sympathetic to Zitron’s post at some points, but at others it feels one step away from “how to TOTALLY DESTROY AI boosters” in the worst YouTube style. I think he’s explicitly drawing a distinction in which ‘AI boosters’ are a specific group, but he describes how you recognise a booster in a way which has a much wider scope in practice.
