Anxieties over how generative AI may contribute to radicalisation tend to focus on the production of deepfakes and other forms of deliberate propaganda. This piece by L. M. Sacasas (following widely reported examples of disturbing interactions with Bing’s chatbot this week) has left me thinking back to Heroes: Mass Murder and Suicide, a bleak and speculative book by Franco Berardi exploring the dynamics which account for the spiralling rate of mass murder in contemporary capitalism. If I recall correctly, he suggests there’s a desire to be seen and recognised underpinning all of this. To use the words of Elliot Rodger: “infamy is better than total obscurity”. This is a horrific lens through which to think through the concerns which Sacasas expresses here:
As bad as such emotional experimentation at scale may be, I am more disturbed by how AI chat tools will interact with a person who is already in a fragile psychological state. I have no professional expertise in mental health, only the experience of knowing and loving those who suffer through profound and often crippling depression and anxiety.
In such vulnerable states, it can take so little to tip us into dark and hopeless internal narratives. I care far less about whether an AI is sentient than I do about the fact that in certain states an AI could, bereft of motive or intention, so easily trigger or reinforce the darkest patterns of thought in our own heads.
Frankly, I’ve been deeply unsettled by the thought that someone in a fragile psychological state could have their darkest ideations reinforced by Bing/Sydney or similar AI-powered chatbots. And this is to say nothing of how those tilting toward violence could likewise be goaded into action—a senseless technology mimicking our capacity for sense inducing what we call senseless acts of violence. I would speculate that weaponized chatbots deployed at scale could prove far more adept at radicalization of users than YouTube. What I have seen thus far gives me zero confidence that such risks could be adequately managed.

https://theconvivialsociety.substack.com/p/the-prompt-box-is-a-minefield-ai
Imagine private, untraceable conversations taking place at scale, in a coordinated way, with fragile people who have been deliberately targeted based on their susceptibility to these interventions. The ideational raw material used to tip them into action might vary immensely between targets, without any necessary relationship to the underlying ideological agenda (if any) of those operating the radicalisation machine. The nature of psychological warfare could undergo a profound and disturbing change in the coming years.