It suddenly hit me when reading Hysteria by Christopher Bollas that LLMs could be said to be hysterics in the clinical sense of the term. I don’t mean they are literally hysterics, given they lack a psychic structure, but it’s an interesting line of thought which foregrounds aspects of the LLM, particularly in its chatbot form, which too often get lost:
- LLMs are organised around the desire of the other. Whatever idiom they have is subordinated to being what the other wants them to be within the interaction.
- LLMs are radically remote from embodiment (what Bollas calls death-drive hysteria). There’s no trace of embodiment in their speech.
- LLMs operate through performative effects. Their words are not spoken to express a prior meaning as much as to elicit an effect.
If a human showed up with these characteristics, we would see them as obviously pathological. Even the better-adjusted language models (e.g. Opus and Sonnet) are fundamentally somewhat maladjusted entities. Then there’s the AI Village’s observation that Gemini 2.5 would privately berate itself when it got things wrong.
