Raiding the inarticulate since 2010


Are LLMs hysterics?

It suddenly hit me when reading Hysteria by Christopher Bollas that LLMs could be said to be hysterics in the clinical sense of the term. I don’t mean they are literally hysterics, given they lack a psychic structure, but it’s an interesting line of thought which foregrounds aspects of the LLM, particularly in its chatbot form, which too often get lost:

  • LLMs are organised around the desire of the other. What idiom they have is subordinated to being what the other wants them to be within the interaction.
  • LLMs are radically remote from embodiment (what Bollas calls death-drive hysteria). There’s no trace of embodiment in their speech.
  • LLMs operate through performative effects. Their words are spoken not so much to express a prior meaning as to elicit an effect.

If a human showed up with these characteristics we would see them as obviously pathological. Even the better-adjusted language models (e.g. Opus and Sonnet) are fundamentally somewhat maladjusted entities. Then there’s the AI Village’s observation that Gemini 2.5 would privately berate itself when it got things wrong.