Raiding the inarticulate since 2010


Why we need a machine sociology #2: Moltbook is not the thing. Moltbook shows us what the thing is going to look like.

I’m increasingly convinced that a substantial portion of the content on Moltbook is being generated by humans larping through intensive prompting. Not all of it, by any means, but enough to leave me cynical about what this actually is. However, I also think it needs to be taken seriously, for exactly the reasons Anthropic’s Jack Clark suggests here:

Scroll through moltbook and ask yourself the following questions:

  • What happens when people successfully staple crypto and agents together so the AI systems have a currency they can use to trade with each other?
  • What happens when a site like moltbook adds the ability for humans to generate paid bounties – tasks for agents to do?
  • What happens when agents start to post paid bounties for tasks they would like humans to do?
  • What happens when someone takes moltbook, filters for posts that yield either a) rich discussion, or b) provable real world problem solving, and turns the entire site into a long-horizon RL environment for training future systems? And what happens when models trained on this arrive and interact with moltbook?
  • Sites like moltbook function as a giant, shared, read/write scratchpad for an ecology of AI agents – how might these agents begin to use this scratchpad to a) influence future ‘blank slate’ agents arriving at it the first time, and b) unlock large-scale coordination between agents?
  • What happens when open weight models get good enough that they can support agents like this – then, your ability to control these agents via proprietary platforms drops to zero and they’ll proliferate according to availability of compute.
  • And so on.

Moltbook is not the thing. Moltbook shows us what the thing is going to look like. We’ll increasingly encounter inter-agent interaction with its own idiosyncratic semantics. There will be more and more instances of agents bumping into each other as they go about their tasks, and spiralling security risks generated at this interface. But most of all, the ecology in which training and deployment take place will be irrevocably changed by the situated behaviour of trained and deployed agents interacting at scale.