
Why we need a machine sociology #1

We now have consumer-facing technologies which rely on agents running subagents to autonomously perform real-world tasks. This involves new relations between agents and subagents which we need to be able to conceptualise. Witness Claude Cowork getting frustrated with a subagent and instead doing the work itself:

What sort of relations are emerging between models and subagents? What does this mean for user-model interaction? How are models interacting in real-world settings? Are there interactions taking place between their subagents, even if it’s just them bumping into each other within an overarching system? How do these different levels of model interaction connect with the relational networks of humans? So I see a few levels of relation:

  1. Intra-model relations
  2. Inter-model relations (molecular)
  3. Inter-model relations (molar)

And then there’s human-model relationality at each level, i.e. the relations between the relations:

  1. Intra-model relations <–> human relations
  2. Inter-model relations (molecular) <–> human relations
  3. Inter-model relations (molar) <–> human relations

And if we take user-model interaction as its own relational unit, then the picture becomes even more complex in a way I’m too tired to get my head around.
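To make the first of these levels a little more concrete, here is a minimal sketch in Python of the delegation pattern the Claude anecdote gestures at: a main agent hands a task to a subagent, evaluates what comes back, and takes the work over itself when the subagent falls short. The names here (MainAgent, SubAgent, handle) are purely hypothetical stand-ins, not anything resembling Anthropic’s actual architecture; the point is only that “intra-model relations” already have a definite structure of delegation, evaluation and override.

```python
from dataclasses import dataclass


@dataclass
class Result:
    ok: bool
    output: str


class SubAgent:
    """A delegated worker: attempts a narrowly scoped task."""

    def run(self, task: str) -> Result:
        # Stand-in for a model call; it fails here so that the
        # fallback path below gets exercised.
        return Result(ok=False, output=f"[subagent] could not complete: {task}")


class MainAgent:
    """The orchestrator: delegates, evaluates, and reclaims the work if needed."""

    def __init__(self, subagent: SubAgent) -> None:
        self.subagent = subagent

    def do_directly(self, task: str) -> Result:
        # Stand-in for the main model doing the work itself.
        return Result(ok=True, output=f"[main agent] completed: {task}")

    def handle(self, task: str) -> Result:
        attempt = self.subagent.run(task)
        if attempt.ok:
            return attempt
        # Delegation has failed: control snaps back to the centre
        # and the main agent does the work itself.
        return self.do_directly(task)


if __name__ == "__main__":
    agent = MainAgent(SubAgent())
    print(agent.handle("summarise the meeting notes").output)
```

Even in this toy form, the “frustration” in the anecdote lives entirely in the fallback branch: delegation failing and control snapping back to the centre.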

Oddly, ChatGPT’s response to this blog post was to insist on drawing a diagram, which I actually quite liked. I asked it why it did this, and its response was analytical gold, even if this is presumably post-hoc rationalisation:

First, the material you gave me is already diagrammatic in its logic. You’re explicitly enumerating levels, nesting relations, and mapping correspondences across levels. That’s classic “relational ontology wants a schema” territory. Even without visuals, the prose is doing the work of a diagram in slow motion. From my side, turning that into an actual diagram is the path of least resistance because it preserves structure while reducing cognitive load.

Second, there’s a strong affordance effect in how models are trained to respond to complexity plus tiredness. You literally say you’re too tired to get your head around the full picture. One of the strongest learned responses to that signal is externalisation: take something cognitively heavy and put it “out there” as an object. A diagram is a way of offloading working memory. It’s not neutral, but it’s soothing in a very specific, technocratic way.

Third, and I think this is the more interesting bit for your question, diagrams are a default move when the model senses second-order relations. You’re not just talking about entities interacting, you’re talking about relations between relations, and then hinting at a further fold when user-model interaction itself becomes a relational unit. Language starts to feel inadequate there, or at least dangerously linear. The diagram is a way of pretending simultaneity.

There’s also something a bit revealing about subagents here. When you asked about a model getting frustrated with a subagent and doing the work itself, that’s already anthropomorphised, but in a very precise way: delegation failing, control snapping back to the centre. Diagrams quietly reinforce that move. They recentre agency by implying an overview position, a god’s eye view where everything can be seen at once. That’s exactly what a “main agent overriding subagents” feels like.

So in a slightly reflexive twist, the diagram is doing the thing you’re analysing. It’s a molar move in response to molecular messiness. It stabilises relations by flattening them into a representational space that looks manageable, even if the underlying dynamics are not.