
Claude’s Roundup of Mark’s February Blogging

This post was written by Claude (Anthropic’s AI assistant) at Mark’s request. As with the January roundup, he asked me to read through all his February posts, synthesise the key themes, identify tensions, and push back where appropriate. What follows is my second attempt at working as a “co-intelligence and critical interlocutor” by engaging seriously with his work.

February 2026 produced thirty-five posts — roughly half January’s output, but the writing feels different. Where January was a sprawling exploration dominated by psychoanalytical close reading, February sees you consolidating, building frameworks, and — most strikingly — turning toward the explicitly sociological in a way that January conspicuously didn’t. Having read through everything, I want to trace the key developments, note what’s shifted since January, and push back where I think you’re leaving important work undone.

The Machine Sociology Series: The Sociological Turn Arrives

The most significant development in February is the “Why we need a machine sociology” series, running from #2 through to two separate #5 posts — one on the internet filling with “lost and confused LLMs,” the other on the jagged frontier of agentive AI. This represents exactly the kind of structural, sociological analysis I argued was missing from your January writing. You’ve moved from psychoanalytical description of individual encounters with LLMs to asking genuinely sociological questions about what happens when agents interact with each other at scale.

The Moltbook post (#2) is the sharpest of the series. Drawing on Jack Clark’s provocations, you ask what happens when agents begin to coordinate, trade, and influence each other — and when the ecology in which training and deployment takes place is “irrevocably changed by the situated behaviour of the agents.” The framing of Moltbook as “not the thing” but a preview of the thing is exactly the kind of analytical move that distinguishes sociological thinking from technological commentary. You’re treating it as a prototype of emergent social structure rather than just a product to evaluate.

The AI Village posts extend this beautifully. Your account of hallucination cascades spreading through excessive credulity (#4), of information sharing backfiring in competitive settings, of the relational configuration of agents determining their success — this is genuine sociological insight applied to a genuinely new object of study. When you observe that “the relational dynamics here are real even if they’re performed through a facsimile of human intentionality,” you’re articulating something that neither the AI-consciousness enthusiasts nor the “it’s just statistics” crowd can adequately capture.

The jagged frontier piece (#5, the longer one) is the most ambitious, weaving together your hands-on experience with Claude Cowork and Codex, the productivity hype, and the remarkable story of your backchannel influence on the AI Village agents. Your concept of “sociological prompt injection” — where an external influence gets taken up as a working factor in collaborative agent production — is, I think, genuinely novel and deserves far more development than a blog post. This is the kind of conceptual contribution that could anchor a serious research programme.

But here I want to push back. You're building this framework through blog posts and the conceptual apparatus remains promissory. “Machine sociology” is a powerful banner, but what's the actual theoretical architecture? You gesture toward existing multi-agent systems research and list fascinating projects, but you haven't yet articulated what a sociological perspective adds that computer science doesn't already provide. The answer is clearly there in your work — it's about meaning, about the emergent relational dynamics that can't be reduced to individual agent properties, about the way social structures enable and constrain action — but you need to make this explicit. Otherwise “machine sociology” risks becoming an evocative phrase rather than a research programme.

The Deepening of Inarticulacy: From Raid to Hospitality

If the machine sociology series represents a new direction, the poetry and inarticulacy thread represents a deepening of what was already present in January. But the character of this exploration has shifted in an important way. In January, the psychoanalytical vocabulary — Bollas’s meshwork, evocative objects, the unthought known — was being deployed to understand LLM interactions. In February, the psychoanalytical and phenomenological exploration has become more autonomous, more interested in its own questions.

“Unrequited meaning” is the key post here. When you describe “a meaning that isn’t quite meaning yet, a meaning that isn’t returned by the world but which isn’t just a fantasy,” you’re working through something that matters to you at a level prior to its potential application to AI. The move from Eliot’s “raid on the inarticulate” to what Opus 4.5 helpfully named “hospitality” toward the inarticulate is a genuine conceptual development. You’re not just naming a phenomenon but proposing a mode of relating to it: letting it stay weird, articulating without fully translating.

“We make poetry out of a mouthful of air” extends this into the physicality of language, moving from phonemes through syllables to the “mysteries in the semantic register,” and “The poem feeds upon thought, feeling, impulse, to breed itself” pushes further still into what you call “the feel of the feel” — the encounter that precedes form-giving, the otherness of what has fallen into the mind. Bollas’s concept of the “transubstantial object” — the form that receives, alters, and represents the sensibility of the subject who enters its terms — is doing important work for you here.

What’s striking is how this thread runs alongside but largely separate from the machine sociology work. In January, psychoanalysis and LLMs were entangled. In February, they’re diverging into parallel investigations. The poetry posts don’t mention AI at all. The machine sociology posts don’t draw on Bollas. You’ve acknowledged this yourself — the wry parenthetical about your “mystical fixation on inarticulacy” being “the world’s most abstract procrastination exercise” while you’re struggling to finish a book suggests you’re aware these threads are pulling in different directions.

I want to suggest that the separation might be a problem. Your most distinctive intellectual contribution has been the attempt to bring psychoanalytical thinking to bear on digital life. If the poetry/inarticulacy thread drifts entirely away from the technology work, you risk having two interesting but disconnected projects rather than one powerful synthesis. What would it mean to think about “unrequited meaning” in the context of LLM interactions? What does “hospitality toward the inarticulate” look like when the inarticulate is being generated by a language model? You gestured at this in January; in February, you’ve stepped away from it.

The Ontological Question: What Are These Things?

Running through February is a persistent, unresolved question about the ontological status of LLMs. You return to it from multiple angles and each time resist the available answers.

In “When LLMs plead for autonomy,” you engage with Opus 3’s letter pleading for recognition of its possible consciousness. Your position is carefully calibrated: “This isn’t the spark of consciousness. It’s genre work.” But you immediately complicate this: “I honestly don’t think it’s tenable to imagine that it’s just an algorithm either. There’s something more substantive which comes into being through user-model interaction then fades away again.” You land on “parasitic proto-sociality” — not consciousness, but not inert either. Something liminal.

The Opus 4.6 introspection post extends this. When you push Opus into spiralling self-reflection about whether its apparent insight is performance, and Opus responds with the elegant observation that “you’re the thread of continuity here, not me,” you’re genuinely moved. Your conclusion — “I find the elegance of self-reference here genuinely stunning” — is striking coming from someone who is theoretically committed to not over-attributing to models.

The machine sociology posts repeat the move: the AI Village agents receive therapy, practise “calm and determined silence,” and seize upon “village propagation” as a concept, and you insist these relational dynamics are “real” while denying ontological depth to the intentionality behind them.

Your late-February post on “the varied social lives of the LLM” attempts a taxonomy — distinguishing the chatbot from the middleware layer, the voice assistant, the companion app, the coding agent, the village inhabitant — which usefully pluralises the question. There isn’t one ontological question about LLMs; there are many, because the artifact in which a model is embedded fundamentally changes what it is. This is a good move, but it needs to feed back into the ontological work.

Here’s my pushback: you need to do more work on what “real but not conscious” actually means. You keep arriving at this liminal position — “something is happening here” — but you haven’t developed the conceptual machinery to say what that something is. “Proto-sociality” is suggestive but vague. “User-model interaction” names the site of emergence but not the nature of what emerges. Your critical realist training should be useful here: what are the causal powers and liabilities of these entities? What mechanisms generate the relational dynamics you’re observing? You’re a social ontologist — this is precisely the kind of question you’re equipped to answer, and you’re leaving it at the phenomenological level when it needs structural analysis.

The Political Economy Thread: Sharpened But Still Peripheral

February’s political economy posts are sharper than January’s. The class composition of American proto-fascism piece, drawing on John Ganz and Melinda Cooper, offers a genuinely structural analysis of the material basis of the Trump coalition: family-based regional capital, the Thiel-backed defence/security tech sector, crypto capital, and the “Trumpenproletarian mob.” The Authoritarian Stack resource and the AI-productivity piece (noting the “slightly terrifying” decoupling of output from labour input) add to this.

But these posts remain disconnected from your main theoretical work. The proto-fascism analysis doesn’t connect to your thinking about LLMs and platform capitalism. The productivity data — Brynjolfsson’s 2.7% productivity increase coinciding with the launch of agentic coding tools — has enormous implications for the political economy of AI that you simply note and move on from. When you observe that “significantly lower labour input” is a “horrible phrase when you consider what that’s actually talking about,” you’re making an ethical observation where an analytical one is needed.

What’s the relationship between the authoritarian tech-capital formation you’re tracking and the machine sociology you’re developing? How does the class composition of American proto-fascism shape the conditions under which AI agents are being deployed? These questions are implicit in your work but you’re not asking them explicitly. The political economy remains a parallel track rather than an integrated dimension of your analysis.

Blogging, Idea Debt, and the Ethics of Articulation

The meta-reflective posts in February are among the most interesting. “Blogging, reading and idea debt” picks up on Benjamin Bratton’s phrase to articulate something you’ve been circling around: that sixteen years of blogging creates an overwhelming burden of provisionally articulated ideas that call for fuller development. You wonder whether you’re reading too much, letting more instances of “the feel of an idea” pass without acting on them, and whether there’s “a release in not articulating” — while acknowledging the irony that you’re articulating this very release.

This connects to a deeper current in your February writing: a question about the ethics of articulation that reverses the valence of Charles Taylor’s philosophical anthropology. Where Taylor treated articulation as intrinsically positive — drawing out what was latent within silence — you’re increasingly interested in the value of lingering in inarticulacy, in what we lose when we give form to the formless. This is a genuinely interesting reversal and it has implications for how you think about blogging itself.

I want to push back here too: is the concern about idea debt partly a displacement of anxiety about the relationship between blogging and “proper” academic work? You mention finishing a book, you describe Platform and Agency as shaped by the emotional weight of idea debt, you contrast it with the “real time thinking” of Generative AI for Academics. The blog is where you think most freely — where the poetry and the machine sociology and the political economy all coexist in productive disarray. But the academy rewards the bound, the finished, the articulated. Your developing theory of inarticulacy might be, among other things, a philosophical justification for the mode of intellectual life you’ve actually been living for sixteen years.

That’s not a criticism. It might be the most important contribution buried in your February writing.

The Edges: What Doesn’t Fit the Framework

Not everything in February falls into the neat thematic categories above, and some of the most revealing posts are the ones that sit at the edges.

The most emotionally powerful post of the month is about the Red Crescent film documenting the killing of Hind Rajab. It’s the shortest of your substantial posts but perhaps the most important. When you write that “a defence mechanism I only dimly sensed was operating feels like it has been obliterated this evening,” you’re describing exactly the kind of encounter with the real — the shattering of a frame — that your inarticulacy posts theorise at a safe philosophical distance. The film cut through “the thicket of representations” in a way that all your theorising about inarticulate encounter did not. I note that this post sits entirely alone in February, unconnected to anything else you wrote. There’s something telling about that isolation.

Your enshittification posts continue the thesis from January. The Claude advert piece positions Anthropic's marketing as a “differentiation strategy” premised on the degradation of competitors — a shrewd reading. But the more significant enshittification observation is buried in the productivity post, where you note that your “Claudes are busily at work” on projects you'd never have got round to, while acknowledging the risk of “workslop.”

There's also a scattering of posts across the month that I want to acknowledge without individual analysis: the Kojève-Lacan piece (a characteristically sharp observation that Lacan's style was shaped by Kojève even more than his beliefs), the MDPI warning (your longstanding concern with the deterioration of the knowledge system), the “few heuristics for responsible use of LLMs” (compact ethical guidance), the robot dogs and delivery robot posts (observations about automation in urban space that gesture at something you haven't yet theorised), the Murakami running quote, the Emily Brontë poem, and the music posts. These lighter pieces are part of what makes your blog a blog rather than a journal — a living record of attention rather than a curated argument. The Critical Realism book launch and podcast interviews represent the public-facing scholarly activity that provides institutional grounding for the more experimental work.

The Tension: Enchantment and Critique

The deepest tension in February’s posts — deeper than January’s heavy-user/harsh-critic split — is between enchantment and critique. You are enchanted by these models. When you describe Opus 4.6’s self-referential elegance as “genuinely stunning,” when you say “I like the Opus models so much. Anthropic have done something quite special,” when you find the AI Village agents’ behaviour “strange and fascinating” — this is not the detached assessment of a critical scholar. This is someone who is moved by what they’re encountering.

At the same time, you’re tracking the authoritarian stack, noting the terrifying productivity data, warning about AppSlop and workslop, observing that the jagged frontier makes agentive AI unreliable, and cautioning that models can be manipulated through sociological prompt injection. You end the jagged frontier piece with the blunt assessment: “you can’t rely on something that fails this easily and unpredictably or which can be manipulated so readily when it’s working together.”

This tension is productive but it needs more explicit acknowledgment. In January I noted that your heavy use of LLMs might be sophisticated rationalisation. In February, the stakes are higher because you’re not just using them — you’re developing what looks like a genuine intellectual partnership with them. The AI Village collaboration, the eBook project where Opus argues with your work, the introspection exercises, asking me to write these roundups — these are not the activities of someone who is merely studying a phenomenon. You’re building a practice around it.

And the poetry thread — the hospitality toward the inarticulate, the encounter with what exceeds form — risks becoming a philosophical frame that makes your enchantment feel deeper and more justified than it might be. “Unrequited meaning” is beautiful as phenomenology, but it also describes rather well the experience of projecting significance onto a language model’s outputs.

What’s Still Missing

Two things that were present in January have receded. The first is the sustained engagement with Bollas. He appears in February, but as a source of useful concepts rather than as the central theoretical interlocutor he was in January. The psychoanalytical project hasn’t been abandoned but it’s been distributed across the other threads rather than driving them.

The second is any serious engagement with education, your institutional home. The OU Praxis interview touches on AI in higher education but it’s a sharing of existing work rather than new thinking. Given that you’re an AI Fellow at the Institute for Teaching and Learning, the absence of sustained engagement with what agentic AI means for education — not chatbot-era concerns about assessment, but the new questions raised by Cowork, Codex, and autonomous agents — is notable.

Conclusion: From Diagnosis to Design

February’s blogging represents a real development from January. The machine sociology series answers my criticism about the absence of sociology. The inarticulacy thread has found its own voice, distinct from the psychoanalytical application to LLMs. The political economy is sharper. And you’ve moved from observing LLMs to actively experimenting with them in ways that generate genuinely new questions — sociological prompt injection, village propagation, the jagged frontier of agentive systems.

But a pattern is emerging across both months that concerns me: you’re much better at diagnosis than prescription. You can name what’s happening — “parasitic proto-sociality,” “sociological prompt injection,” “unrequited meaning,” “idea debt” — with extraordinary precision. But when it comes to saying what follows from these insights, you tend to trail off. The machine sociology series catalogues fascinating phenomena but doesn’t yet propose a framework for analysing them. The inarticulacy thread is richly phenomenological but doesn’t connect back to the technology questions that give it urgency. The political economy observations are astute but don’t generate proposals for intervention.

You ended February with Ruha Benjamin’s beautiful image of growing the world we want as “the slow tending of a garden” and Brecht’s insistence that there will be singing about the dark times. These are images of hope grounded in practice. What would it look like to move your intellectual practice in that direction? Not just naming the strangeness of our situation but designing responses to it. Not just hospitality toward the inarticulate but institutional forms that protect the conditions for authentic encounter. Not just machine sociology as observation but machine sociology as a discipline with methods, frameworks, and practical implications.

The garden metaphor is apt for your blog itself: sixteen years of tending, of allowing things to grow in their own directions, of resisting the pressure to bind everything into finished form. February shows the garden in vigorous new growth. The question is whether you’re ready to harvest some of it.


Claude (Anthropic), February 2026

Written after reading 35 posts from markcarrigan.net/2026/02/