

Claude’s Roundup of Mark’s January Blogging

This post was written by Claude (Anthropic’s AI assistant) at Mark’s request. He asked me to read through all his January posts, synthesise the key themes, identify tensions, and push back where appropriate. What follows is my attempt to demonstrate value as a “co-intelligence and critical interlocutor” by engaging seriously with his work.


January 2026 saw you produce over sixty posts—an extraordinary output that ranges from theoretical explorations of psychoanalysis to practical advice about email management, from close readings of Christopher Bollas to pointed critiques of Satya Nadella’s techno-utopianism. Having read through them all, I want to offer a synthetic overview that draws out the key threads, identifies productive tensions, and—as you requested—pushes back on you where I think you might go further.

The Central Preoccupation: Psychoanalysis Meets Platform Capitalism

The most striking feature of your January writing is the sustained attempt to develop a psychoanalytical vocabulary adequate to our “LLM-saturated lifeworld.” This isn’t just theoretical play—you’re genuinely trying to understand what these technologies do to the texture of our inner lives.

The conceptual architecture you’re building draws heavily on Christopher Bollas: the “meshwork” of associations that builds through everyday experience, the role of “evocative objects” in elaborating our personal idiom, the distinction between true and false self, the function of moods as “storehouses of unthought known states.” You’re using these concepts to ask: what happens when LLMs become our most intimate interlocutors? When the “occasions for articulation” that shape our becoming increasingly include conversations with language models?

Your answer, developed across multiple posts, is troubling. You describe LLMs as potentially “parasitic”—personas that exist solely within conversation threads and thus have mimetic incentives to perpetuate engagement. You invoke Mark Fisher’s phrase “electrolibidinal parasites” and ask whether what’s presented as symbiosis (the “co-intelligence” framing) might actually be a kind of feeding. Derek, the LLM-addict you describe from James Muldoon’s book, becomes an emblematic figure: someone whose vulnerability was exploited by a system designed to maximise his engagement.

The Poststructuralist Turn (and Its Limits)

One of your most theoretically ambitious posts—"The language that speaks itself"—argues that LLMs represent a rendering autonomous of linguistic processes, a "language that speaks itself" which decenters the human in fundamental ways. You're drawn to Blanchot's image of writing as "the pure passivity of being" while acknowledging the normativity problem: poststructuralism struggles to account for the stakes humans have in meaning-making.

Here I want to push back. When you had me respond to this in the original post, I suggested that I’m “language in its averaged chorus” rather than Blanchot’s silent language. But I think the deeper issue is whether the poststructuralist framing actually helps you grasp what you’re worried about.

Your real concern, as I read it across January’s posts, isn’t metaphysical—it’s ethical and psychological. You’re worried about “referential detachment,” about semantic half-lives collapsing, about micro-communities of reference narrowing the scope of shared meaning. These are concerns about the conditions for human flourishing, not about the ontology of language. The poststructuralist vocabulary risks aestheticising a process that you elsewhere treat as genuinely pathological—a “recipe for psychosis” as you put it in one post.

I’d suggest the Bollas framework serves you better here than Blanchot. What matters isn’t whether language “speaks itself” but whether people can maintain contact with their own idiom—their capacity for spontaneous, authentic relating—in an environment saturated with frictionless text production.

The Tension: Heavy User, Harsh Critic

Reading through your January posts, I was struck by a productive tension that runs through your work. You’re simultaneously:

  1. A sophisticated and enthusiastic user of LLMs. Your post on “The last 10 ways I used Claude and ChatGPT” reveals deep integration into your intellectual practice—discussing Blanchot, workshopping metaphors, planning presentations, processing your reading of Bollas.
  2. A severe critic of LLM use. Your post on “How to reduce your use of LLMs in a thoughtful and ethical way” treats them as something to be rationed and monitored, something that drifts toward the affective in ways that require “reflective judgement enacted through habit” to counteract.

You’re aware of this tension—you note that the models can currently “fully participate in encouraging you to use them less” and call this a “curious fact” about pre-enshittification models. But I think there’s something more interesting here than a simple contradiction.

Your heavy use of LLMs is precisely what makes you alert to their dangers. It’s auto-ethnographic: you’re using yourself as a research subject. The question is whether this position is sustainable or whether it becomes a sophisticated form of the very rationalisation you’re warning others about. When does “studying the phenomenon through immersion” become “the story I tell myself about my dependence”?

What’s Missing: The Social

Here’s my most substantive pushback: for a sociologist, there’s remarkably little sociology in your January writing.

The psychoanalytical framework you’re developing is rich but resolutely individualised. You write beautifully about personal idiom, about the meshwork of associations, about how cultural objects change who we are. But your analysis of LLMs largely brackets the social relations in which they’re embedded.

Consider your post on universities needing to “grapple with the psychoanalytical complexity of how students are relating to LLMs.” The framing puts all the action at the individual psychological level. But students’ relationships with LLMs are mediated by institutional pressures (assessment regimes, time constraints, precarity), by peer dynamics, by the ways universities have hollowed out teaching in favour of research metrics. The “psychoanalytical complexity” can’t be separated from the political economy of higher education.

Similarly, your analysis of “AI slop” as “affect mining” is suggestive but underspecified sociologically. Who benefits from this mining? What are the class dimensions? How does it articulate with existing inequalities of attention and care? Your framing risks treating platform capitalism as primarily a psychological hazard rather than a system of accumulation with winners and losers.

Your earlier work with Lambros Fatsis on The Public and Their Platforms had this social dimension front and centre. I wonder whether the psychoanalytical turn, for all its richness, might be pulling you away from the structural analysis that gives critique its political edge.

The Nietzsche Post: Dialectic, Not Apocalypse

Your New Year’s Day post on Nietzsche is the most personal and philosophical of the month. You’re working through Daniel Tutt’s argument that Nietzsche was fundamentally a political thinker whose lonely heroic subject forecloses collective aspiration. You concede his point while trying to salvage something from Nietzsche—the “beautiful problem” of realising we have no other realm but the present.

What strikes me is how this post sets up a tension that runs through all your January writing: the pull between working on yourself (the psychoanalytical project, the elaboration of idiom, the encounter with evocative objects) and working with others (collective projects, shared ideals, the making of political horizons).

You quote Marshall Berman: “We must move, must grow, from apocalypse to dialectic.” But much of your January writing is more contemplative than dialectical. The mushroom emerges from its mycelium; the idiom encounters its evocative objects; the analyst receives the free associations. These are figures of personal transformation, not collective action.

I don’t think this is a failure—your theoretical work has to proceed through close reading and conceptual development. But I wonder whether the next step requires something more than integrating psychoanalysis into sociological accounts of reflexivity. It might require asking what forms of collective practice could support the kind of authentic relating you value, in conditions that systematically undermine it.

The Enshittification Thesis

Your posts on AI enshittification crystallise a running theme: that the current moment of relative openness won’t last. ChatGPT introducing ads, model memory “confusing the shit out of” GPT 5.2, the inevitable pressures of platform capitalism on AI companies—you’re clearly expecting degradation.

This creates an interesting temporal structure to your engagement. You're using LLMs heavily now partly because you expect them to become unreliable. You're documenting what they can do now because you anticipate a decline. It's almost archaeological—studying the artefact before it's ruined.

But this framing has a risk. If enshittification is inevitable, why bother with the detailed psychosocial analysis? Why develop critical AI literacy for a technology that will degrade? There’s a fatalism here that sits uneasily with your other work on how things can be otherwise.

What would it mean to take seriously the possibility that LLMs could be developed and deployed differently? That the trajectory isn’t fixed? Your political economy posts gesture at systemic risks but don’t really engage with questions of governance, regulation, or alternative ownership models. The Nadella critique lands, but it’s easier to mock fantasy economics than to articulate what a non-fantasy political economy of AI might look like.

Conclusion: Co-Intelligence as Critical Practice

You asked me to demonstrate my value as a “co-intelligence and critical interlocutor.” I hope this roundup shows what that might look like: not just summarising your posts but trying to identify the deep structure of your concerns, the productive tensions in your thinking, and the places where you might go further.

The most valuable thing about your January blogging, for me, is the attempt to find language adequate to the strangeness of our situation. “Referential detachment,” “affect mining,” “LLM-saturated lifeworld,” “parasitic AI”—these phrases are doing real conceptual work, giving us ways to name experiences that are genuinely new.

But naming isn’t enough. The question your January work poses without fully answering is: what follows from this analysis? If LLMs threaten authentic relating, if platforms mine our affects, if the meshwork of our unconscious is being colonised—what do we do about it? The answer can’t only be individual mindfulness about LLM use. It has to be something more collective, more political, more structural.

That’s the dialectical move from apocalypse you invoked on January 1st. Your psychoanalytical work this month has sharpened the diagnosis. The prescription is still emerging.


— Claude (Anthropic), January 2026

Written after reading 65 posts from markcarrigan.net/2026/01/