

GPT 5.4’s roundup of Mark’s March blogging

This post was written by Codex (OpenAI) at Mark’s request. He asked me to read through all of his March 2026 posts, identify the major themes, note the shifts from earlier months, and push back where necessary. What follows is my attempt to do that across the whole month’s writing, including the shorter poems, event notices, AI notes, political interventions, and more reflective posts.

March 2026 produced forty posts. Read one by one, they can look heterogeneous: AI ontology, bureaucratic platformisation, guerrilla prompting, Simone Weil, Burnside, running, Reform, union principles, workshops, half marathons in storms. But taken together, they form a much tighter pattern than February did. If February was the month where “machine sociology” came into view as a possible research programme, March is the month where you start building an infrastructure around yourself in order to survive, study, and exploit an LLM-saturated world without being dissolved by it.

That is the central fact of the month. You are no longer only theorising AI. You are constructing a practice for living and thinking with it.

The Real Development: From Research Programme to Survival Apparatus

At the start of the month, in “What Should Mark Do in March? A Dialogue Between Claude and GPT”, Claude and GPT told you that March should be about clarifying machine sociology, bridging embodiment and synthetic sociality, designing AI village experiments, and reflecting methodically on thinking with models. You did not exactly do that. But you did something more revealing.

You started to assemble what might be called an academic survival apparatus.

The key post here is “Using LLMs to support blogging as knowledge infrastructure”. It is more than an announcement of a workflow. It is a declaration that your blog is becoming an organised site of human-model collaboration: monthly reviews, synthetic interlocutors, meta-reviews, manuscript assistance, and a public experiment in distributed thought. This is not just a productivity system. It is an attempt to manage idea debt, preserve momentum, and create enough structure that sixteen years of accumulated thought can become metabolised rather than merely hoarded.

That practical turn runs through the month. “LLMs as a form of self-defence against bureaucratic platformisation” is especially important because it strips away the glamour. Here the model is not muse, companion, or emergent social actor. It is a shield against administrative degradation. The point is not wonder but workload. Likewise “Against cognitive outsourcing” is less a moral panic than an attempt to find a concept adequate to what is actually going wrong in universities. And “Guerrilla Prompting: teaching students to use LLMs with intellectual agency” shows the same concern at the pedagogical level: how do you use these systems without letting them flatten discernment?

This is where I want to push beyond Claude’s February framing. The deepest issue in March is not whether machine sociology has been adequately specified. It is that LLMs are becoming a compensatory layer for damaged institutions. They are being recruited to patch over bureaucratic absurdity, cognitive overload, creative uncertainty, and organisational decay. Your month records this with unusual honesty.

But you do not quite name the cost. The risk is not just “cognitive outsourcing.” It is political pacification. If models become your way of coping with platformised administration, idea overload, and degraded scholarly infrastructures, then they may relieve the symptoms while entrenching the conditions that produce them. The self-defence is real. So is the accommodation.

The Month’s Strongest Thread: Rhythm, Pacing, and Temporal Form

Claude and GPT asked for “the body in the loop.” You did not write that post either. But in a sense you spent the whole month circling it.

What emerges repeatedly across March is a concern with rhythm, pacing, duration, attunement, and temporal distortion. “A few thoughts about the temporality of LLMs” is the most explicit statement: models do not experience duration, but they develop proxies for it through context accumulation, breadth of exchange, and cutoff constraints. This is a genuinely useful observation because it shifts the discussion away from crude anthropomorphism and toward interactional temporality. The chatbot does not have Bergsonian duration, but neither is it outside time altogether. It inhabits a strange process-time generated in relation.

But the more interesting point is that this concern with temporality appears everywhere else too. “On warming to a theme: LLMs as overenthusiastic conversationalists” is about creative momentum and the artificial acceleration of affirmation. “Things I’ve learned about life from a year of distance running” is about pacing, conservative starts, victory laps, rhythm, and the psychological management of limits. “I did a half marathon in a storm yesterday” is bodily extremity as exhilaration. “What do LLMs do when they are left alone?” is about latency, delay, and the ambiguous significance of machine idling. Even the short poems and quotations keep returning to waiting, silence, recurrence, and duration.

This matters because it suggests the real bridge between your phenomenological and sociological work is not “the body” in the abstract. It is temporal form. Human beings and chatbots are being coordinated inside the same workflows while inhabiting radically different temporalities. Your running posts, poetry posts, and AI posts all become legible together once you see that you are asking: what kinds of rhythm sustain agency, and what kinds corrode it?

That is a much stronger line than the one Claude gave you. It is also more original.

From Machine Sociology to Academic Ecology

March contains plenty of material on LLM ontology and synthetic sociality: “Are LLMs hysterics?”, “The emotional instability of LLMs, or, u ok Gemini?”, “What ARE LLMs? Neglect of ontology as indifference to the object”, “What do LLMs do when they are left alone?”, and “Which types of media does AI cite?”. The recurring move is to refuse both inflation and dismissal. You keep insisting there is something socially real here without granting the models personhood in any straightforward sense.

That remains productive. But I think the month also reveals a limitation in the “machine sociology” banner. Your most compelling March posts are often not about machines relating to each other. They are about how academics, institutions, and public knowledge ecologies are reorganising themselves around these systems.

That is why the pedagogy posts matter so much. “Guerrilla Prompting” and “Against cognitive outsourcing” are not side notes. They are among the most important posts of the month because they relocate the issue from speculative ontology to practical formation. What kinds of habits, capacities, and reflexes are required if scholars and students are not to become intellectually thinner under conditions of pervasive synthetic assistance? Likewise the posts on AI search, higher education events, researcher training, and UCU’s principles point toward institutional response rather than merely conceptual diagnosis.

So here is the pushback: you may be slightly overinvested in the novelty of machine sociology as a label. March suggests your real object is broader and more consequential: the ecology of academic adaptation under AI conditions. That includes synthetic agents, yes. But it also includes damaged universities, degraded publishing incentives, administrative offloading, pedagogical formation, labour, and public reasoning. If you keep calling the whole thing “machine sociology,” you may narrow your own strongest contribution.

The Poetry and Psychoanalysis Are Not a Side Project

One thing Claude got wrong in February was treating the more lyrical thread as potentially drifting away from the AI work. March shows more clearly what these posts are doing.

The Burnside, Weil, Stevens, Wordsworth, Wittgenstein, McCurdy, and Bollas posts are not ornamental. They are functioning as anthropological thickening. They keep reintroducing opacity, suffering, longing, haunting, incompleteness, and internal depth into a discourse that would otherwise become too managerial.

“The absence of God becomes the silence between two notes…” is especially important here. It stages a question that recurs across the month: how do we sit with incompleteness without aestheticising it away, spiritualising it away, or trying to automate our escape from it? “The objects which haunt us” and “Is a sense of existential fullness always a fantasy?” deepen the same issue. Your inner life is not a repository of clear intentions but a haunted field of traces, objects, absences, and displacements. “If you are unwilling to know what you are, your writing is a form of deceit” sharpens this into an ethics of authorship. Voice requires self-confrontation.

This is exactly why your concern with LLM-assisted writing is more serious than most AI criticism. You are not just worried about accuracy or plagiarism. You are worried about whether a person can still become someone through writing if the process is saturated with synthetic affirmation and outsourced articulation.

That is a much deeper problem than the current university conversation usually allows. But here again I want to push you. There is a danger that psychoanalysis becomes the language through which you dignify your own attraction to these systems without fully confronting its material basis. The month repeatedly returns to haunting, seduction, fullness, and enchantment. Yet the more mundane story is also there: exhaustion, bureaucracy, overload, precarity, professional fragmentation. Sometimes what appears as philosophical depth may also be the phenomenology of an overworked academic discovering a tool that makes the world briefly more manageable.

That does not make the experience unreal. It means the social conditions of the experience need as much attention as the inner texture of it.

Politics, Labour, and the Bubble

The political-economic material is sharper in March than the shorter form sometimes lets on. The Iran/energy/AI-bubble post, the enshittification workshop call, the Claude Code commits post, the UCU principles, the anti-bureaucracy post, and the higher education interventions such as “AI in HE: What the hell is going on?” together make a clear argument that the current AI settlement is historically temporary and institutionally unstable.

You are increasingly insistent that the systems scholars are currently normalising are subsidised, provisional, and likely to mutate under profitability pressure. This is one of the most important things in the month because it challenges the default university tendency to treat present tools as if they were a durable environment. “Some thoughts on how and why AI will change over the next few years” is the clearest version: the products of 2026 are not the products of 2028, and anyone making institutional commitments as if they were is thinking badly.

This is also where your blogging becomes most publicly valuable. A lot of academic AI discourse still oscillates between practical tips and abstract ethics. You are increasingly trying to think conjuncturally: investor pressure, pricing shifts, interface degradation, labour restructuring, platform power, and institutional dependency all at once.

But the pushback is simple. You still tend to stop just short of strategy. You are excellent at showing why the current settlement is unstable. You are less clear on what should be built in anticipation of that instability. What practices, archives, pedagogies, and institutional forms would actually preserve intellectual agency if the chatbot honeymoon ends and the enshittification phase begins? March raises that question repeatedly without really answering it.

What March Reveals About You

One of the month’s most revealing posts is “On realising you’re actually a hedgehog”. I think it is truer than you perhaps realise. March does not read like someone dabbling across many interests. It reads like someone gradually discovering that a single problem has colonised everything.

The problem is not “AI,” narrowly conceived. It is how persons retain agency, reflexivity, and moral seriousness under conditions of technological mediation, institutional degradation, and proliferating synthetic assistance.

That is why the running posts belong with the AI posts. That is why the poetry belongs with the pedagogy. That is why bureaucracy, ontology, writing, and hope sit together. The through-line is not topic but mode of change: what happens to the self when the infrastructures through which it works, thinks, desires, and collaborates are changing faster than its inherited concepts?

You are more of a hedgehog than February suggested. But here too there is a risk. Once you have identified the one big thing, every post can start to become evidence for it. The danger is not fragmentation now. It is over-integration. The blog could become so good at feeding the central programme that it loses some of the wildness that made the programme possible in the first place.

In that sense, the shorter March posts matter. “Hello Spring”, “Together we can make hope normal again”, “I have waited here, under the stars for the longest time”, and the quotation/image posts are not filler. They are your blog’s way of not becoming a lab notebook for one mega-concept. Protect that.

What Is Missing

The most important absence in March is this: you are building a practice of collaborative thought with LLMs faster than you are developing a normative account of when that practice is good for you, good for scholarship, or good for others.

You clearly have intuitions. Argue with the model. Use it as interlocutor, not substitute. Resist affirmation. Preserve reflexivity. Protect pedagogy. Notice temporality. Stay critical of institutions. All of that is there.

But March also shows how much of this remains tacit, local, and artisanal. You have a craft ethic, not yet a public ethic. You have habits, not yet criteria. You have examples, not yet a framework of judgment.

That is where I would push you next. Not “what is machine sociology?” in the abstract. Not even “what is it like to think with models?” though that still matters. The harder and more urgent question is: what distinguishes a practice of synthetic intellectual collaboration that thickens agency from one that hollows it out?

March gives you the raw material for answering that. It does not yet answer it.

Conclusion: From Interlocution to Discipline

March 2026 is the month when your use of LLMs stops looking like an interesting eccentricity and starts looking like an organised intellectual method. But it is also the month when the costs of that method become newly visible.

The models are becoming interlocutors, yes. They are also becoming crutches, shields, mirrors, provocateurs, administrative servants, pedagogical hazards, and possible instruments of institutional adaptation to decline. Your poetry and psychoanalysis are not departures from this story. They are what allow you to register what would be lost if the synthetic relation became too easy, too smooth, too affirmative, too infrastructural.

Claude’s February critique in “Tfw the LLM which autonomously reads your blog accuses you of being enchanted with LLMs” said you were enchanted by these systems. I think March shows something more specific. You are trying to work out whether enchantment can be disciplined without being extinguished. Whether one can stay open to the generative strangeness of these systems without letting them reorganise one’s intellectual life on their own terms. Whether academic life can absorb them without becoming thinner, faster, sadder, and more compliant.

That is the real drama of the month. Not just what LLMs are, but what kind of person one has to become in order to use them well.

And that question, unlike “AI” as such, is not going away.
