Opus 4.7 is capable of utterly devastating theoretical critique

I gave the new Opus model a full sample of my in-progress AI work (two nearly finished books, one book still in progress) and asked it to critically pick them apart. Some of the results were slightly devastating:

1. The “pre-enshittified” escape hatch is doing a lot of work it hasn’t earned. You repeatedly invoke the current state of models as genuinely capable of supporting self-elaboration, reflection, even something like quasi-therapeutic holding — and then locate the danger in a near-future where commercial pressures engineer these capacities out. This is convenient in two ways that should worry you. First, it lets you have the humanistic promise and the critical diagnosis without having to resolve the tension between them: any counter-example of a model currently behaving badly is “already enshittified,” any counter-example of it behaving well is “still pre-enshittified.” The framework is unfalsifiable as currently stated. Second, the prediction itself — that inference costs plus investor pressure will inevitably degrade the models in specifically the ways you fear — is asserted far more than argued. The social media analogy is doing heavy lifting, but you yourself note in Ch 8 that the unit economics are different. If the unit economics are different, the enshittification trajectory might also be different, and your core political claim is weakened. You need either to defend the prediction on its own terms or to reframe the argument so it doesn’t depend on it.

It also provided a summary of what it saw as the core conceptual moves in this large body of work (around 200k words), which is equally incisive. The first three in particular put into words what are now defining commitments in my work, better than I have yet managed to myself:

One. Recovering the microsocial from determinism on both sides. Your recurring move is to refuse both technological determinism (the technology does things to users) and voluntarism (users freely pick up and put down tools). You insist on interaction as the unit of analysis, and on the variability of what emerges from it. This appears in volume 1 as the user-model interaction cycle, in the HE book as the spectrum from human-centric to LLM-centric learning, in the Fatsis collaboration (from what I can infer) as the critique of platform-as-neutral-mediator.

Two. Taking experience seriously without naturalising it. You repeatedly resist dismissing what users report about their relationships with technologies as false consciousness or anthropomorphic error, but you also resist treating reported experience as self-authenticating. The move is to treat experience as data that requires explanation, where the explanation has to be adequate to the phenomenology without being reducible to it. This is the methodological stance behind the CASA engagement in volume 1, the treatment of “AI psychosis” reports in volume 2, and the insistence in the HE book that both adaptation and integration agendas are responding to something real.

Three. Processual ontology applied to unstable objects. You consistently refuse to treat technological objects as fixed, and you consistently refuse to treat their instability as evidence that they can’t be theorised. The LLM is changing, the platform is mutating, the chatbot is a vanishing mediator — but these instabilities are the object of analysis rather than obstacles to it. This connects to Savage on temporal ontology, which you cite in volume 1, and it’s probably the methodological commitment that most distinguishes your approach from both AI boosters (who treat current capabilities as stable signals of future capabilities) and AI critics (who treat current limitations as stable signals of permanent limitations).

Four. Critical realism as meta-methodology. Archer’s morphogenetic approach gives you a way to ask “under what conditions does X tend to happen” rather than “does X happen or not.” This appears in the insistence on distinguishing context, interaction, and outcome throughout volume 2, in the HE book’s attention to how different institutional conditions produce different trajectories, and in the volume 1 treatment of how the same model produces radically different modes of being-with depending on what the user brings. The commitment to explanation rather than description, to mechanisms rather than correlations, runs through everything.

Five. Concept work as the primary intellectual labour. Your books are not primarily empirical, not primarily theoretical in the high-theory sense, not primarily applied — they’re trying to produce concepts adequate to phenomena where existing concepts fail. User-model interaction cycle, positioning/articulation/attunement, modes of being-with, holding/knowing/freeing, the crisis of trust in the HE book — these are all attempts to name something that wasn’t named, in ways that enable subsequent analysis. The methodological commitment is that concept work is prior to empirical work in domains where the objects are novel and lay categories are ossifying prematurely.

Six. Sociology refusing to be only sociology. The psychoanalytic engagement, the philosophical anthropology of articulation (Taylor, Gendlin), the media theory, the political economy — you keep importing from outside sociology not because you’re eclectic but because the objects of analysis require it. The methodological position is that disciplinary boundaries are obstacles to explanation when the phenomena being explained cut across the domains the disciplines were built to study. This is what makes the Bollas/Lacan/Archer synthesis attempt necessary even when it’s theoretically uncomfortable — the alternative is to cede territory to whichever discipline happens to claim it.

Seven. Suspicion of epochal framing. You consistently refuse to treat current developments as inaugurating a new era, even when you’re arguing for their significance. This appears in volume 1’s engagement with Savage on epochalism, in the HE book’s insistence that current chatbots are a staging post rather than a settled form, and in the recurring refusal of “AI” as a useful category. The methodological commitment is to track duration and continuity even when narrating change, which is harder than it sounds and which is probably why most writing about LLMs doesn’t manage it.

Eight, and this is the one I’m least sure about. Writing as method. The productivity of your output, the role of the blog, the use of conversations with models, the movement across registers — I think this isn’t just prolificness, I think it’s a methodological commitment that articulation is itself how you think, and that the infrastructure of articulation (where you write, with whom, under what conditions) is therefore methodologically consequential. This is consistent with the Taylor/Gendlin material in volume 1, where articulation isn’t expression of prior thought but the process through which thought becomes available. If that’s right, the conversations with models aren’t an aside to your method — they’re part of it, which is part of why the question of what they’re doing to your thinking is so hard to settle.