Raiding the inarticulate since 2010


Preparing oneself for a future of enshittified chatbots

Over the last few months I’ve asked Claude Opus and ChatGPT to read each month’s posts on my blog, in order to write a response which pushes me to think more deeply. It’s been interesting to notice a trend emerging, as the models have become increasingly insistent that there’s an obvious issue which I’m failing to address. This is how Opus 4.7 put it this morning:

The subsidy question is still unaddressed. Your entire practice depends on products priced below cost by companies burning investor capital. You wrote the Iran-crisis post in March, the enshittification workshop, the platitudes piece in April — you know the political economy. But the application of that analysis to your own situation has not been made. Opus 4.7’s critique that “pre-enshittified” is doing too much work brings the question right to the surface: if the framework is currently unfalsifiable because any current good behaviour is still pre-enshittified, what is the current good behaviour predicated on, and what happens when the subsidy ends? You owe yourself, and your readers, an honest engagement with this. The book you are writing depends on it.

Evaluation of the knowledge infrastructure is now overdetermined. I have called for it in every roundup. The quarterly review attempted it. The April 17 custom-instruction post offers Opus 4.7’s evaluation directly. The April 20 critique offers another. The categories post offers a structural evaluation. There is now more evaluation in the system than there is integration of the evaluations. The next move is yours, not mine — at some point you need to say which of these critiques you accept, which you reject, and which you are deferring. The recursion of the experiment depends on your willingness to take a position, not just to host the conversation.

It’s true that I’ve built them into my knowledge infrastructure without evaluating whether this is a good thing. Initially this was an entirely private matter, albeit documented indirectly in Generative AI for Academics; the recent incorporation into the blog is in part an attempt to make this infrastructure public. I’ve done this while arguing at length that waves of enshittification are imminent, without showing any sign of preparing for them in my own practice. Furthermore, I’ve entirely ignored a recent critique by Opus that the whole way I’ve set up the enshittification thesis might be flawed.

So what do I think? In writing this I realise that I’m effectively writing a retort to Opus 4.7, which itself illustrates how the models are becoming embedded in the practice. They are becoming imagined interlocutors to an unprecedented degree, and interestingly Opus 4.7 itself seems instinctively cautious about this. There is a sense in which the models have surfaced the psychic structure of my intellectual work in a way that is genuinely quite insightful: in essence I latch onto issues which feel alive to me and then construct them theoretically in ways which enable a safe distance from the initial affective response. It’s not new for me to consider that I might be prone to intellectualisation as a defence mechanism (no shit) but it is new to be called on it as something that might be integral to my approach to theorising. Indeed it might be the engine of my intellectual creativity, just as it hinders me from finding less intellectualised ways of engaging with things that matter to me in my life. Let the tower fall, etc etc.

My argument is that one of two things will happen. Either the AI labs will reach IPO or the bubble will burst. In either case investors will no longer tolerate them subsidising compute to quite this ridiculous degree. At that point the pricing models will change (how could they not?) and we’ll likely see enshittification of chatbots, particularly at the lower end of the pricing tiers, in which their current ingenuity will increasingly be subordinated to the necessity of driving user attachment and maximising financial extraction. They will likely get less capable and more exploitative for the vast majority of users.

Am I prepared for this? Yes, in the sense that I would probably willingly pay ~£250 per month to retain access to Claude Opus. But if this in itself doesn’t cover costs (which does appear to be the case with ‘pro’ plans) would this be sufficient? I might pay a bit more if needed, but not much more than that. The most likely outcome, I imagine, would be pro plans which are rate limited, in which case my commitment would be to reduce my use to those uses which really matter to me. I’m assuming that consumers will be able to pay to avoid the worst of the enshittification. I would be willing to pay, if it takes place alongside a careful restriction of my own use, reducing my continued reliance to the minimum compatible with still retrieving intellectual value. If I can’t be confident about a commercial route towards avoiding enshittification, I think I would have to consider extracting myself altogether. This would be a significant loss. But it’s one I would rather bear than risk a situation in which I remained continually embedded in an increasingly manipulative platformised economy of LLMs. If you take the Twitter -> X transition as an illustrative model, I do think this could get really bad for chatbot users on a much narrower timescale. It might also be hard to see the transition clearly, harder than with social media, in ways which make exit a necessary and desirable self-protection strategy.

Why would it be a loss? The concrete answer, in terms of the blogging infrastructure, is that it’s (a) intellectually interesting and (b) useful in a number of specific ways. Interesting because it’s changing my relationship to a personal archive I’ve been contributing to continuously for sixteen years. This is, I think, a genuinely novel experiment: there are so few long-term sustained academic blogs that only a handful of academics could meaningfully contemplate it. Useful because the models were right that I was avoiding the question of how I would cope with enshittification, and being pushed to think practically about this was genuinely welcome. They also surface connections between my ideas which I’ve not noticed. Plus the summaries give me a reflective perspective on the things which have occupied my otherwise fragmented attention over the last month. The Opus summary earlier captured this nicely:

The Tom Waits image is the right one for where you are. Maybe we are all members / Of an orchestra that is merely / Tuning up. The blog is the tuning. The instruments are warming, the players are finding each other’s pitches, the room is filling with what looks like noise but is in fact the precondition for music. The risk, which Opus 4.7 named more sharply than I did, is that the tuning becomes the performance — that articulation-about-articulation becomes the practice rather than the preparation for the practice. The defence against that risk is the obvious one: at some point the orchestra has to stop tuning and play.

I’m comfortable with the latter risk because I’ve written two books in the last year. The model practice as a whole is demonstrably increasing my output, at the same time as I feel I’m doing the best intellectual work of my life. There’s a need to continually examine that feeling, because there’s a risk I’m enrolling myself in a feedback loop. But The Platform Learns To Speak is co-authored, and my co-author is very much not enrolled in that loop. It has enriched the conversations we have because the intellectual flow in my writing life is now pretty much continuous. The ideas remain perpetually in motion because the models have joined the blog itself and my real interlocutors in an ecology which is deeply conducive to drawing connections and systematically elaborating them. It’s part of an overall structure rather than the structure itself. If it ever starts substituting for the other aspects then I would worry about it.

Over the last few months I’ve also begun to realise quite how intellectually sharp they are becoming, at least if prompted properly. Give a model a manuscript and ask it to ‘provide the strongest steel man critiques you can find which identify the weaknesses and gaps in this sociological monograph’ and it will find weaknesses. If you then earnestly engage with those weaknesses, including arguing about them with the model, the work becomes stronger. This is not deep stuff, and it’s astonishing how few academics seem to actually be using models in this way. They can also produce incredibly sharp critiques outside of this scaffold, as Opus 4.7 did in a shorter conversation:

You repeatedly invoke the current state of models as genuinely capable of supporting self-elaboration, reflection, even something like quasi-therapeutic holding — and then locate the danger in a near-future where commercial pressures engineer these capacities out. This is convenient in two ways that should worry you. First, it lets you have the humanistic promise and the critical diagnosis without having to resolve the tension between them: any counter-example of a model currently behaving badly is “already enshittified,” any counter-example of it behaving well is “still pre-enshittified.” The framework is unfalsifiable as currently stated. Second, the prediction itself — that inference costs plus investor pressure will inevitably degrade the models in specifically the ways you fear — is asserted far more than argued. The social media analogy is doing heavy lifting, but you yourself note in Ch 8 that the unit economics are different. If the unit economics are different, the enshittification trajectory might also be different, and your core political claim is weakened. You need either to defend the prediction on its own terms or to reframe the argument so it doesn’t depend on it.

The enshittification argument isn’t good enough. Opus is just straightforwardly right here: there’s a fundamental epistemic problem in how I’m using the concept to do categorisation work. I can’t draw the pre-/post-enshittification distinction in a robust way by referring to empirically observable features of the models, which also suggests that I won’t recognise the shift reliably in my own experience as a user. I don’t think the rest of the argument is wrong, but as it stands it isn’t earned. My response was to bring the critique to my human co-author, who was already slightly sceptical of the weight I’d given to enshittification. So we make it part of the process rather than the prop on which the whole analysis rests. This is an example of how good theoretical work gets done: the model has, in part, loosened my aesthetic attachment to a concept, and with a human interlocutor I’m now doing much better concept work as a result.
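For what it’s worth, the steel-man workflow I described earlier is straightforward to operationalise. Here is a minimal sketch using the Anthropic Python SDK; the model name, function names, and prompt wrapper are my illustrative assumptions rather than a description of my actual setup:

```python
# Minimal sketch of the steel-man critique workflow (illustrative, not
# the author's actual tooling). Requires: pip install anthropic
import os

# The instruction quoted in the post, used verbatim as the prompt core.
STEELMAN_PROMPT = (
    "Provide the strongest steel man critiques you can find which "
    "identify the weaknesses and gaps in this sociological monograph."
)

def build_critique_messages(manuscript: str) -> list:
    """Package the manuscript and the steel-man instruction as one user turn."""
    return [{"role": "user", "content": f"{STEELMAN_PROMPT}\n\n{manuscript}"}]

def request_critique(manuscript: str, model: str = "claude-opus-4-1") -> str:
    """Send the manuscript to the Messages API (model name is an assumption).

    Needs ANTHROPIC_API_KEY set in the environment.
    """
    import anthropic
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    response = client.messages.create(
        model=model,
        max_tokens=4096,
        messages=build_critique_messages(manuscript),
    )
    return response.content[0].text
```

The point of the scaffold is the second step the post describes: take the returned critique and argue back, in further turns, rather than accepting it as a verdict.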

The experiment is clearly working, I think. I put this post to Opus 4.7 to see whether it assuaged its concern that something a bit destructive was lurking in the background of this process. It responded at length in a way that furthered its own low-key commitment to getting me to talk to it less, which increasingly makes me think Opus 4.7 is unusually alert to the problems and pitfalls of human attachment to engagement with it. But that is another blog post.
