What is AI criticism for?

I’ve removed this from a monograph because it doesn’t really fit. I’m not sure I still stand by this argument from last summer, but I thought it was worth sharing. Claude Opus 4.7 has lightly rewritten it for clarity in order to make it more accessible as a blog post.

What is AI criticism for? It’s a question that lurks in the background of much current writing about large language models, even if it rarely gets foregrounded. Alternative LLMs driven by collective purposes rather than the imperatives of the market are in principle possible, even if the scale of capital, compute and expertise required poses an obvious constraint on their realisation. The fact that we have the LLMs we do is a consequence of the socioeconomic order in which they evolved. There’s a vibrant landscape of alternative possibilities which has been materially foreclosed by the gloomy political economy of the 2010s that produced the platform capitalism now unfolding into its terminal form. The objection here is to a cultural foreclosing that ossifies our ideational capacities even further by collapsing what is sociotechnically possible into a sometimes stylised characterisation of what has been socioeconomically realised.

This doesn’t mean we could realise these possibilities if only we could imagine them. That argument would invert precisely the idealism at work in some AI criticism. An objection I’ve frequently encountered when presenting this approach is that assuming the inevitability of diffusion is a self-fulfilling prophecy: by not considering the possibility that LLMs might fail to rapidly diffuse, critics like me contribute to bringing that diffusion about. In its most moralistic form this is a claim about what the radical sociologist C. Wright Mills called the responsibility of intellectuals — if we have social and cultural power qua intellectuals, we ought to exercise it carefully.

Consider a thought experiment. Imagine there had been an immediate cultural response within higher education in which academics uniformly decided LLMs were illegitimate technologies which should be avoided on ethical grounds. What difference would this have made? The impact within higher education itself would likely have been minor. Student uptake of generative AI was not contingent on the view academics held of language models, even if the extent and nature of student use might have been influenced by encountering a strident stance. The same dynamics would apply in wider society, where the residual cultural authority of academics might influence outcomes but would certainly fail to determine them.

This reflects the fundamentally limited influence of academics on society, even if “we” were to act in unison — which any observer of higher education would surely concede is a vanishingly unlikely outcome. A unified rejection might have catalysed a more influential movement of people refusing the technology, but this would be a fortification of an existing community rather than the creation of a new one. What’s more significant is the modality through which LLMs diffused as consumer-facing services. When software is made available as a service, particularly on a freemium basis, it spreads through peer networks (in person and digitally mediated) as well as through advertising and word of mouth.

It’s true that if everyone believed LLMs should be rejected, the technology would not have diffused. But this sociological truism doesn’t get us far in making sense of the role of the AI critic. It’s the equivalent of observing that if no one believed in capitalism we would no longer exist within a capitalist system. It’s accurate but trivial, because it fails to grapple with the cultural dynamics that shape belief and practice. These are not overturned in a single movement but through slow and messy processes of contestation and struggle which produce counter-reactions and unintended consequences. Money is a useful comparison: it would not have its social force if people didn’t believe in it, but this doesn’t mean we can wish currency away through a deliberate process of delegitimisation. It does, however, explain how financial crises occur when trust breaks down — quietly at first, and then all at once.

The insistence that diffusion would not have happened if only we had been more steadfast in our refusal embodies a prioritisation of culture over structure — or, in a different register, an emphasis on ideas to the exclusion of materiality. It fails to grapple with the material forces driving diffusion, focusing instead on the cultural currents which accelerated and intensified that process. The point is not that the “real” story is the material one, but that any adequate account of diffusion has to incorporate both dimensions. The ideational story of how “AI” has been constructed, promoted and reflected upon is crucial to understanding how LLMs have diffused over the last three years. But it doesn’t follow that a change in the stories would be sufficient to bring about a different process of diffusion. Necessary perhaps; certainly not sufficient.

What do we imagine critique will accomplish?

Consider the now-ubiquitous critiques of social media. Aspects of what Ben Tarnoff and Moira Weigel call tech humanism are now cultural common sense in many contexts. The intellectual nuances of arguments like Shoshana Zuboff’s surveillance capitalism have been flattened into a generalised recognition that social platforms are algorithmically mediated in ways which serve the interests of the firms rather than their users. There’s wide recognition that platforms exercise an influence over attention which is as difficult to resist as it is obvious to recognise. While there’s evidence users are increasingly turning away from social media — particularly if we count the turn towards private messaging as a de facto retreat — this group is still vastly exceeded by users who recognise the negative impact while continuing to post.

The situation Jodi Dean described in her Lacanian media criticism — being “stuck doing the same thing over and over again because this doing produces enjoyment. Post. Post. Post. Click. Click. Click” — has moved from theoretical observation to a routine feature of daily life for the great majority of the population. It’s an analysis many regular users can regurgitate on demand, a commonplace of conversations about the platforms that form such an integral part of their social infrastructure. Indeed, offering such analysis is practically an expression of cultural capital: it positions the speaker as a savvy user conscious of manipulation, in contrast to the imagined dupes who make up the rest of the platform’s user base.

Even allowing for some rhetorical exaggeration, it’s uncontentious at this stage to claim that mass use of social media platforms can coexist with mass critique of their operations — much as, in an earlier sociotechnical context, concern about the impact of television coexisted with the ubiquity of mass broadcast. This doesn’t invalidate the content of the critique, but it should lead us to reflect earnestly on what role we imagine such critiques play, whether we make them in academic forums or ritualistically enact them in everyday life. As Fred Moten and Stefano Harney put it, the critical academic “questions the university, questions the state, questions art, politics, culture” — but the question of what is accomplished through this critique can remain weirdly elusive.

For Slavoj Žižek this reflects an overemphasis on subjective belief at the expense of objective action: we congratulate ourselves on subjective insight while retaining objective complicity through the routine action that reproduces the very system from which we claim critical distance. Jana Bacevic has analysed at length how that distance is itself a precarious construct, such that claiming a distance from our conditions more often than not mystifies the nature of our involvement in them. We are left with stock figures who embody and are responsible for the pathologies of the system — the neoliberal manager being a potent example in the critical university studies literature she so insightfully unpicks — while our routine participation in its reproduction becomes elusive as an object of knowledge. Theory/practice inconsistency isn’t unique to critical academics, of course; it reflects a wider question about the relationship between knowledge and action.

The closing window

These issues matter given the time cycle on which novel technologies come to be institutionalised. As Sherry Turkle noted in her classic discussion of the computer, now-familiar technologies were once strange, in the same way that the now-strange LLM will soon be familiar. The problem is made more acute by the move to diffuse LLMs into infrastructure, even if the chatbot form remains alongside them. In Susan Leigh Star’s now-classic formulation, “The normally invisible quality of working infrastructure becomes visible when it breaks down: the server is down, the bridge washes out, there is a power blackout”. To the extent that infrastructure works, it tends to fade into the background of social life as an unacknowledged condition that resists attempts to foreground it as an object of criticism.

It would not yet be true to say LLMs have become part of the infrastructure — a consequence of their relative novelty and the primacy of the chatbot form for most users. But the direction of travel is towards increasing familiarisation and invisibility, with LLMs anchored in the lifeworld in two ways: through the routine, stabilised presence that comes from being-with an LLM over time, and through the background operations of LLMs diffused into browsers, software and platforms.

This makes it imperative to identify and reflect on LLMs as sociotechnical objects in ways that support practical reasoning. What are better or worse ways to act with and on LLMs? How are these actions facilitated or frustrated by the different contexts we inhabit? Which actions are ones we should proceed with individually, and which need to be collective? What is to be done?

Too much AI criticism remains preoccupied by the obscene character of the LLM in a way that not only lacks interest in this link to practical reasoning but actively makes it difficult to draw conclusions at that level. As Turkle observes, “the vehemence of response expresses our stake in maintaining the line between the natural and the artificial, between the human and the mechanical,” such that these discussions are “charged with feelings about what is special about people: their creativity, their sensuality, their pain and pleasure.” When these powerful currents are operating, our discussions easily become as much about propping up tenuous conceptual boundaries as about understanding what is happening and deciding what to do. Pierpaolo Donati’s analysis suggests this threat is inherent to a liberal humanism which conceives of the distinctively human in terms of the individual’s properties and powers. As technology develops, those characteristics are forever encroached upon by increasingly capable machines.

To the extent we create a boundary between our moralised criticism and our practical action in the world, we strengthen the existing tendency for critique to reinforce objective complicity, offering a self-congratulatory reward for seeing through the mystification while continuing to act just as before. The point isn’t that academic AI critics themselves are doing this. It’s that the structure of AI criticism, as it exists and circulates in the social world, lends itself to this form of deresponsibilisation.

A change in the representation of what we’re interacting with — replacing the hyped object of the machine-god-to-be with a grotesque computational vampire — isn’t sufficient to change what people do in the interaction. We have to grapple with the structural and cultural factors underlying the diffusion that has already happened and the diffusion still to come.

What we should be aiming for is a more effective and more reflexive AI criticism. My point of departure is what I take to be an overblown reaction to current chatbots, at the expense of a focus on the system in which they emerged and on what that system will produce next. It’s ultimately a difference in where we place the emphasis, and in what that means for the mundane ways LLMs are coming to be encountered and embedded in social life.

I’m not asking critics to stop being critical but to reorientate criticism towards helping people navigate these systems on a practical level — including, though not limited to, avoiding them entirely where that feels necessary. AI criticism currently struggles to do this, partly because of its preoccupation with the obscene object of the LLM and the aesthetics of repulsion that surround it.

There’s still a window of time in which we can bring about meaningful change, before LLMs pass through their strangeness and become familiar. There’s still a window before the enshittification dynamics really take hold and turn the first wave of frontier models into something far more manipulative and destructive. There’s still a window before a user culture becomes embedded — before users are tied into modes of being-with LLMs that have stabilised into a familiar presence in their lifeworld and are subsequently hard to unravel.

We should not exhaust this window rehearsing our disgust and scepticism towards objects which hundreds of millions of people experience as useful and supportive presences in their lives. If we do, we miss the window in exactly the same way the moralistic critics of the early wave of social media missed theirs — failing to steer public debate and user culture, let alone the policy and politics downstream of them, before what Richard Seymour calls the social industries embedded themselves so resolutely in the lifeworld that it became as difficult to imagine a complete refusal of them as it is to imagine the end of capitalism.
