

👀 Waiting for the Crash – Sketching the Enshittified Future of Large Language Models

Mark Carrigan and João C. Magalhães 
June 7th, University of Manchester, UK 

In the little over three years since OpenAI launched ChatGPT, large language models (LLMs) have defined the technological imaginary of post-pandemic capitalism. The first hype wave was driven by claims that chatbots would lead to significant changes in personal and working life. While diffusion has been widespread, with hundreds of millions of people using these systems weekly across the globe, the evidence of their lasting impact remains mixed at best. A second hype wave is currently coalescing around the claimed affordances of automated coding systems, such as Anthropic’s Claude Code and OpenAI’s Codex, to work autonomously in ways that could displace a significant portion of current occupational roles. It is difficult to strike an analytical balance between recognising the hype and taking seriously the real material impacts and technological possibilities underpinning it. We find ourselves in a febrile environment where real but uneven innovation coexists with occasionally overwhelming torrents of bullshit, in a manner which strains our existing repertoires of analysis and critique.

What seems clear, however, is that disruption is on the horizon – even if GenAI never fulfils the expectations of technologically-driven revolution its emergence has elicited. Either the major AI labs will reach the IPOs they began preparing for in 2025, or the investment bubble around LLMs will burst. Both paths lead to the same destination: the end of the current era of freely accessible, fairly capable chatbots offered at a loss in order to normalise the technology and build a user base. A market correction could produce either a concentration of AI labs, increasing corporate power over the surviving infrastructure, or a systemic economic crisis driven by the byzantine interdependencies linking chip manufacturers, cloud computing providers and AI labs into opaque networks of commercial co-dependence. Even if the labs reach IPO successfully, increased investor scrutiny will intensify pressure towards profitability in ways likely to transform the user experience, with uncertain consequences for these systems’ take-up and firms’ business plans. Cory Doctorow’s analysis of enshittification offers a powerful framework for anticipating these changes: the deterioration of once-valued services as platforms shift from cultivating users to exploiting them.

In this one day workshop we invite speculative case studies which characterise a plausible (if not necessarily likely) outcome for LLMs along one or more of the following dimensions: 

  • Changes to the operation and governance of language models intended to extract greater value from businesses and/or consumers in order to move towards profitability. 
  • Implications of such changes for the user experience, including how they might amplify existing social trends (such as ‘AI psychosis’) or generate novel ones. 
  • Consequences for non-AI organisations which have rebuilt core functions around the affordances of LLMs and/or pricing models which might undergo substantial transformation. 
  • Developments which have enabled AI labs to resist or mitigate enshittification dynamics, and what these mean for the models and their users. 
  • Differing responses from policymakers and political elites, who may have to contend either with a historic economic crisis or with newly-empowered AI labs and firms at a moment of domestic and geopolitical instability.
  • How organized publics (civil society, social movements etc.) and non-organized publics perceive LLMs and their controllers.

We ask contributors to sketch an empirically plausible and conceptually nuanced account of a potential enshittification mechanism. There’s growing agreement that LLMs will degrade, but much less clarity about what this will look like in practice. Through this approach we hope to address a range of methodological and conceptual questions which are otherwise underdeveloped in the current debate:

  • In which ways do social expectations around LLM development create concrete consequences for these systems, their controllers, and the individuals and organizations who have come to rely on them?
  • Can we envision and prepare for the potentially drastic outcomes associated with technological change when this change is itself deeply uncertain? What does it mean to assume or conclude that we cannot? 
  • How will the transformation of LLMs shape (and be shaped by) the political power and nature of their controllers? 

Case studies should be submitted in written form so they can be circulated in advance of the workshop and published on a public-facing website for wider deliberation. Please submit a 500-word abstract to mark.carrigan AT manchester.ac.uk and joao.magalhaes AT manchester.ac.uk by April 30th. Travel and accommodation funding will be available to participants without other sources of support.   

Accepted contributors will be asked for a 1,500-word case study by June 1st. These will be distributed to participants in advance of the event in order to enrich discussion. The workshop itself will be structured around 20-minute presentations of each case study followed by group discussion. We expect the workshop will lead to at least one set of publications, with plans to be discussed at the event itself.