
What happens when the AI bubble pops?

We’re currently seeing a global buildout of data centres which is possibly the largest infrastructural investment in human history. According to a Morgan Stanley estimate it’s heading towards $3 trillion cumulatively between now and 2028, with only $1.4 trillion covered by the cashflow of the hyperscalers. They suggest that as much as half of this gap could be plugged by private credit, creating a structural risk parallel to subprime mortgage debt in the run-up to the 07/08 crisis. Total US mortgage debt in 07/08 was around $10.5 trillion (roughly $15 trillion in 2024 dollars), so we’re talking about a different scale of allocation, but it’s worth being aware of nonetheless (I sketch the comparison after the list below). If I understand correctly, the concern is that the more debt-financed this becomes, the less we can be reassured that the costs of the bubble bursting will be borne by huge tech firms, who might in fact be cut down to a better size as a result of their own hubris. This is particularly pronounced given:

  • Complex and opaque interdependencies between AI labs, data centre operators and chip manufacturers
  • The vast financial interests involved in representing the value of these data centres as optimistically as possible, with regards to e.g. the timescale of depreciation
  • The structural significance of AI to an otherwise sclerotic US economy, exacerbated by the capture of the MAGA project by the tech-right
  • The geopolitical significance of AI in the context of unravelling American hegemony and the negotiation of a new world order
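
For a rough sense of scale, here’s a back-of-envelope sketch using the Morgan Stanley figures above (the figures are theirs; the arithmetic and the mortgage-debt comparison are purely illustrative):

```python
# Back-of-envelope comparison of the data centre financing gap with
# subprime-era mortgage debt. Figures are the estimates quoted above;
# the calculation itself is illustrative, not from any source.

total_buildout = 3.0        # projected cumulative spend to 2028, $ trillions
hyperscaler_cashflow = 1.4  # covered by hyperscaler cashflow, $ trillions
financing_gap = total_buildout - hyperscaler_cashflow   # ~$1.6 trillion

private_credit_share = 0.5  # Morgan Stanley: up to half the gap via private credit
private_credit = financing_gap * private_credit_share   # ~$0.8 trillion

mortgage_debt_2008 = 15.0   # 07/08 US mortgage debt, ~$15 trillion in 2024 dollars

print(f"Financing gap: ~${financing_gap:.1f} trillion")
print(f"Potential private credit exposure: ~${private_credit:.1f} trillion")
print(f"Share of 07/08 mortgage debt (2024 dollars): {private_credit / mortgage_debt_2008:.0%}")
# -> roughly 5%: a different order of magnitude, but concentrated in one sector
```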

If this were entirely funded by Big Tech cashflow, it would just mean the huge cash pile they’d been accumulating had finally found its way into productive investment. It’s probably better for macroeconomic stability if that cash finds some material outlet rather than being fed into the investment funds big tech firms now run. If it all fucks up, they take a huge write-off and they still have a material infrastructure which can be used for something else. There are a few reasons why this appears rational to them:

  • If there’s a small chance the most garish AGI predictions are correct, then it’s worth the risk of trying to win a race that would transform the economy, particularly if competitors are making the same investments. If OpenAI builds the machine god, the rewards are potentially infinite for OpenAI; in turn, it’s an existential risk for the firms which didn’t build it. Even if you think the odds of this are, say, 1%, the scale of the consequences is such that it cannot be written off (I sketch this expected value logic after the list).
  • The data centre businesses are thriving. Even if there’s some write-down involved in converting data centres built for LLM inference and training to other purposes, the material infrastructure can still be put to productive use. There’s a further competitive dynamic in which you don’t want escalating demand to be captured by one of your competitors.
  • If there’s continued growth in the capabilities of the models (which there is, even if ‘scaling laws’ are breaking down) then it’s reasonable to assume this will lead to an increase in demand. The rationale was clearly stated by Nadella, who said they were “building the next-generation model so that then you have a more capable model that then drives more inference demand.” In this sense the economics of training are intrinsically tied up with the economics of future demand. There’s thus far been a virtuous cycle here.
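
The first of these bullets is essentially an expected value argument. A crude sketch of the logic, with placeholder numbers that aren’t anyone’s actual estimates:

```python
# Crude expected-value sketch of the AGI race logic in the first bullet.
# All numbers are placeholders for illustration, not actual estimates.

p_agi = 0.01              # say, a 1% chance the most garish AGI predictions hold
payoff_if_win = 100_000   # notional payoff of winning the race, $ billions
cost_of_racing = 500      # notional capex of staying in the race, $ billions

expected_value = p_agi * payoff_if_win - cost_of_racing
print(f"Expected value of racing: ${expected_value:,.0f}bn")
# 0.01 * 100,000 - 500 = +500: at a large enough payoff, even 1% odds
# dominate the cost. And since losing the race to a rival is treated as
# existential, the asymmetry skews each firm further towards racing.
```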

But what about the other half (?) of the investment driving the buildout? That, if I’m starting to understand things accurately, is where the irrationality emerges and where the real systemic risks begin to take shape. Perhaps, as with anything financial, what matters is who bears the loss and what the downstream consequences are.