

What kind of AI bubble are we in?

This piece by Dave Karpf is very helpful on the three prevailing narratives concerning the dot-com crash which are lurking in the background of current debates:

So those are our three potential narratives: (1) a startup bubble, (2) unrealistic capital expenditures, and (3) way-too-fancy financial chicanery. All three of these phenomena happened simultaneously, but the lessons we take from the dotcom crash vary depending on which story we emphasize.

As he points out, Sam Altman means (1), such that his company will be the equivalent of Amazon emerging from the crash. Whereas I think the most incisive critique of the current bubble concerns the parallels between the financial infrastructure of AI (e.g. circular financing, a push towards ‘financial innovation’, deliberately opaque treatment of business fundamentals) and financialisation from Enron through to the great recession. I’m not as convinced that the infrastructural build-out is as self-evidently irrational as Karpf suggests here:

For the past year, the AI data center construction boom has given off strong Global Crossing vibes. Microsoft, Meta, Google, OpenAI, and X.ai are all spending billions to build massive data centers. I’m just a simple political scientist who reads old tech magazines, but I cannot fathom how the costs of data center construction are ever supposed to be recouped from a mass user base that pays between $0 and $20/month for the products. (Read Ed Zitron for much more on how little sense these numbers make).

Effectively they’re deploying their savings glut on a bet about who controls the future of planetary-scale computation. The calculation is that this capacity will be used for something: it’s a long-term capital investment as much as a short-term bet on generative AI. I agree with him that it resembles the telecoms story, with the exception that I don’t think it will bankrupt any of the major cloud computing players. OpenAI and Anthropic, on the other hand…? This, however, seems like a deeply precarious basis for sustained growth:

But with the latest wave of multibillion- and trillion-dollar dealmaking among the largest AI players, the vibes are turning decidedly Enron-like. Nvidia announced it is investing $100 billion in OpenAI, which OpenAI will then use to purchase Nvidia products. OpenAI announces a deal to buy $78 billion in chips from AMD, and is awarded 10% of the company in the deal, effectively offsetting the purchase.

As he puts it in the closing paragraph: “But I’ll say this: the AI bubble isn’t predominantly giving off Pets.com or Global Crossing vibes anymore. It’s giving Enron vibes” 👌
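To see why this kind of dealmaking gives off Enron vibes, it helps to run the round trip as a toy calculation. This is not from Karpf’s post; the figure is the Nvidia/OpenAI deal size quoted above, and the accounting treatment is a deliberately simplified illustration, not a claim about how either company actually books these transactions:

```python
# Toy model of circular (vendor-financed) dealmaking: the vendor invests in
# the customer, and the customer spends that money on the vendor's products.
# Headline revenue goes up, but very little new cash actually changes hands.

def round_trip(vendor_investment: float, product_purchase: float) -> dict:
    """Return what the vendor books as revenue vs. its net cash movement."""
    booked_revenue = product_purchase                  # recorded as sales
    net_cash = product_purchase - vendor_investment    # cash actually gained
    return {"booked_revenue": booked_revenue, "net_cash": net_cash}

# Nvidia/OpenAI-style round trip: $100bn invested, $100bn of chips bought back.
deal = round_trip(vendor_investment=100e9, product_purchase=100e9)
print(deal)  # $100bn of booked revenue, but net cash of zero
```

On this (simplified) picture, the same $100bn can show up as growth on the vendor’s income statement while the net position of the pair of firms barely moves, which is exactly the kind of opacity about business fundamentals the Enron comparison points at.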

This is something which the FT journalist Rana Foroohar drew attention to years ago in Don’t Be Evil. This is how Readwise summarises my highlights and notes on her book:

Her systemic-risk argument:

  • Big Tech has become “too big to fail” by amassing giant, opaque bond portfolios and acting like unregulated banks (issuing cheap debt, buying higher-yield corporate bonds, anchoring deals). If those assets are downgraded or dumped, markets could be toppled.

  • Their dominance across critical “infrastructure” (ads, cloud, payments/logistics, data) makes them systemically important beyond finance, with failures or abuses spilling into politics, media, healthcare, and national security.

  • The tech–finance convergence (using privileged data to price and push credit/insurance) amplifies information asymmetries and offloads tail risks to the public, with the state likely as insurer of last resort.

So it’s not just big balance sheets; it’s balance sheets + market centrality + regulatory gaps that create new systemic risk.
