
Some thoughts on how and why AI will change over the next few years

I’m increasingly worried by the sector’s tendency to imagine ‘AI’ as if it were a relatively stable thing driven only by its own internal logic. The commercial logic of the AI labs is likely to change profoundly within the next 1–2 years, when the investment bubble bursts and/or the two major independent labs go to IPO. This might even be precipitated soon if the Iran crisis makes the data-centre build-out economically unviable! In either case, investors will no longer tolerate the rate at which the labs are burning capital.

This means the pricing structure will inevitably change, and products will be designed manipulatively around more aggressive forms of user retention. The products we have now, at the prices at which they are now offered, will not be the products we have in 2028. This is the most salient feature of the AI landscape, yet I see almost no one in higher education thinking about what it means for practical decision-making in universities.

(I suspect part of the problem is that the observation “AI is changing so fast” is used to license not actually analysing what those changes are and what we do in fact know about the mechanisms driving them. I’ve come to expect that whenever someone makes that observation out loud, it’s a sign the discussion is about to step back from the complexity of the issues.)

I was thinking about counter-arguments to this position. The obvious one is that there might be structural reasons, and/or means, to sustain the subsidy beyond the concerns of individual firms and investors. Could it be sustained for the political power it gives rise to*? Could it be sustained because neoliberal capitalism would fall without it, such that there could be a disorganised way of kicking the can down the road, or even repeated state bailouts?

*Thanks to Diana Berzina for pointing this out