I’m increasingly worried by the sector’s tendency to imagine ‘AI’ as if it were a relatively stable thing driven only by its own internal logic. The commercial logic of the AI labs is likely to change profoundly within the next 1–2 years, when the investment bubble bursts and/or the two major independent labs go to IPO. This could even be precipitated sooner if the Iran crisis makes the data-centre build-out economically unviable. In either case, investors will no longer tolerate the rate at which these firms are burning capital.
This means the pricing structure will inevitably change, and products will be designed more manipulatively, built around more aggressive forms of user retention. The products we have now, at the prices at which they are now offered, will not be the products we have in 2028. This is the most salient feature of the AI landscape, and I see almost no one in higher education thinking about what it means for practical decision-making in universities.
(I suspect part of the problem is how the observation “AI is changing so fast” is used to license not actually analysing what those changes are, or what we do in fact know about the mechanisms driving them. I’ve come to expect that every time someone makes that observation out loud, it’s a sign the discussion is about to step back from the complexity of the issues.)
I’ve been thinking about counter-arguments to this position. The obvious one is that there might be structural reasons, and/or structural means, to sustain the subsidy beyond the concerns of individual firms and investors. Could the subsidies be sustained for the political power they give rise to*? Could they be sustained because neoliberal capitalism falls without them, whether through a disorganised kicking of the can down the road or even through repeated state bailouts?
*Thanks to Diana Berzina for pointing this out
