I was sceptical of this suggestion by L.M. Sacasas when I first read it, but my mind keeps circling back to it as I think about the macrosociology of AI. It’s particularly interesting to think of the mythology of existential risk in these terms, with AGI (losing control of the technology) standing in for precisely this expiration of modernity:
AI is apocalyptic in exactly one narrow sense: it is not causing, but rather revealing the end of a world. We get our word apocalypse from a Greek word meaning “to reveal, to disclose, or to uncover.” What I am suggesting is that AI, as it is being developed, deployed, and hyped (and criti-hyped), forces us to reckon with the fact that modernity is expiring, and it is expiring precisely to the degree that it no longer serves the interest of and is at various points, particularly in its techno-economic dimensions, openly hostile to the human person. As a second nature, the culture of technological modernity, while undoubtedly improving the lot of humanity in important ways, has become, in other respects, inhospitable to our species. AI can thus be read as a last ditch effort to shore up the old decrepit structures and to double down on the promise of scale, efficiency, rationality, control, and prediction. It can also be read as an effort to extend the logic of late modernity to a point of absurdity. So where we see a proposed or actual application of AI, we might do well to ask how it relates to the end of the world we have called modern.
https://theconvivialsociety.substack.com/p/apocalyptic-ai
It leaves me with a vivid image of tech billionaires holed up in a mountain fortress in New Zealand, possibly with weaponised Furbies standing in for armed guards after Douglas Rushkoff led them to doubt the loyalty of human security staff in a post-collapse situation.