I’m fully on board with this position from Holly Lewis, particularly the part I’ve highlighted. There’s a quite detailed proposal here:
Beyond vibes, AI realists would be committed to grasping how the technology works, contextualizing it, and examining our intuitions, whether they be to vilify or idealize, to mystify or oversimplify. They would understand that models are not just commodities or platforms, but the unfolding outcome of the systemic logic of embedded material social relations. Large models are created for the purpose of profit maximization and trained on the data that humans have generated, ideologically, as subjects making sense of their lives within capitalist social relations. AI realism would measure the impact of machine learning in terms of months and years, rather than speculate about decades, centuries, and millennia. AI realism would entail intellectual humility, admit its own errors, and forgo wild leaps of logic without denying that the world is growing increasingly strange even as it becomes more predictable.
https://spectrejournal.com/toward-ai-realism/
I prefer this phrase to Dan McQuillan’s use of ‘AI realism’ to refer to the inability to imagine alternatives to the rapid unfolding of AI infrastructure, as well as the sense of inevitability and passivity to which this gives rise. It’s riffing off Mark Fisher, who was in turn riffing off Žižek, who was in turn riffing off Jameson, whose original quote was actually expressing a slightly less epochal thought: “it seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism; perhaps that is due to some weakness in our imaginations”.
As far as I can tell this is from the introduction to 1994’s The Seeds of Time (a collection of lectures), and I see its sense as slightly different to how it tends to be misquoted and interpreted. I’m a huge fan of Fisher in many respects, but I don’t think his conception of capitalist realism as a “pervasive atmosphere” which acts “as a kind of invisible barrier constraining thought and action” was a useful or interesting concept. It was an attempt by Jameson to diagnose a vibe, which Žižek and then Fisher in turn constituted as a vibe. It became a catchphrase, a rallying point, rather than an analytical concept; one which I think sits uneasily with Fisher’s other theoretical commitments, particularly the later acid communism work. If we go back to the previous paragraph of Jameson’s introduction, he says something interesting:
Even after the “end of history,” there has seemed to persist some historical curiosity of a generally systemic rather than a merely anecdotal kind: not merely to know what will happen next, but as a more general anxiety about the larger fate or destiny of our system or mode of production as such – about which individual experience (of a postmodern kind) tells us that it must be eternal, while our intelligence suggests this feeling to be most improbable indeed, without coming up with plausible scenarios as to its disintegration or replacement.
Leaving aside the problem of carrying across cultural-diagnostic concepts from the ‘end of history’ to the 2020s, when history is very much with us, Jameson is pointing to a mismatch between our ‘individual experience’ (what is intuitive and plausible) and our ‘intelligence’: we recognise that what is intuitive seems intellectually implausible, but we’re incapable of doing anything substantive with that disjuncture. There’s a sense of intellectual exhaustion, but also a clash which could be productive, if only we could find some way through the impasse. In contrast, I think Fisher presents us with a suffocating sense of inertia, without that spark moving us forward. In fact, I would go as far as to suggest you can see his own recognition of this in his late work on collective joy.
So what does this mean for AI? It means we do need to analyse the role of political and economic power in creating a sense of inevitability, as well as the challenges involved in finding ‘plausible scenarios’ of ‘disintegration or replacement’, to use Jameson’s words. But the concepts we use matter for that, with regard to the three tasks which Lewis lays out in their essay:
(1) building a framework for coherent conversations about artificial intelligence, (2) reviewing how the most advanced model architectures roughly function, and (3) detailing trends in machine learning so that we know what we’re swimming in, what’s on the immediate horizon, and what’s as unreachable as the stars
One of the reasons I was so ambivalent about Dan’s book is that the first half seemed to be going in exactly this direction, until it switched register in the second half to become quite different and, in my view, much less interesting. It might be that AI abolitionists are correct in their response, but there’s an unmoored quality to much of the associated analysis which I struggle with intellectually and politically. Furthermore, I struggle to see how what Lewis calls ‘AI debunkers’ can sustain their impulse to explain away the capacities of these systems without collapsing into an anti-empiricism which I think is an intellectual dead end (other than by, as Lewis points out, moving the goalposts to make the problem vanish). This doesn’t mean credulously accepting the hype, but it doesn’t mean inverting the hype as a matter of principle either.
I’ve struggled at points to understand where I stand in these debates, because I’ve found myself with sharply contrasting intuitions to plenty of people I usually agree with. But this essay has helped me understand the position I find myself in, where (a) I think it’s important to engage with the sociotechnical reality as it rapidly unfolds, in a way that connects critical ideas with practical reasoning, e.g. within universities; (b) this engagement can help mitigate existing problems and slow the growth of new ones, even if there are significant risks of normalisation; and (c) social and political responses need to be grounded in AI realism with a historical sensibility, e.g. recognising that the proliferation of LLMs in education is facilitated by past waves of platformisation.
