One of the curious things about the term ‘AI’ is how nebulous it is. Kate Crawford explains this well on pg 8 of Atlas of AI. The point is not so much that there’s a real scientific terminology obscured by vacuous public misunderstandings, but rather that ‘artificial intelligence’ acts as a capacious discursive placeholder through which interactions between science and society are mediated:
It’s worth noting that the term “artificial intelligence” can create discomfort in the computer science community. The phrase has moved in and out of fashion over the decades and is used more in marketing than by researchers. “Machine learning” is more commonly used in the technical literature. Yet the nomenclature of AI is often embraced during funding application season, when venture capitalists come bearing checkbooks, or when researchers are seeking press attention for a new scientific result. As a result, the term is both used and rejected in ways that keep its meaning in flux.
This goes some way to explaining why threats are imputed to ‘artificial intelligence’ rather than ‘machine learning’. How menacing can it be that machines are learning things in a narrowly statistical manner? In contrast, ‘artificial intelligence’ suggests the potential for autonomous agents acting in the world, raising the prospect that they might act against the interests of human agents. A similar equivocation is built into the discursive structures, one which plagues David Chalmers in his discussion of ‘virtual reality’, as I summarised here:
Part of my scepticism about Reality+ is how it naturalises the techno-hype of 2022 concerning ‘the metaverse’, by equivocating between empirically specific technological developments and putative future outgrowths with characteristics which would satisfy the conditions of pre-existing thought experiments. Phrases like ‘in principle’ and ‘we will eventually’ illustrate how even the equivocations cut across the empirical and the logical without paying attention to the difference between them. By bringing the cultural authority of the analytic techno-philosopher to the subject, he’s further inflating a bubble of expectation which conflates the state of the technology now with the world-changing implications it may have in the future.
https://markcarrigan.net/2023/02/19/what-does-a-sociological-rather-than-philosophical-approach-to-thought-experiments-look-like-the-example-of-the-mat/
John Herman captures this in an extremely entertaining way in a recent piece about the writers’ strike (HT Rob Horning). He draws attention in particular to how AI can be mobilised as a threat to people:
It’s AI as a reckoning, a punisher, a revealer of frauds. It’s AI as a future vindicator of their hunches about how the world works, and as an extension of their politics. It’s AI as a cleansing force that humbles your enemies and proves you right — AI as economic rapture. It’s AI as your army-in-waiting just over the horizon, your punishing angel, or maybe just as the thing that’s going to embarrass the people who annoy you online. A lot of sunnier AI speculation is clearly wish fulfillment, and so is this. AI is my big, strong friend, and he’s going to beat you up.
This is exactly why I get increasingly frustrated with critical social scientists who point out, in a deflationary mode, that ‘generative AI’ is a hype cycle. No one denies this. But not only does a hype cycle not preclude the existence of underlying socio-technical capabilities which license the hyped claims (in a way that, say, web 3.0 just didn’t), this framing should also draw our attention to the hype as a material factor acting within the world, as opposed to attributions of hype which lead us to pay less attention to the cultural construction of a new technology or to regard the hype as a narrowly discursive matter. There’s a cultural political economy at work here which we ignore at our peril.