The unintelligent agency of generative AI

This is a useful overview by Luciano Floridi of how “the staggering growth of available data, quantity and speed of calculation, and ever-better algorithms” mean that large language models (LLMs) are able to “do statistically – that is working on the formal structure, and not on the meaning of the texts they deal with – what we do semantically” (2-3). The problem is that “when they do not work, they fail catastrophically, in the etymological sense of vertical and immediate fall in the performance” (4).

They make up texts, answers, or references when they do not know how to answer; make obvious factual mistakes; sometimes fail to make the most trivial logical inferences or struggle with simple mathematics; or have strange linguistic blind spots where they get stuck.

Pg 5

He describes them as ‘fragile’ and ‘brittle’, which usefully captures the disjunct between the eerily capable feeling which ChatGPT can provoke with one prompt and the immediate descent into bullshit prompted by another. He takes issue with the ‘stochastic parrot’ metaphor, arguing it underplays the capacity which LLMs have for shallow synthesis. However, where this gets really interesting is his consideration of the distributed agency which LLMs could facilitate:

the emergence of LEGO-like AI systems, working together in a modular and seamless way, with LLMs acting as a AI2AI kind of bridge to make them interoperable, as a sort of “confederated AI”.

Pg 8

He cites the example of ‘interaction’ between ChatGPT and Wolfram Alpha and the emergent opportunities this opens up. This idea of federated agency between AIs is one I feel will be rattling around in my head a lot over the coming days. It’s more thought-provoking than his claim that reinforcement learning for ChatGPT (GPT-3.5) represents an ontological leap in which, for the first time, “it is successful and can “learn” and improve its behaviour without having to be intelligent in doing so” (9). “We have liberated agency from intelligence” is a catchy line, but I can feel the STS scholars seething and in this case I entirely agree with them.
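To make the ‘confederated AI’ idea concrete: the pattern Floridi gestures at is essentially an LLM acting as a dispatcher between specialised modules. Here is a minimal toy sketch of that routing pattern — everything in it (the function names, the crude regex-based routing) is my own illustrative invention, not the actual ChatGPT–Wolfram Alpha integration:

```python
# Toy sketch of the "confederated AI" / AI2AI-bridge pattern:
# a language-model-like front end routes each query to whichever
# specialised module can actually handle it.

import re

def symbolic_solver(query: str) -> str:
    """Stand-in for a formal engine like Wolfram Alpha: exact arithmetic."""
    expr = re.sub(r"[^0-9+\-*/(). ]", "", query)  # strip everything non-arithmetic
    return str(eval(expr))  # fine for a demo; never eval untrusted input

def text_generator(query: str) -> str:
    """Stand-in for an LLM's fluent-but-unverified text generation."""
    return f"A plausible-sounding answer about: {query}"

def bridge(query: str) -> str:
    """The LLM-as-bridge: decide which module the query belongs to."""
    if re.search(r"\d+\s*[+\-*/]\s*\d+", query):  # looks like arithmetic
        return symbolic_solver(query)
    return text_generator(query)

print(bridge("What is 12 * (3 + 4)?"))         # delegated to the solver: 84
print(bridge("Summarise Floridi's argument"))  # handled by the generator
```

The point of the sketch is the division of labour: the ‘semantic’ module produces fluent text without guarantees, while the ‘formal’ module is reliable only within its narrow domain, and the bridge’s job is purely to decide which one a query belongs to.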

This isn’t a free-standing system which improves in a self-directed way; it’s a hugely expensive real-time engineering exercise, constantly monitored and directed by firms with billions of dollars at stake. These are not ‘artificial agents’ roaming the internet but consumer-facing platforms which facilitate mass engagement with what is undoubtedly a technological innovation, albeit one reflecting a phase of capital accumulation as much as a technical breakthrough. The agency is distributed through the system, not ‘in’ ChatGPT as a free-standing entity.

I do like his brief list of some of the philosophical issues posed by generative AI though:

the relationship between form and its syntax, and content and its semantics; the nature of personalisation of content and the fragmentation of shared experience (AI can easily produce a unique, single novel on-demand, for a single reader, for example); the concept of interpretability, and the value of the process and the context of the production of meaning; our uniqueness and originality as producers of meaning and sense, and of new contents; our ability to interact with systems that are increasingly indiscernible from other human beings; our replaceability as readers, interpreters, translators, synthesisers and evaluators of content; power as the control of questions, because, to paraphrase 1984, whoever controls the questions controls the answers, and whoever controls the answers controls reality
