The ontological implications of generative AI

One of the obvious questions raised by the impending ubiquity of large language models concerns the feedback loops likely to ensue. To what extent will future iterations of GPT be trained on outputs from past iterations of GPT? Even if there were an intention to avoid this, the diffusion of the consumer-facing technology makes it difficult in practice. The always insightful L. M. Sacasas points to the deeper ontological ramifications of this:

Thinking about large language models like GPT-3 from this perspective, it seems as if they represent a final triumph of mathematical representations over language. If writing amounted to, as Walter Ong once put it, the technologizing of the word (and print its mechanization), then large language models amount to the mathematization of the word. Time will tell what this will mean. I will venture no predictions. I will only say that for Arendt, speech was bound up with natality—the promise of new beginnings, the realm of actions which yield unpredictable consequences. To render speech the outcome of prediction seems to bind it up with mortality instead, with the expected end rather than the unexpected beginning.

https://theconvivialsociety.substack.com/p/language-under-digital-conditions

There’s a connection here to concerns about the declining symbolic efficiency of language. Not only do “the same signifiers signify differently in different contexts” but we are continually reminded of this fact as misunderstandings multiply in an ever more polarised social space. It feels as if we live in different worlds, increasingly incommensurable to the extent that those with whom we disagree come to appear like lunatics locked into private reveries. I need to revisit Zizek’s account of this in The Ticklish Subject, as I was never entirely persuaded by the mechanism. But I think it’s an apt description of a social trend which grows more stark with each passing year.

I wonder if generative AI might to some extent mitigate this decline of symbolic efficiency, given that it ontologises what Rob Horning describes as the ‘social average’: taking a statistical inference over an ever-increasing dataset and concretising it as an output which exists and circulates within the world, and which, as above, feeds back into future datasets as an element of that social average. To the extent that there’s a reliable convergence between prompts for GPT and its outputs, on which the possibility of expertise in human/system creation logically depends, it contributes to symbolic efficiency through the ‘mathematization of the word’. But conversely the ease with which factual-seeming representations can be produced, accelerating the circulation of factual claims in a world already drowning in them, risks pulling things in the opposite direction.
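
The feedback loop itself is easy to make concrete. Here is a minimal toy sketch (my own illustration, not anything from Horning or Sacasas): treat a ‘model’ as a Gaussian fitted to its training corpus, let each generation’s outputs become the next generation’s training data, and assume the model slightly undersamples its own tails, as generative models tend to favour their high-probability outputs. The diversity of the corpus then collapses, generation by generation, towards the social average.

```python
# Toy model of the training feedback loop. Assumptions: a "model" is just
# a Gaussian fitted to its corpus, and generation discards samples beyond
# two standard deviations (a crude stand-in for a model favouring its own
# high-probability outputs). Nothing here comes from the post itself.
import random
import statistics

random.seed(0)
corpus = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # generation 0: "human" text

for generation in range(1, 8):
    mu = statistics.fmean(corpus)
    sigma = statistics.stdev(corpus)
    # The model produces the next corpus by sampling its fitted
    # distribution, rejecting anything further than 2 sigma from the mean.
    corpus = [x for x in (random.gauss(mu, sigma) for _ in range(10_000))
              if abs(x - mu) < 2 * sigma]
    print(f"generation {generation}: mean={mu:+.3f} stdev={sigma:.3f}")
```

Run it and the printed standard deviation shrinks geometrically (truncating a normal distribution at two sigma cuts its standard deviation to roughly 0.88 of the original), which is the statistical sense in which the ‘social average’ concretises itself through the loop.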
