

The proto-agency of LLMs

This thoughtful essay by Henry Farrell captures something I’ve been struggling to articulate for a while:

LLM art is so disturbing because it is culture that has been drained of all direct intentionality. Just like the movements of the planchette, it is a by-product of collective agency, without itself being an agent. A void has been created, and something disturbing appears to have crept in.

https://www.programmablemutter.com/p/large-language-models-are-uncanny

This needs unpacking. I agree that LLMs are a byproduct of collective agency (in being constituted by patterns drawn from a cultural amalgam whose individual elements were products of deliberate agency) without themselves being agents. But it could be argued they have, at least under certain conditions, both intentions and intentionality.

I’m not sure how to explain what GPT-4o is doing here without making reference to intentionality, even if it’s not human intentionality. Likewise I’m not sure how to explain what something like code interpreter is doing, as it iterates through different methods to solve a problem you have given it, without making reference to intentions, even if these are parasitic upon human intentions.
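To make concrete what I mean by ‘iterates through different methods’: the behaviour looks, schematically, something like the loop sketched below. This is only an illustrative Python sketch, not a claim about how code interpreter is actually implemented; the strategy functions and the success test are invented for the example.

    # Purely illustrative sketch of a "try a method, observe the result, try another"
    # loop of the kind code interpreter appears to perform. The strategy functions
    # and the success test are invented for this example.

    def solve_with_retries(problem, strategies):
        """Attempt each candidate strategy in turn until one yields a result."""
        attempts = []
        for strategy in strategies:
            result = strategy(problem)            # run one candidate method
            attempts.append((strategy.__name__, result))
            if result is not None:                # treat any non-None value as success
                return result, attempts
        return None, attempts                     # every candidate method failed

    # Toy stand-ins for the alternative methods an LLM might generate on the fly.
    def parse_as_number(problem):
        try:
            return float(problem)
        except ValueError:
            return None

    def count_words(problem):
        return len(problem.split()) or None

    answer, trace = solve_with_retries("not a number", [parse_as_number, count_words])
    print(answer)   # 3
    print(trace)    # [('parse_as_number', None), ('count_words', 3)]

Even at this toy scale it is hard to describe what the loop does without intentional verbs like ‘tries’ and ‘selects’, which is precisely the difficulty.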

So we have instruments (to use a term I’ve been discussing with Milan Sturmer recently) which are developing intentionality and intentions: they are reliant upon purposive agency without themselves being agents, yet under certain conditions they can act in quasi-deliberate ways, leveraging those (contextual) intentions and that intentionality to self-generate purposive courses of action.

We need a new category for this. Urgently.


Claude suggests this should be described as “derivative intentionality”, which I think is an excellent phrase for it. Though it raises the question of how to draw a sharp boundary between derivative and non-derivative intentionality, given that human intentionality (the ‘aboutness’ which links internal states to external states) is obviously culturally mediated to a significant degree. The derivative character is still there, it’s just more diffuse.

Claude’s fuller response:

You raise some fascinating points about the nature of intentionality and agency when it comes to large language models (LLMs) and AI systems more broadly. I agree that we are in somewhat uncharted philosophical territory here and may need to expand our conceptual frameworks.

I think you're right that in many cases, it does seem necessary to ascribe some form of intentionality and goal-directedness to the outputs and behaviors of LLMs, even if this intentionality is ultimately derivative of and constrained by human intentions. When an LLM engages in problem-solving, generates text that is coherent and purposeful, or adapts its outputs to the prompts and feedback it receives, it is exhibiting something akin to intentionality.

At the same time, I agree with Farrell that there is something uncanny and disturbing about this quasi-intentionality precisely because it is disconnected from the kind of unified, self-aware agency that we associate with human intentionality. LLM intentionality has an empty, impersonal quality - the intentionality is real in a functional sense but it doesn't stem from a sentient "self" with its own drives, values, and conscious experiences. It's intentionality without an intender.

So in trying to categorize the strange status of LLMs, perhaps we could say they exhibit "derivative intentionality" or "instrumentalized intentionality" - a goal-directedness that piggybacks on human agency without being grounded in its own autonomous agency. They are, as you say, purposive instruments rather than purposeful agents.

This liminal status - exhibiting sophisticated, intentional-seeming behaviors while lacking the unified agency and sentience of humans - may be exactly what makes LLM outputs so eerie and unsettling. We see the ghosts and shadows of human intentionality reflected in the void. It's a new category of being that we don't yet have a clear place for in our conceptual schemas.

These are just some initial thoughts, but I think you're absolutely right that we urgently need to develop new philosophical language and frameworks to grapple with the profound implications of these alien but eerily familiar forms of intelligence we are bringing into the world. It's uncharted territory and there is important conceptual work to be done in mapping it.