Raiding the inarticulate since 2010


Some sociological thoughts on responsible AI

I’m instinctively nervous about the language of responsible AI because it implies control over the process. When it comes to consumer-facing LLMs, which is the only part of this I claim any expertise over, the possibility of that control is foreclosed on two sides: organisationally and sociotechnically.

In the original phase of their diffusion I was actually relatively relaxed, because with early models (say GPT 4, Claude 2 and earlier) it was necessary to exercise intellectual labour in order to get a good enough output. The kernel of truth buried in the otherwise unhelpful language of ‘prompt engineering’ was that thought was necessary in order to get the model to be useful. You had to be clear about what you wanted, explain what you wanted and use that clarity to help refine the output. There was a certain degree of intellectual engagement encountered as a functional necessity. In contrast GPT 5 and Opus 4 onwards are adept at inferring intention from isolated, incoherent or fragmented prompts. It’s now possible to get a good enough output with very little thought. So the product-side necessity of exercising control is diminishing rapidly.

However we’re seeing a broader turn towards models which prompt users. GPT Pulse, the suggestion-heavy post-training of GPT 5 and Llama’s proactive engagement all herald a near future in which our interaction with LLMs is increasingly driven by the LLM itself. The politics of this are complex, as are the downstream effects, but it does mean we’re entering a landscape where responsibility for how the LLM is used will increasingly be shared with the LLM itself. Once you consider the commercial imperatives likely to drive this, it seems obvious we’ll enter a situation where it’s increasingly difficult, by design, for end users to exercise responsibility over LLMs.

In an organisational context, existing processes of workload intensification are intersecting with AI hype and post-pandemic organisational strain to create pressure for workers to increase their own productivity. This is either filtered through the lens of quasi-autonomy, such that individuals take up the challenge themselves in a private capacity, or imposed through explicit mandates (as at a growing number of tech firms) to publicly demonstrate that you’re using LLMs to increase your own productivity. Under these circumstances responsible use gets diffused as a matter of organisational politics.

There clearly needs to be some sense of responsibility for end users when it comes to LLMs. But how do we sustain a notion of responsibility which avoids individualising it in the face of these sociotechnical and organisational challenges, recognising the deep and profound difficulties which users are going to face in sustaining responsible use of the models? For a long time I thought professional user cultures (e.g. a scholarly culture of LLM use) could be an organisational bulwark against these tendencies. But increasingly I realise how deeply naive that view was, largely because I failed to distinguish between agency qua human being and agency qua worker. It’s the latter capacity which will win out: if productivity expectations increase within organisations, enough people will act in these directions that it in turn creates network pressure to conform or be left behind.

At least conceptually, an adequate notion of responsibility needs to couple these dimensions together (context<->action and infrastructure<->action) in order to posit a potential for action relevant to the circumstances which the user confronts. But that line of argument leads me to the conclusion that responsible use of LLMs will become ever more costly, with ever greater opportunity costs, over the coming years.