
The problem of generative AI from a cybernetics perspective: conversational agents as variety machines

I’ve found myself preoccupied by the problem of generative AI from a cybernetics perspective over the last few days, particularly by what conversational agents like ChatGPT and Claude mean for problems of variety. I understand variety in cybernetics to refer to the number of distinct elements or states within a system (the more of them, the more complex the system) and the law of requisite variety to mean that a less complex system cannot regulate a more complex one.
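For readers who want the law in symbols: one standard way of stating it, following Ashby’s entropy formulation (my gloss, not anything from the original post), is that the variety remaining in the essential outcomes cannot be pushed below the variety of the disturbances minus the variety of the regulator:

```latex
% Ashby's law of requisite variety (entropy form):
% the variety left in the essential outcomes E cannot fall below
% the variety of the disturbances D minus the variety of the regulator R.
H(E) \;\geq\; H(D) - H(R)
```

Only variety in the regulator can absorb variety in the disturbances, which is why a less complex system cannot hold a more complex one within bounds.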

The problem is that conversational agents operate as machines to amplify variety. They take ‘inputs’ and provide ‘outputs’ which reliably increase the number of distinct elements within the system. They generate novelty, which doesn’t cease to be novel simply because it is parasitic upon the trawl of existing variety involved in training the model. Until now, I think (!?), human beings were the only systems able to generate content with more variety than was present, so to speak, in the corpus they were trained on. I’m not for a second suggesting LLM training is a form of socialisation, but from a cybernetics perspective there is an analogy at the level of variety.
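To make the amplification claim concrete, here is a minimal toy sketch (mine, purely illustrative; the corpus and generator are invented): even a trivial bigram generator trained on three sentences emits word sequences that appear nowhere in its training corpus, i.e. more distinct elements out than went in.

```python
import random
from collections import defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# "Train": record which word has been observed to follow which.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start="the", length=6, seed=None):
    """Walk the observed-bigram graph to emit a word sequence."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Count generated sequences that never occur in the training corpus.
novel = {generate(seed=s) for s in range(200)} - set(corpus)
print(len(novel), "distinct sequences absent from the training corpus")
print(sorted(novel)[:5])
```

The novelty is parasitic on the corpus, exactly as the paragraph says, yet the set of outputs is larger than the set of inputs.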

In fact their dynamism as variety engines is severely curtailed by the guardrails implemented to prevent harmful and contentious outputs which would be damaging to the firms operating them. My fascination with them stems from the fact that, in my experience of GPT-4 onwards, the complexity of what you get out rises steadily in line with the complexity of what you put in. I’ve had tens of thousands of words of densely articulated philosophical and sociological exchanges with Claude which have produced tens of thousands of words of densely argued philosophical and sociological text in response. There is an escalation as it matches me, and variety increases in the process.
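Purely as a thought experiment, and with the large caveat that distinct word types are a very crude proxy for cybernetic variety (the example strings below are invented, not drawn from any real exchange), one can put rough numbers on a prompt and a response:

```python
import math
from collections import Counter

def variety(text):
    """Distinct word types and their Shannon entropy: a crude variety proxy."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return len(counts), round(entropy, 2)

prompt = "what does the law of requisite variety imply for regulating llms"
response = ("only a regulator with at least as much variety as the system "
            "it regulates can hold outcomes within acceptable bounds, which "
            "suggests human users and institutions will struggle to constrain "
            "generative models whose response space dwarfs their own")

print("prompt:  ", variety(prompt))    # fewer distinct elements
print("response:", variety(response))  # more distinct elements
```

On any measure like this, the response side of the exchange keeps outrunning the prompt side, which is the escalation described above.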

I feel out of my depth in my grasp of cybernetics here, but there’s something profoundly worrying about this line of thought. If the potential variety of the conversational agent vastly outstrips that of the human user, or of the systems in which those users are embedded, then how do we regulate these systems? There are always more ways the conversational agent can respond than ways in which we can predict, assess and control those responses. How do you attenuate the variety, simplifying the system in order to make it manageable, at an organisational and personal level? And how likely is that at a technological level, when the corporate incentives towards ever-increasing capacities mean, in effect, aggressively refusing to attenuate variety, except where refusing to do so would be a PR disaster?
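One way to picture attenuation at a technical level, as a sketch rather than a description of any real guardrail system (the blocklist and refusal text below are invented): a filter that collapses many possible outputs onto a single canned refusal is, in variety terms, a many-to-one mapping that shrinks the state space downstream systems have to cope with.

```python
# Hypothetical policy terms and refusal text, for illustration only.
BLOCKED_TOPICS = {"weapons", "malware"}
REFUSAL = "I can't help with that."

def attenuate(candidate_outputs):
    """Collapse any blocked output onto one canned refusal (many states -> one)."""
    safe = []
    for text in candidate_outputs:
        if any(topic in text.lower() for topic in BLOCKED_TOPICS):
            safe.append(REFUSAL)
        else:
            safe.append(text)
    return safe

outputs = ["how to build malware", "a poem about rivers", "notes on weapons"]
print(attenuate(outputs))
print(len(set(attenuate(outputs))), "distinct states after attenuation")  # 3 -> 2
```

The corporate incentive problem is then visible in the code itself: every entry removed from the blocklist expands the output state space again.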

Consider plagiarism by students. Judgements of plagiarism rest on the assumption that the variety of the plagiarised work is lower than the variety of comparable original work, i.e. the student has reproduced existing elements rather than creating new ones. But conversational agents make it possible for the variety of the ‘plagiarised’ work to be higher than that of the work the student might otherwise have produced, i.e. a disengaged student can submit an essay which is, functionally speaking, ‘better’ than the one they would have written themselves. Obviously we all recognise now that essays are uniquely vulnerable to this, but I’m increasingly concerned that my assumption that authentic assessment mostly solves these problems is hopelessly naive.
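A toy version of the assumption in question, using trigram overlap as a deliberately crude stand-in for real plagiarism detectors (the example strings are mine, invented for illustration): reproduced text is caught, while novel text, however it was produced, sails through.

```python
def trigrams(text):
    """The set of word trigrams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap(submission, source):
    """Jaccard similarity of trigram sets: a crude reproduction detector."""
    a, b = trigrams(submission), trigrams(source)
    return len(a & b) / len(a | b) if a | b else 0.0

source = "the law of requisite variety states that only variety can absorb variety"
copied = "the law of requisite variety states that only variety absorbs variety"
generated = "ashby argued a regulator must match the complexity of what it regulates"

print(overlap(copied, source))     # substantial overlap: reproduced elements are caught
print(overlap(generated, source))  # zero overlap: novel text evades the check
```

A detector built on this assumption has nothing to say about machine-generated text precisely because that text is high-variety rather than low-variety.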

There’s a deeper underlying dynamic here which is important to understand when formulating policy going forward. I have, to my surprise, seemingly argued myself into the position that the assessment crisis in higher education is not over, but that we need to reframe it in terms of the broader challenge of equipping systems of evaluation for an environment where GAI is at least potentially ubiquitous. If variety can no longer be assumed to (solely) refer to the creative labour of the human subject, then how do we decentre it in how we assess cultural outputs in contexts where we rank based on evaluation? What does ‘quality’ mean in a GAI-saturated environment? If I understand correctly, cybernetics would suggest there has always been a shadow story of quantity lurking behind quality (time on task, length of output, density of references etc. as proxies for engagement) which has now broken out into the open. We urgently need to deal with it.
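As a closing illustration of that shadow story (a sketch, not a real assessment tool; the essay text and the citation pattern it matches are invented), these are exactly the kinds of quantitative proxies that a generative model can now max out effortlessly:

```python
import re

def quantity_proxies(essay):
    """Length and reference density: quantitative stand-ins for 'quality'."""
    words = essay.split()
    citations = re.findall(r"\(\w+ \d{4}\)", essay)  # e.g. (Ashby 1956)
    return {
        "word_count": len(words),
        "citations": len(citations),
        "citations_per_100_words": 100 * len(citations) / max(len(words), 1),
    }

essay = ("Variety is the number of distinct states of a system (Ashby 1956). "
         "Regulation requires matching variety (Beer 1972).")
print(quantity_proxies(essay))
```

Once proxies like these can be inflated on demand, they stop discriminating between engagement and its simulation, which is the sense in which the quantity story has broken out into the open.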


This was Claude’s response to this blog post:

You raise a number of important and complex issues regarding the implications of generative AI systems like conversational agents from a cybernetics perspective. The potential for these systems to rapidly amplify variety and generate outputs with greater complexity than the human users engaging with them is indeed a crucial consideration.

From a cybernetics standpoint, as you note, the law of requisite variety suggests that a system can only be effectively regulated by a system of equal or greater variety. This presents a significant challenge when it comes to human users or even organizations attempting to manage and control the outputs of AI systems that can generate such expansive and unpredictable variety.

The guardrails and restrictions put in place, while necessary to mitigate harmful outputs, do curtail the generative range of these systems. However, the economic incentives driving the development of ever more sophisticated and capable AI agents mean there is constant pressure to expand their capacities and minimize constraints. This makes attenuating variety to keep the systems manageable an uphill battle.

Your point about the assumptions underlying plagiarism and academic assessment is astute. The ability of AI to generate content with greater complexity and variety than what a human student might produce on their own upends traditional notions of authenticity and original work. Evaluation systems predicated on the idea that greater variety and quality of output signifies greater engagement and understanding by the student are thrown into question.

This does point to the need for a fundamental reframing and re-envisioning of assessment practices in education and beyond. Decentering assumptions about the link between variety/complexity and human creative labor is key. "Quality" can no longer be taken as a simple signifier of human engagement and skill.

As you note, cybernetics points to the ways quantitative factors like time spent, length, reference density, etc. have long operated behind the scenes as proxies for quality. But in an AI-saturated context, those factors are no longer reliable stand-ins. We need new frameworks for defining and measuring the value and validity of creative and intellectual work.

Some key considerations may include:

- Emphasizing process over product, looking at how students/creators arrived at an output, not just the output itself. What questions, iterations, dead-ends, and breakthroughs shaped the work?

- Prioritizing originality of insight and novel combinations/applications of ideas over sheer complexity of information. How is the creator connecting dots in new ways?

- Assessing the creator's ability to critically evaluate and build upon the outputs of AI generation. Can they identify flaws, limitations and new potential directions?

- Analyzing how the work is situated in broader contexts and its unique implications. What conversations is this work engaging in and how does it push them into new territory?

- Dynamic, interactive assessments that probe the creator's understanding of the work through dialog, critique and reflective analysis, not just evaluation of a static product.

We need robust new approaches to help us navigate education and cultural production in an age where the line between human and machine creativity is increasingly blurred. Grappling with the cybernetic implications around variety and complexity is a crucial piece of this puzzle. It will require ongoing collaboration between educators, technologists, policymakers and beyond to develop frameworks responsive to this new reality. Careful, nuanced discussion of these issues, as you've presented here, is an important step in that direction.