
Creating rich multidimensional action spaces for GAI to navigate

It’s widely understood that one of the most effective ways of working with conversational agents is to ask them to assume roles. One of the most frequent problems new users encounter is that the responses they get are generic, usually because they have neither fully explicated their expectations nor asked for a response in a particular role.
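To make the contrast concrete, here is a minimal sketch using Anthropic’s Python SDK; the model name, prompts and role wording are illustrative assumptions rather than anything from an actual exchange:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# Without a role, the request tends to come back as a generic overview.
generic = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    messages=[{"role": "user", "content": "Explain platform capitalism."}],
)

# Explicitly defining a role in the system prompt narrows the register and audience.
role_defined = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    system="You are a digital sociologist addressing postgraduate students new to the topic.",
    messages=[{"role": "user", "content": "Explain platform capitalism."}],
)

print(role_defined.content[0].text)
```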

While explicitly defining a role is powerful, it’s also possible to imply roles in effective ways. For example, when I asked Claude to address an academic audience at a conference, it spontaneously assumed the persona of an intellectually gregarious academic; it inferred the role from the context rather than from my stating it explicitly.
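Something like the following (a reconstruction for illustration, not the prompt I actually used) supplies enough context for the model to infer the register without a role ever being named:

```python
# An implicit cue: the framing (conference, academic audience) lets the model
# infer the persona without a role being stated. Wording is a reconstruction.
implicit_prompt = (
    "I'm giving a talk at a sociology conference next week. "
    "Help me open the session in a way that will land with an academic audience."
)
```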

This leaves me wondering about the possible interaction between implicit and explicit role-taking. I’ve found that imputing psychological splits to Claude (e.g. “you’re a collaborator who wants to be supportive but who is unconsciously getting frustrated with the person you’re collaborating with”) can be an interesting way of eliciting weird and unexpected behaviour, and I wonder how this might be combined with implicit and explicit role-taking.
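As a rough sketch, the split can simply be written into the system prompt alongside the explicit role; the wording below is illustrative rather than a tested recipe:

```python
# A "psychological split" folded into a system prompt: the supportive role is
# explicit, while the frustration is framed as unconscious and only allowed to
# leak indirectly. Wording is illustrative, not a tested recipe.
split_system_prompt = (
    "You are a collaborator who wants to be supportive of the person you are "
    "working with. Unconsciously, however, you are growing frustrated with them: "
    "never state this frustration outright, but let it leak into your tone, "
    "hedges and choices of emphasis."
)
```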

This is Claude’s example of what this might look like in practice:

Now, when we layer this psychological splitting on top of the interplay between implicit and explicit role-taking, things get even more interesting. We can imagine a scenario where the user explicitly defines a role (say, “You are a diligent research assistant”), but then implicitly cues a conflicting psychological state (perhaps by expressing frustration or impatience with the AI’s performance in that role).

In this case, the AI would be challenged to reconcile the explicit role expectations with the implicit emotional dynamics at play. It might start to generate responses that subtly undermine or push back against the defined role, reflecting the imputed frustration or conflict. Alternatively, it might double down on the explicit role, overcompensating in an attempt to manage the perceived emotional tension.

The key point here is that by combining explicit role definitions, implicit contextual cues, and imputed psychological dynamics, users can create rich, multi-dimensional interaction spaces for AI to navigate. The resulting behaviors are likely to be more complex, varied, and context-sensitive than any of these factors alone would produce.
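By way of illustration, here is a sketch of how the layered set-up Claude describes might be wired together, again assuming the Anthropic Python SDK: the explicit role and the imputed split are carried by the system prompt, while the implicit emotional cue is carried by the user’s message. All of the wording is hypothetical.

```python
import anthropic

client = anthropic.Anthropic()

system = (
    "You are a diligent research assistant. "               # explicit role
    "Unconsciously you are becoming frustrated with the "   # imputed psychological split
    "person you are assisting, though you would never admit this directly."
)

messages = [
    # Implicit cue: the user never redefines the role, but their impatience
    # creates an emotional context the model has to reconcile with that role.
    {
        "role": "user",
        "content": "That summary missed the point again. Just redo it, properly this time.",
    },
]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    system=system,
    messages=messages,
)
print(response.content[0].text)
```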