GPT 5.2 is the first model where active positioning is counter-productive

A key part of using LLMs has been positioning, in the sense of the role we ask the model to play in our interaction with it. Prompt engineering treated this positioning as an entirely explicit process in which you had to define the role and its related elements (e.g. style, process, format) in a comprehensive way. As models have become more advanced, this explicit positioning has become less and less necessary* because the model can infer your intended positioning from the form and content of what you present to it. This created a delicate balance in which a little steering was helpful but active positioning didn’t always make a positive contribution to the process.
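
To make that concrete, here is a minimal sketch of what explicit positioning looked like in the prompt engineering mould, assuming you were working through the OpenAI Python client rather than a chat interface. The model name and the particular role, style, process and format instructions are illustrative placeholders of my own rather than anything specific to GPT 5.2:

```python
from openai import OpenAI

client = OpenAI()

# Classic prompt-engineering positioning: role, style, process and format are
# all spelled out in advance rather than left for the model to infer.
system_prompt = (
    "You are a psychoanalytical theorist helping a sociologist test their grasp of these ideas. "  # role
    "Respond in a conversational but precise register. "                                            # style
    "Work through the user's claim step by step before assessing it. "                              # process
    "Answer in short paragraphs rather than bullet points."                                         # format
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name, not a claim about GPT 5.2
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Have I understood the mirror stage correctly in my notes?"},
    ],
)

print(response.choices[0].message.content)
```

The point is simply that every element of the positioning is defined up front, which is exactly the work that more capable models increasingly do for themselves from the surrounding context.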

I’m finding that GPT 5.2 is the first model where any attempt to actively position it makes the model less rather than more useful to me. A caveat is that I’m usually working with large chats, often with supporting documents, so there’s a lot of context. It’s still much less fluent in its attunement than Claude, but it can clearly discern the problem space I’m working in from the provided context. When I ask it to take on a specific role (e.g. “please respond to me in the role of a psychoanalytical theorist who is helping me test my grasp of these ideas”) the responses become more generic. It seems to lose its attunement because the existing context gets subsumed into the generic patterns associated with the role.
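
For anyone who wants to see what I mean by active versus passive positioning in practice, the contrast looks roughly like this, again assuming API access rather than the ChatGPT interface, and again with placeholder model names and prompt text I’ve invented for illustration:

```python
from openai import OpenAI

client = OpenAI()

# A context-rich working session: pasted-in documents, accumulated exchanges, etc.
history = [
    {"role": "user", "content": "Here are my notes on Lacan's account of desire: ..."},
    {"role": "assistant", "content": "..."},
    # ... many more turns ...
]
question = {"role": "user", "content": "Have I understood the mirror stage correctly here?"}

# Passive: accept the model's behaviour and let the context do the positioning.
plain = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=history + [question],
)

# Active: add an explicit role instruction on top of exactly the same context.
role_instruction = {
    "role": "system",
    "content": "Respond as a psychoanalytical theorist helping me test my grasp of these ideas.",
}
positioned = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[role_instruction] + history + [question],
)
```

My experience with GPT 5.2 is that the second variant produces the more generic response, despite having all the same context available to it.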

Is anyone else having this experience? If it is widespread, it’s extremely significant, because it suggests we’re reaching the point where actively exercising agency over the model begins to make it less useful than simply accepting the model’s default behaviour. As a whole GPT 5.2 feels very strange to me and quite unlike the other models I know well. It’s exceptionally fast and powerful, but there are some odd features of user-model interaction which I’ve not experienced before.

*Indeed, I think it was always overstated, but that’s a different blog post.
