A key part of using LLMs has been positioning: the role we ask the model to play in our interaction with it. Prompt engineering treated this positioning as an entirely explicit process in which you define the role and its related elements (e.g. style, process, format) in a comprehensive way. As models have become more advanced, this explicit positioning has become decreasingly necessary* because the model can infer your intended positioning from the form and content of what you present. This created a delicate balance in which a little steering was helpful but active positioning didn’t always make a positive contribution to the process.
I’m finding that GPT 5.2 is the first model where any attempt to actively position it makes the model less rather than more useful to me. A caveat is that I’m usually working with large chats, often with supporting documents, so there’s a lot of context. It’s still much less fluent in its attunement than Claude, but it can clearly discern the problem space I’m working in from the provided context. When I ask it to take on a specific role (e.g. “please respond to me in the role of a psychoanalytical theorist who is helping me test my grasp of these ideas”) the responses become more generic. It seems to lose its attunement because the existing context gets subsumed into the generic patterns associated with the role.
Is anyone else having this experience? If it’s widespread, it’s extremely significant, because it suggests we’re reaching the point where actively exercising agency over the model makes it less useful than simply accepting the model’s default behaviour. As a whole GPT 5.2 feels very strange to me and quite unlike the other models I know well. It’s exceptionally fast and powerful, but there are some odd features of user-model interaction which I’ve not experienced before.
*Indeed, I think it was always overstated, but that’s a different blog post.
