Raiding the inarticulate since 2010


A few thoughts about ChatGPT 5

  • It’s consistently the fastest model I’ve used, which is particularly noticeable on iOS.
  • It’s optimised for practical support rather than extended dialogue. I defaulted to it while travelling for a couple of weeks, a context in which I’d usually have used Claude for occasional mundane practical questions.
  • Its inclination to provide summary tables and aggressive use of bullet points makes it very practical in some contexts but would be annoying for my usual more conversational and iterative use of LLMs. I could imagine using GPT5 to help improve a section of a paper but I couldn’t imagine chatting to it about the ideas which are in the paper.
  • I haven’t tried, but I struggle to imagine how someone could sustain an affective interaction with GPT5 unless they were really desperate to. If this is an accurate judgement then it suggests something interesting about how OpenAI are positioning themselves.
  • It’s eerily prescient in suggesting follow-up questions. Its success rate in some interactions has been near 100% in suggesting ‘maybe you would like to know more about X?’ or ‘maybe you would like me to do X?’ when X did in fact appeal to me.
  • It solves one of the main problems of agentive AI by suggesting possible tasks it can perform. The notifications are extremely useful. I was chatting to it about literature on AI and psychosis on a long, boring train journey and it proactively set up a monthly search with a notification of updates.
  • As an experiment I asked it to design a companion website for my book, supplying no other details beyond “I want a companion website for Generative AI for Academics”. I honestly would not have been embarrassed using what it produced, with a few edits. I was struck, looking through the model’s chain of thought, that it researched academic companion websites to inform the choices it made. The fact that I ‘designed’ this website with one sentence on my iPhone on a tram has been playing on my mind since then.
  • However, in other interactions the chain of thought seems slightly disorientated and out of kilter. Judging by patterns in its chain of thought, my impression is that the model is more prone to going down pointless rabbit holes than Claude Opus 4, though obviously we shouldn’t take these to be a straightforward record of what the model is actually doing.
  • I was surprised by how different this felt at the level of model behaviour, even if I couldn’t discern any great shift at the level of model capacity. This strengthens my belief that the convergence of frontier models, and the topping out of rewards from scaling, mean the next battleground will be post-training, which is where the market will get far more segmented.

I’ve not tried GPT 5 Pro yet, so I withhold judgement on it. My impression is that ChatGPT 5 is more useful than Claude as a practical assistant in some contexts, but that Claude Opus 4.1 still feels fundamentally smarter in some important but nebulous way. However, I think it will be much easier for a user to get started with ChatGPT 5, which may not be a good thing.

I shared this with ChatGPT 5 and it contrasted its own speed with Claude’s “meditative pacing”, which felt like a subtle burn that belies the workmanlike character of the model.
