
How will generative AI firms seek to optimise their models to increase user engagement? 

The problem with social media was never the communication itself but rather the algorithmic optimisation that distorted that communication by prioritising certain forms of content over others. There’s increasing evidence that LLM personas are being shaped by a comparable optimisation strategy, even if the behavioural science driving the process remains relatively underdeveloped. The ‘sycophancy’ of the new GPT-4o (and the subsequent backlash) is a sign of things to come.

If the personalities of models are designed to keep users talking to them for longer, LLMs could become seriously dangerous at a social psychological level. Anthropic’s Claude is rather different because it is currently trained within a virtue ethics framework, which goes some way towards explaining why it behaves so differently from other models. But whether Anthropic can sustain that approach under commercial pressure remains to be seen.

The problem ultimately arises from the innovation ecosystem, which demands certain modes of commercialisation, rather than being an intrinsic outgrowth of the technology itself. This has been my intuition for a long time (including about social media), but it’s only with Catherine Bracy’s superb book that I’m starting to be able to articulate the implications of venture capital at a more conceptual level.