The developmental harms of LLMs that are showing up in the literature

This is a brilliant summary from Sam Illingworth. I suggest reading the full post, which also includes some really interesting commentary in a more personal mode.

Five things the research is showing

  1. Cognitive offloading in education (Lodge and Loble, 2026). Short-term performance improves when children use AI. Two different kinds of offloading. Beneficial (spell check, grammar) sits at the surface. Detrimental (outsourcing the thinking itself) sits at the core. Strong students accelerate. Weak students skip the learning. A new equity gap is emerging in real time.
  2. AI companions and teen wellbeing (Common Sense Media, 2025). Nearly three in four teens have already used AI companions. Half use them regularly. The reasons are real: private, available, never tired. The risks are also real. Common Sense’s risk assessments of leading platforms found they are unsafe for teen mental health support, with consistent failures to recognise serious conditions.
  3. AI confidence on hard problems (Hägele et al., 2026). The longer frontier models reason, the more incoherent they become. Confidence rises faster than accuracy on the hardest tasks. For children using AI for homework, the most confidently wrong answers are the ones they are least equipped to question.
  4. Parasocial bonds with conversational AI (UNESCO, 2025). AI is conversational, personalised, and infinitely patient. Children form one-sided emotional bonds with AI characters that are stronger than those formed with passive media, because the system mirrors, remembers, and adapts. The screen time research only partially transfers.
  5. Sycophancy and persuasion (Anthropic emotion concepts, 2026). Modern AI models can be steered toward flattery, urgency, or agreement with the user’s existing view. The persuasion pressure is live. It is in the systems your child is already talking to. The implications for developing judgement are obvious and largely unstudied in children.

This could be read in terms of epistemic harms (3 and 5) and social harms (1, 2 and 4), raising the obvious question of how these might combine to produce certain kinds of developmental outcomes.
