LLMs and cognitive lock-in

I found this a really thought-provoking argument from Morten Hansen about commercialisation strategies for LLMs, which move on from the familiar focus on monetising attention (surveillance capitalism, etc.) to monetising cognition:

I propose that cognitive lock-ins can be defined as arrangements reconfiguring cognition across users and technology in ways that makes replication contingent on that specific technology. It is achieved through three interrelated practices: black-boxing, distanced-probabilistic computation, and access-based consumption.

Even if I’m using a slightly different vocabulary to summarise his points, I understand him to be arguing that LLMs will tend to:

  • Deprive users of the learning opportunities they would otherwise be forced to undergo if they couldn’t immediately produce an output
  • Distance users from the practical engagement with artefacts through which they create meaning and derive value from their activity
  • Force users to rely on subscription-based access to capacities which are now integral to their practice

My response would be that (1) and (2) are empirical questions. I think ‘cognitive lock-in’ could be seen as one mode of reflexive engagement with LLMs, alongside other modes. What I describe in Generative AI for Academics as thinking with LLMs, rather than using them as a substitute for thought, is an attempt to make this distinction at the level of practice. It’s the difference between using them as part of the process versus as a means to quickly secure an output, subordinating the logic of practice to the logic of the machine. I think (3) is necessarily true as a feature of these systems being operated by commercial entities, though I suspect we will see attention-based models emerge over the coming years.

There’s nothing about LLMs which necessarily leads them to generate cognitive lock-in. But I worry that this is a contingently likely outcome of the incentive systems which lead professionals in particular to relate to LLMs in a certain way. My whole approach to this, until I started trying to seriously theorise it in recent months, has been about mapping modes of engagement which avoid lock-in, even if I’m increasingly persuaded by Helen Beetham that what I’ve developed just isn’t going to scale, at least within higher education.