Three modes of working with LLMs in higher education

I’m enjoying this series by Anthropic, even if it’s largely a new language for things I’ve already argued in Generative AI for Academics. I like their description of three modes of working with LLMs:

  • Automation: outsourcing a task to the LLM
  • Augmentation: working with the LLM as a collaborator and thinking partner
  • Agency: acting quasi-autonomously to pursue a goal

In these terms my stance has been that augmentation offers tremendous intellectual possibilities for academic work, but that the political economy of academic labour pushes people towards automation and (eventually) agency. At best these can be helpful to individuals in the short term, but the proportion of automation and (AI) agency within organisations likely correlates with deprofessionalisation, the dehumanisation of working life and all sorts of incredibly specific pathologies generated as a byproduct of using LLMs.

I thought this was helpful for thinking about the different steps involved in using LLMs:

  • Delegation: identifying what the tasks are and how they should be enacted, whether individually, in collaboration with an LLM, or outsourced to an LLM
  • Description: describing the task to the LLM precisely, in a way conducive to getting it to meet your expectations
  • Discernment: distinguishing what's useful from what's not by reflecting on LLM outputs, drawing on domain knowledge
  • Diligence: cultivating a reflective and ethical approach to the whole workflow of using LLMs

The problem with systems like Copilot is that they are geared towards simplifying and constraining augmentation while pushing people towards automation and agency. They take responsibility for delegation away from the individual and instead scaffold it through the affordances embedded in familiar software. It’s a recipe for outsourcing labour and we shouldn’t be encouraging it.

The political economy of these modes is different: description and discernment, like augmentation more broadly, presuppose domain expertise and existing practical knowledge. Delegation and automation/agency, by contrast, tend to render that domain knowledge redundant, pushing it aside and generally obliterating it as an organisational value.

This is exactly what I’ve meant when I talk about reflexivity in relation to prompting. Perhaps I should drop the (essentially theoretical) language of ‘reflexivity’ and instead talk about ‘problem awareness’ in future training.