Raiding the inarticulate since 2010


A philosophical sketch of two models of generative AI: conversational agents and copilots

In Generative AI for Academics I argue there’s an important distinction between conversational agents (which, when used properly, require thought and reflection) and templated systems (which by design are intended to avoid thought). Reading Ethan Mollick earlier, I realised this could be more helpfully framed as a distinction between conversational agents and copilots:

You see these sorts of Copilots appearing in many products – very narrow AI systems designed to help with specific tasks. In doing so, they hide the weirder, riskier, and more powerful aspects of Large Language Models. Copilots can be helpful, but they are unlikely to lead to leaps in productivity, or change the way we work, because they are constrained. Power trades off with security.

I would go further and say that copilots are intrinsically vectors of what Bernard Stiegler calls (technological) proletarianization, in which skills are diffused into sociotechnical systems. They might speed up a process, but they do this at the cost of deskilling, whether that’s an immediate or a gradual process. If a skill isn’t organisationally recognised and socially reproduced, it will evaporate over time, even if this doesn’t happen immediately.

In contrast, conversational agents can be mechanisms of deproletarianization, in the sense that their effective use foregrounds individual skill and expertise. Prompting effectively ultimately involves making tacit knowledge explicit: explaining how a process works and then iteratively reflecting on the outcomes of that process.

The problem is not conversational agents themselves but rather two contexts: the organisational contexts in which they are taken up (prior waves of proletarianization destroy the conditions for technologically driven deproletarianization) and the economic context in which firms are seeking to commercialise the models, which leads to these capacities being engineered out of existence because they’re not a viable product in the contexts we’re working within.