How would workplace learning work if there’s an LLM-driven removal of entry-level jobs?

Obviously, predictions of the rise of the robots by tech elites should be treated with extreme caution. There’s a cyclical rhythm to these predictions which is always ignored when a fresh round of them arrives. But LLMs have emerged at a point when inflation has increased operating costs for firms around the world, climate change means supply shocks will grow in frequency and intensity, higher interest rates have significantly increased the cost of raising capital, and equity investors are far more demanding of operational models than they once were.

Under these circumstances the claimed potential of LLMs for automation will inevitably be taken seriously by organisations trying to reduce costs. To be clear, I think AI agents will be a disaster even on their own terms: the task failure rate is enormous, and it will be compounded by stripping away the human expertise needed to establish the parameters of their operation. But I don’t think it’s implausible that:

(a) the failure rate will decline and protections against looping hallucinations will be established (a sketch of what such a protection might look like follows this list)
(b) there are certain activities where agents would be more reliable than the humans currently performing them, as would individuals operating LLMs in an oversight capacity
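
To make (a) concrete: here is a minimal, purely illustrative sketch of one such protection, a wrapper that halts an agent once it starts repeating itself or exhausts a step budget. The `agent_step` function and every parameter here are hypothetical stand-ins, not any real framework’s API.

```python
# A minimal sketch of one "protection against looping hallucinations":
# halting an agent that keeps repeating itself. `agent_step` is a
# hypothetical callable standing in for whatever drives the agent.

from collections import deque

def run_with_loop_guard(agent_step, task, max_steps=20, window=5):
    """Run an agent step by step, aborting on repetition or budget exhaustion.

    agent_step: callable taking (task, history) and returning a string action.
    max_steps: hard budget so a confused agent cannot run indefinitely.
    window: how many recent actions to compare when detecting a loop.
    """
    history = []
    recent = deque(maxlen=window)  # only the last few actions matter
    for _ in range(max_steps):
        action = agent_step(task, history)
        if action == "DONE":
            return history
        if action in recent:  # the agent is going in circles
            raise RuntimeError(f"Loop detected: repeated action {action!r}")
        recent.append(action)
        history.append(action)
    raise RuntimeError("Step budget exhausted without completion")
```

Real guardrails would be more sophisticated than exact-match repetition detection, but the design choice is the same: assume the agent can fail in loops and bound its autonomy accordingly.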

For this reason I think the Anthropic CEO’s claim that half of entry-level jobs could disappear is not inherently implausible, even assuming only the diffusion of current frontier models with no development beyond their present capabilities. I suspect the figure is probably too high, but a decline in entry-level jobs constitutes a plausible working hypothesis, even if we resist quantifying it.

There are lots of questions posed by this, but one occurred to me yesterday: how would workplace learning work if there’s an LLM-driven removal of entry-level jobs? These jobs provide a trajectory through sectors in which individuals accumulate skills, experience and expertise through performing roles within organisations. This includes the organisational know-how and know-where that we tend to assume develops through learning-by-doing, creating a bedrock of organisational culture upon which attempted automation would implicitly depend. There are lots of strange results which could flow from this, such as a post-LLM workforce defined by an ageing expert class with no capacity to reproduce itself.

How would the notion of a ‘career’, already a classed and retreating experience, survive under these conditions? What would it mean for how people imagine their future? What would it mean for the objective possibilities undergirding those imaginings?

(The Anthropic CEO’s final message in this video: learn to use my product or you’re screwed. However, the fact that there’s self-interest underpinning this analysis doesn’t mean it’s completely wrong.)