Who would hand over their life to an LLM? What about their organisation?

This is spot on from Gary Marcus about the fundamental idiocy of trusting LLMs to act as quasi-autonomous agents, barring some radical shift in the underlying technology:

On the competence side, pretty much everything critical of LLMs that I have written in this Substack over the last couple of years becomes relevant, from the unreliability to the linguistic fails that I sometimes call discomprehensions to the hallucinations (anyone remember my alleged pet chicken Henrietta?), and so forth. Do you really want a system that can’t be trusted to draw a room without elephants to automate your finances? (“Each week, transfer anything over my credit card balance plus $2000 to my savings account, and don’t send any payments to my perpetually late contractor until I give the go-ahead.” “OK, I understand. I have sent your contractor $2000”.)

In a system that can write emails, make appointments, sign checks, etc, the consequences of unreliability, discomprehension, and hallucination all escalate. And, to the extent that agents would act directly, humans (who currently often save LLMs from themselves) get left out of the loop. Giving the keys to the castle to an LLM, at our current level of technological readiness, seems to me to be batshit insane.

Oh, and did I mention that the kind of automatic code that agents would presumably write may be buggy and unreliable and perhaps challenging to debug? Or that a recent study argued that the quality of code has gone down in the LLM era?

https://garymarcus.substack.com/p/what-could-possibly-go-wrong-with

Surely the same point applies to automation more broadly? If you're trusting a system to act in lieu of a human being, then the ontological unreliability of LLMs becomes potentially catastrophic.