

The cognitive load of conversational agents is a feature not a bug

Interesting quote from the US DoD’s chief AI officer in this FT piece:

For some users, this inbuilt unreliability is a deal-breaker. Craig Martell, the US Department of Defense’s chief AI officer, said last week he would demand a “five 9s” [99.999 per cent] level of accuracy before deploying an AI system. “I cannot have a hallucination that says ‘Oh yeah, put widget A connected to widget B’ — and it blows up,” he said. Many generative AI systems placed too high a “cognitive load” on the user to determine what was right or wrong, he added.
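To make the “five 9s” threshold concrete, here is a minimal sketch of how per-response accuracy compounds across many queries. This is my own illustration, not anything from the FT piece: the 95 per cent comparison rate and the 10,000-response volume are assumed figures chosen purely to show the arithmetic.

def expected_failures(per_response_accuracy: float, n_responses: int) -> float:
    """Expected number of erroneous responses over n_responses."""
    return (1.0 - per_response_accuracy) * n_responses

def p_at_least_one_failure(per_response_accuracy: float, n_responses: int) -> float:
    """Probability of at least one error, assuming independent responses."""
    return 1.0 - per_response_accuracy ** n_responses

n = 10_000  # illustrative volume of responses across a deployment
for accuracy in (0.99999, 0.95):  # "five 9s" vs. an illustrative 95%
    print(
        f"accuracy={accuracy:.5f}: "
        f"~{expected_failures(accuracy, n):.1f} expected errors in {n:,} responses, "
        f"P(at least one error) = {p_at_least_one_failure(accuracy, n):.4f}"
    )

At five 9s you would expect roughly one error in every ten thousand responses; at the assumed 95 per cent, you would expect hundreds. The gap is what makes the question of who carries the burden of checking so consequential.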

While I understand he’s talking narrowly about hallucination here, I’d suggest this ‘cognitive load’ should be seen as a feature rather than a bug. Conversational agents reward complexity, and that complexity demands a reflective engagement with them; the fact that things can go wrong without that engagement isn’t a problem with the agent, it’s a problem with the user.