Raiding the inarticulate since 2010
Chomsky on Generative AI: “it’s basically just a way of wasting a lot of the energy in California”

This is an extremely interesting conversation with Noam Chomsky about the limitations of large language models: "if the system doesn't distinguish the actual world from the non-actual world then it's not telling us anything". He distinguishes sharply between a scientific contribution and an engineering contribution, suggesting that generative AI fails to make either. Gary Marcus builds on this, discussing how these systems "lie indiscriminately" because they lack models of the world, creating the risk that we will soon be inundated by the industrial-scale production of misinformation. He says "it's autocomplete on steroids", but it gives the illusion of being more than this, which encourages people to take advice from these systems.

HT Julian Williams for sending this video.


This is an interesting piece on the water footprint of AI systems: https://themarkup.org/hello-world/2023/04/15/the-secret-water-footprint-of-ai-technology