

Why were deep fakes so much less influential in this year’s elections than I expected?

Earlier this year I was teaching a course on digital media and information literacy, soon after completing Generative AI for Academics. It was great fun because I was often presenting the students with material I’d only encountered that week, as I explored the implications of GenAI for politics and propaganda. As the course went on, I became increasingly convinced that this technology was diffusing at the most dangerous moment imaginable, given the number of elections taking place in 2024.

While there were plenty of examples of deep fakes in these elections, I can’t think of a single instance in which one seems to have had a major impact. Perhaps it was always going to be this way, with the effects diffused through a much broader set of tactics rather than concentrated in a single action imbued with strategic significance. But I was convinced earlier this year that there would be attempts at such an action, perhaps as a nuclear option (given the risk of mutually assured destruction) in the event of impending defeat, whether by a party or an external actor.

The example I gave in the lecture was an imagined video of Biden kicking a kitten. We’d know it was fake, and most people who saw it would know it was fake, yet it would undoubtedly spread rapidly through social media as an object of fleeting fascination. There were darker scenarios on my mind as well, which I didn’t feel comfortable sharing with the class, but which came to mind immediately in an American context where ‘groomer’ had become a popular insult. I wondered whether we might see political violence provoked by generative propaganda, particularly given the far-right movements contesting many of these elections.

Yet nothing. It’s quite reassuring really. But I’d like a convincing answer as to why my expectations weren’t realised. Plus I’ve got a great lecture on this which I really enjoy giving, and I’m going to have to rewrite it now 🤔

(I backed away from working on this after a few colleagues ever so gently suggested that my deep dive into the topic was making me slightly paranoid in a number of identifiable ways.)