

Why don’t people find the environmental impact of TikTok as grotesque as that of ChatGPT?

There’s a justified horror many people feel about the environmental impact entailed every time someone shares a prompt with a language model. While I’d be lying if I said I fully share that feeling, I experience something similar when I see pointless uses of image and video models. I’ve largely stopped making AI-generated images for this reason. I almost never have a real need to do it, unlike the other ways I use LLMs. I can understand that what I feel about these uses mirrors what others feel about all, or nearly all, uses of LLMs.

Why, then, do we so rarely see a platform like TikTok discussed in these terms? As a commuter whose brain is wired such that the cacophony of public transport can be mildly agonising, I’ve spent a lot of time thinking about this. The soundscape I find most difficult is when multiple people are absently flicking through short-form videos without ever watching one in full: each playing a couple of seconds from a whole sequence of clips, creating a disordered, overlapping noise on all sides. At that point, I find it almost impossible to tune out.

One estimate suggests that TikTok’s annual carbon footprint may already exceed that of Greece. Mobile video is intrinsically energy-intensive, and the platform’s design encourages distracted browsing. For most users, it’s also a profoundly passive space: they consume rather than create, often in ways that feel aimless and subjectively unsatisfying. There’s something increasingly grotesque about this: a sense of people hypnotised by an object that simultaneously erodes their capacity to focus. It feels almost like a cruel joke: a self-fragmenting obsession.

What strikes me as odd is that we feel so little moral unease about TikTok on these grounds, while generative AI attracts so much, and for the most part rightly so. For the avoidance of doubt, I’m not suggesting either/or, but both/and. Our analysis of the environmental impact of digital activity needs to encompass the whole platform landscape, and it needs to recognise the link between distracted engagement and (environmentally) wasteful use.
