Raiding the inarticulate since 2010


social engineers have never had so many options at their disposal

From To Save Everything, Click Here by Evgeny Morozov. For a talk about dystopias I'm giving next month, I'm trying to think through the implications of technologies like the one described below at the level of social ontology. What does it mean to see sinister possibilities inherent in 'innovations' like this? Is there anything we can say in the abstract about how likely these possibilities are to be realised? It strikes me that this is necessary if we are to avoid either an empiricist attitude of 'wait and see' or the systematic suppression of technological change.

Or consider a prototype teapot built by British designer-cum-activist Chris Adams. The teapot comes with a small orb that can either glow green (making tea is okay) or red (perhaps you should wait). What determines the coloring? Well, the orb, with the help of some easily available open-source hardware and software, is connected to a site called Can I Turn It On? (http://www.caniturniton.com), which, every minute or so, queries Britain's national grid for aggregate power-usage statistics. If the frequency figure returned by the site is higher than the baseline of 50 hertz, the orb glows green; if lower, red. The goal here is to provide additional information for responsible teapot use. But it's easy to imagine how such logic can be extended much, much further, BinCam style. Why, for example, not reward people with virtual, Facebook-compatible points for not using the teapot in the times of high electricity usage? Or why not punish those who disregard the teapot's warnings about high usage by publicizing their irresponsibility among their Facebook friends? Social engineers have never had so many options at their disposal.
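
The orb's decision rule is, at bottom, a trivial loop: poll a frequency figure, compare it against the 50 Hz baseline, pick a colour. A minimal Python sketch of that logic might look like the following; the endpoint URL and plain-text response format are assumptions for illustration, since the passage doesn't document how caniturniton.com actually exposes the figure:

```python
import time

import requests  # third-party HTTP library

# Hypothetical endpoint and format: the real API of caniturniton.com
# isn't described in the passage, so this URL is an assumption.
FREQUENCY_URL = "http://www.caniturniton.com/frequency"
BASELINE_HZ = 50.0   # nominal frequency of Britain's national grid
POLL_SECONDS = 60    # the site queries the grid "every minute or so"

def grid_frequency() -> float:
    """Fetch the current grid frequency, assuming a plain-text number."""
    response = requests.get(FREQUENCY_URL, timeout=10)
    response.raise_for_status()
    return float(response.text.strip())

def orb_colour(frequency_hz: float) -> str:
    """Above baseline: spare capacity, glow green; below: glow red."""
    return "green" if frequency_hz > BASELINE_HZ else "red"

if __name__ == "__main__":
    while True:
        try:
            print(orb_colour(grid_frequency()))
        except (requests.RequestException, ValueError):
            print("red")  # fail safe: discourage use when the grid state is unknown
        time.sleep(POLL_SECONDS)
```

The sociologically interesting part is everything Morozov imagines bolting onto this comparison: the same threshold that lights an orb could just as easily award points or broadcast a verdict to one's Facebook friends, and it is precisely that extensibility which multiplies the social engineer's options.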