Raiding the inarticulate since 2010


Why didn’t Will MacAskill predict Sam Bankman-Fried’s malfeasance?

I thought this was a great, as well as hilarious, critique from Adam Becker’s More Everything Forever, loc 398:

And MacAskill’s ability to forecast the future—even in the short term—is seriously questionable. Given far more information than most, he still didn’t accurately predict what would happen with Sam, just a few months after What We Owe the Future was published.

Effective Altruism founder Will MacAskill has built a hugely influential philosophical movement on the moral application of our knowledge about the world. In its more recent Longtermist variant, this rests on weighing knowledge about the long-term trajectory of human civilisation, however fuzzy, against present concerns.

Yet Will MacAskill was an old friend of billionaire crypto-fraudster Sam Bankman-Fried (SBF), about whom he was repeatedly warned and who supported his organisations with tens of millions of dollars. If he couldn’t act morally on the substantial weight of evidence about SBF’s malfeasance, why should anyone have confidence in his capacity to act morally on the deeply speculative knowledge of the future to which he assigns such great importance?
