
The chaotic reality of contemporary AI labs

This was interesting from DeepMind’s Sholto Douglas about the reality of working in AI labs. They have billions of dollars flooding into them, but they are also scaling rapidly and somewhat chaotically, constantly generating more things to explore than their existing capacity allows:

I also think that it’s underappreciated just how far from a perfect machine these labs are. It’s not like you have a thousand people optimizing the hell out of computer use who have been trying as hard as they possibly can.

Everything at these labs, every single part of the model generation pipeline, is the best effort pulled together under incredible time pressure and incredible constraints, as these companies rapidly grow and try desperately to pull in and upskill enough people to do the things that they need to do. I think it is best understood as a set of incredibly difficult prioritization problems.

https://www.dwarkesh.com/p/sholto-trenton-2

It connects to something Mark Zuckerberg observed here:

What we basically found was that we were bottlenecked on compute to run tests, based on the number of hypotheses. It turns out, even with just the humans we have right now on the ads team, we already have more good ideas to test than we actually have either compute or, really, cohorts of people to test them with.

Even if you have three and a half billion people using your products, you still want each test to be statistically significant. It needs to have hundreds of thousands or millions of people. There’s only so much throughput you can get on testing through that. So we’re already at the point, even with just the people we have, that we can’t really test everything that we want.
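To get a sense of the scale Zuckerberg is gesturing at, here is a rough back-of-the-envelope sketch (my own illustration, not anything from the interview): a standard two-proportion power calculation suggests that detecting even a 5% relative lift on a 1% click-through rate needs on the order of 600,000 users in each arm of a test, before you account for multiple variants or audience segmentation. The baseline rate, lift, significance level and power below are made-up example values.

```python
# Rough illustration (not Meta's actual methodology): how many users a single
# A/B test needs before a small lift becomes statistically detectable.
# Uses the textbook sample-size formula for a two-proportion z-test.
from statistics import NormalDist

def users_per_arm(p_base: float, p_variant: float,
                  alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users per arm to detect a shift from p_base to p_variant."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # desired statistical power
    p_bar = (p_base + p_variant) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_base * (1 - p_base)
                             + p_variant * (1 - p_variant)) ** 0.5) ** 2
    return int(numerator / (p_variant - p_base) ** 2) + 1

# e.g. detecting a 5% relative lift on a 1% click-through rate:
print(users_per_arm(0.010, 0.0105))  # roughly 640,000 users per arm
```

With numbers like these, even a user base of billions only supports so many simultaneous, non-overlapping experiments, which is the throughput constraint Zuckerberg is describing.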