Mark Carrigan

Raiding the inarticulate since 2010


Who will pay for your digital butler? Why the utopia of the digital daemon would inevitably become a dystopia

This is how Dave Karpf frames the question I’ve been struggling to articulate in my blogging on the digital daemon. There is a narrow, practical and individualised sense in which it would be amazing to have a ubiquitous digital assistant that learns as you learn, acts on your needs and wishes, and provides a sounding board based on a searchable archive of your entire experience. The problem is not the idea of a digital butler; the problem is that the only plausible business model for such a resource-intensive expansion of personal computing is surveillance capitalism:

  • A lot of companies are trying to build AI agents right now. They are well funded. There is supply.
  • The appeal of AI agents, if a smooth and trustworthy product can be brought to market, is undeniable. …Holy hell would it be nice if AI could make the trappings of rich-people-shit available to the rest of us, just this once.
  • But we are still living in the free trial period of these technologies. The trajectory of the future bends toward money.
  • So, either a market is going to develop for subsidizing these tools (packaging and reselling all of our behavioral and personal data, for instance), or the products will be rendered unaffordable to the mass public.
https://davekarpf.substack.com/p/on-ai-agents-how-are-these-digital

Furthermore, the nature of the promised functionality lends itself to perpetually expanding recording and data linkage, such that everyone will be dragged into the net even if they refuse to engage with the technology themselves. The utopian promise, which I stress again represents an individualised, consumer-centric version of utopia, becomes dystopian with even a modest amount of sociological realism about the political economy. There would be value created in the ‘learning’ undertaken by such systems, which would inevitably be captured in order to fund their costly operations. I struggle to see a potential counterargument to this.

But if you’re a billionaire you’re not subject to that same logic. There are various points in this interview with Sam Altman which make me think he is directly and meaningfully motivated by the desire to create a digital daemon, even if it remains a luxury product restricted to the billionaire class. That’s when it becomes really dark, if fascinating, because we could plausibly argue such a system, if implemented, would effectively give its operators cognitive superpowers. They would quite literally be capable of cognitive operations which ‘normal people’ are not. What would politics look like in an era when the super-rich have cognitive prostheses which the rest of the population are denied or, perhaps, in which the elite systems are parasitic upon the surveillance-infused lesser prostheses normalised throughout the population?