This is how Dave Karpf frames the question I’ve been struggling to articulate in my blogging on the digital daemon. There is a narrow, practical and individualised sense in which it would be amazing to have a ubiquitous digital assistant that learns as you learn, acts on your needs and wishes, and provides a sounding board based on a searchable archive of your entire experience. The problem is not the idea of a digital butler; the problem is that the only plausible business model for such a resource-intensive expansion of personal computing is surveillance capitalism:
https://davekarpf.substack.com/p/on-ai-agents-how-are-these-digital
- A lot of companies are trying to build AI agents right now. They are well funded. There is supply.
- The appeal of AI agents, if a smooth and trustworthy product can be brought to market, is undeniable. …Holy hell would it be nice if AI could make the trappings of rich-people-shit available to the rest of us, just this once.
- But we are still living in the free trial period of these technologies. The trajectory of the future bends toward money.
- So, either a market is going to develop for subsidizing these tools (packaging and reselling all of our behavioral and personal data, for instance), or the products will be rendered unaffordable to the mass public.
Furthermore, the nature of the promised functionality lends itself to perpetually expanding recording and data linkage, such that everyone will be dragged into the net even if they refuse to engage with the technology themselves. The utopian promise, which I stress again is an individualised, consumer-centric version of utopia, becomes dystopian with even a modest amount of sociological realism about the political economy. There would be value created in the ‘learning’ undertaken by such systems, which would inevitably be captured in order to fund their costly operations. I struggle to see a potential counterargument to this.
But if you’re a billionaire you’re not subject to that same logic. There are various points in this interview with Sam Altman which make me think he is directly and meaningfully motivated by the desire to create a digital daemon, even if it remains a luxury product restricted to the billionaire class. That’s when it becomes really dark, if fascinating, because we could plausibly argue that such a system, if implemented, would effectively give its operators cognitive superpowers. They would quite literally be capable of cognitive operations which ‘normal people’ are not. What would politics look like in an era when the super-rich have cognitive prostheses which the rest of the population are denied or, perhaps, when the elite systems are parasitic upon the surveillance-infused lesser prostheses normalised throughout the population?