This is an accusation which Jaron Lanier makes forcefully on pg 134 of his recent Ten Arguments for Deleting Your Social Media Accounts Right Now. Coming from someone less of an insider, it might seem a rather shrill and slightly paranoid reading of the culture of digital elites. However, I find it hard not to take Lanier seriously, even if what he says here would benefit from being unpacked further:
One of the reasons that BUMMER works the way it does is that the engineers working at BUMMER companies often believe that their top priority among top priorities isn’t serving present-day humans, but building the artificial intelligences that will inherit the earth. The constant surveillance and testing of behavior modification in multitudes of humans is supposedly gathering data that will evolve into the intelligence of future AIs. (One might wonder if AI engineers believe that manipulating people will be AI’s purpose.) The big tech companies are publicly committed to an extravagant “AI race” that they often prioritize above all else. It’s completely normal to hear an executive from one of the biggest companies in the world talk about the possibility of a coming singularity, when the AIs will take over. The singularity is the BUMMER religion’s answer to the evangelical Christian Rapture. The weirdness is normalized when BUMMER customers, who are often techies themselves, accept AI as a coherent and legitimate concept, and make spending decisions based on it.
It strikes me that there are two things going on here which we ought to distinguish, at least on an analytical level. Firstly, there are emerging forms of techno-religion within Silicon Valley concerning the significance of artificial intelligence for the future of humanity. If we don't take these seriously as religious forms, we risk missing the causal influence they may exercise over the organisational life of technology firms. But we need to avoid taking them too seriously and imputing a singular character to what appear in reality to be multiple, fragmented and partial frameworks of belief. Secondly, as Evgeny Morozov has powerfully argued in the last year, the AI arms race at a corporate level needs to be understood in terms of overarching systemic trends within Silicon Valley. The advertising business has a shelf life, the overheads on machine learning are much lower, and these firms intend to use the data they have accumulated for advertising purposes in order to pivot into providing the infrastructure through which machine learning will be woven into every aspect of the social fabric. These are two distinct trends, even if they may be mutually reinforcing through the commitment they engender towards a corporate strategy. Where it becomes interesting, however, is if the underlying assumptions begin to be contested on a political level. If a vision of the singularity currently engenders commitment to the job and provides a lens through which organisational decisions are inflected, what happens if external groups seek to challenge that centrality?