I find it hard to read this excellent piece by Alfie Brown and not speculate about long-term trends… how easy is it to imagine a world in a state of ecological collapse, dominated by a few corporate city states fortified against the wastelands at their walls, as well as the millions of migrants fleeing climate catastrophe? He also makes the important point that coverage of these developments too easily frames them in contrast to the presumed democratic landscape ‘here’, missing the real significance of these possibilities.

Having long claimed to be apolitical, Jack Ma, the billionaire co-founder and executive chairman of the tech giant Alibaba, was recently revealed to be a member of the ruling Chinese Communist party (CCP). It’s another entry in a long list of links between corporate and state apparatus that stretch far beyond the borders of China. Nevertheless, a glimpse into the projects the company is working on in Cloud Town, considered in light of these revelations, should set alarm bells ringing about a dystopian future of state and corporate control.

Technologies in development at Cloud Town range from AI pedestrian crossing lights that use facial recognition to identify the age of a road-crosser and give them a longer green light if they are old or slow enough, to AI drone cars that can respond to passengers’ needs.

The greatest feature of the car, explained the proud representative, is that its media panel, linked to the user’s smartphone, reads patterns of movement, food choices and potentially even photos and comments, and then crosses this with millions of data sets to make predictions about what the user might like to eat and how they might like to travel there or have the food travel to them. In short, the new citizen outsources part of their decision-making processes, and maybe even part of their desire, to Alibaba. Our very impulses are mapped and planned in advance. The triangulation between data, predictive technology and desire could be the single most important relationship taking us into the dystopian smart city future.

In recent months, there has been increasing media coverage of the terrifying network of reeducation camps in which the Chinese government has interned hundreds of thousands of the Uighur people. This is only one part of a broader system of social control in which what Timothy Grose calls a ‘virtual custody’ has been constructed through the proliferation of “convenience police stations” at 200 metre intervals, a digital surveillance apparatus and state sanctioned home invasions in which “big brothers and big sisters” conducted 24m home visits, 33m interviews and 8m “ethnic unity” activities in less than two years. What I hadn’t realised was the role that China’s social credit system plays in this:

Yet the vast majority of detainees have not been convicted of any crime. Instead, the Communist party relies on an arbitrary social taxonomy – referred to officially as a “social credit system” – to identify targets. Metrics such as age, faith, religious practices, foreign contacts and experience abroad sort Muslims into three levels: “safe”, “normal” or “unsafe”. Those labelled “unsafe” face an imminent risk of detention.


My understanding is that the social credit sanctions elsewhere in China have been predominantly targeted at people in their capacity as consumers. This is not to minimise it: being locked out of credit and purchasing after being designated ‘dishonest’ is an enormously significant penalty, liable to impact upon every facet of life.

But are we seeing the next stage of this process in the oppression of the Uighurs? How will this trial of the social credit system be combined with other trials when the system is rolled out in full? Are we seeing a concrete techno-fascism being constructed before our very eyes? Not the diffuse fears and harms surrounding surveillance capitalism, but a totalitarian system of datafication with reeducation camps at its core? While the potential role of private companies in the operation of the social credit system remains uncertain, firms have signed contracts for implementation with local governments. If the system operates effectively in China, how long before these and other firms begin to offer related services to governments around the world?

From The Black Box Society, by Frank Pasquale, pg 52:

An unaccountable surveillance state may pose a greater threat to liberty than any particular terror threat. It is not a spectacular danger, but rather an erosion of a range of freedoms. Most insidiously, the “watchers” have the power to classify those who dare to point this out as “enemies of the state,” themselves in need of scrutiny. That, to me, is the core harm of surveillance: that it freezes into place an inefficient (or worse) politico-economic regime by cowing its critics into silence. Mass surveillance may be doing less to deter destructive acts than it is slowly narrowing the range of tolerable thought and behaviour.

Where might this lead? What I think of as ‘techno-fascism’ is a speculative answer. How bad could this get if left unchecked? What would life within such a social order look and feel like? Could we imagine a frozen social formation, one able to perpetually recreate itself without change or challenge?

I’m really enjoying Humans Need Not Apply by Jerry Kaplan. Much more so than I expected to, in fact. He offers a thoughtful and incisive insider’s critique, in the style of a less verbose Jaron Lanier, concerning the likely trajectory of contemporary digital capitalism. On pg 105 he writes about the “new regime” creeping up on us:

The new regime will creep in silently and unnoticed, as if on cat paws, while you marvel at how the modern world grows ever more convenient, customized to you, and efficient. But behind the scenes, enormous synthetic intellects will be shaving you the thinnest slice of the benefits that you are willing to accept, while reserving the lion’s share for … exactly whom?

The idea of techno-fascism I’ve been playing with all year fits nicely into this account. Techno-fascism is a speculative account of what might result when this nascent digital elite, so thoroughly invested in the ‘new regime’ described by Kaplan, find their power and prestige challenged: specifically, if a significant mass of them use this architecture of modelling and control for explicitly political, as opposed to commercial, purposes. This is a prospect made more feasible by the regulatory vacuum into which this new regime is ‘creeping in silently and unnoticed’, as well as a broader process in which democratic governance has been hollowed out over recent decades.

I love this post by David Banks on Cyborgology:

the future Millennial fascist will need to employ a highly adaptive messaging system enabled by what Zeynep Tufekci has called “computational politics”.

Computational politics allows political leaders to portray themselves very differently depending on whom they are talking to. By using finely-tuned algorithms fed by enormous databases of our past decisions, leaders will find a way to promise exactly what matters to you. Hitler may have been limited to a single message of strength but future fascists will be capable of deploying multiple messages of softer and more comforting propaganda. Instead of a single, one-size-fits-all message of brute strength, cupcake fascism will find what makes you feel comforted.

Cupcake fascism augmented by computational politics is not just different wrapping on the same rhetorical structure. It dispenses with the unitary collective altogether and asks you to embrace a juiced up but well-worn brand of uniquely American individualism. It can offer the palliatives of a Tumblr featuring hot drinks on cold nights in a safe and clean home. It can serve up promises of new applications for masculine discipline, courage, and strength even as war and industry are increasingly automated. It can make up a hundred more emotionally evocative messages that all end in a promise that these promises can be real if this single candidate is elected.


An interesting article on Truthout which has some degree of crossover with the ideas I’m developing at the moment. I agree with quite a lot of this on its own terms, but see it as a tendency, susceptible to being resisted, emerging against a background which makes that resistance decreasingly likely (depoliticisation and the fragility of social movements):

Techno-fascism is characterized by the ways more aspects of daily life are becoming dependent upon digital technologies that lead to many benefits while at the same time reducing the diversity of cultural ways of knowing and by increasingly subordinating human thought and behaviors to the dictates of machines.

Unlike the racist mythologies of German fascism, the mythic dimensions of techno-fascism are rooted in ancient religious narratives about humans naming and taking control of the environment, and in the abstract thinking of philosophers who laid the conceptual and moral foundations for the modern myth of progress, including the idea that human life is mechanistic in nature and is driven by nature’s law governing natural selection. While the moral foundations of techno-fascism align with the values of market capitalism and the progress-oriented ideology of science that easily slips into scientism, its level of efficiency and totalitarian potential can easily lead to repressive systems that will not tolerate dissent, especially on the part of those challenging how the colonizing nature of techno-fascism promotes consumerism that is destroying the environment and alternative cultural lifestyles such as the cultural commons.

The primary characteristic of all fascist modernizing movements is conformity of thinking and behavior, which is directed and controlled by total surveillance systems that track people’s thoughts, behaviors and relationships. The latest in the emerging techno-fascist arsenal of surveillance technologies is the new facial recognition system now being adopted by local police, which will shortly become part of the FBI’s $1 billion Next Generation Identification program. Photos of people not suspected of criminal activities, as well as those who are, will be instantly available to 18,000 local, state, federal and international law enforcement agencies. The facial recognition technology can identify 16,000 distinct features of a person’s face, and compare them at a rate of more than 1 million faces per second, with other photos held by police agencies.

Three of the most important threats to what remains of our civil liberties include how social unrest resulting from extreme environmental changes can easily lead to redefining what constitutes criminal behavior. A second major problem is that the facial recognition software has a 20 percent failure rate. And the third threat is the one now plaguing local police across the United States: namely, how their biases and misinterpretations lead to police actions that result in the death of innocent people.


I also think the potential causation at work here is very complex. I increasingly see this as a sui generis socio-political tendency, originating out of a very specific set of circumstances and unevenly generalised to the population at large through a diverse range of factors, which might in turn be compounded by a number of distinct though potentially mutually reinforcing tendencies. For instance:

  1. One longer term possibility is the increasingly proactive interventions of defensive elites, against a background of rising instability which they experience as leaving their (increasingly likely to be inherited) privilege at risk.
  2. Another is the latent totalitarianism that can be found within more extreme advocates of copy protection: how far into ‘private’ life will the enforcement of intellectual property rights lead the state to intrude? Cory Doctorow has explored this very provocatively across a range of novels, articles and talks.
  3. What’s the end game for the ‘war on extremism’? Given the tendency for wars on abstract nouns to generate more of precisely what they attempt to oppose, should we expect the current military lockdown in Brussels and the effective suspension of democracy in France to become ever more common occurrences?
  4. What role will depoliticisation 2.0, as I’ve become facetiously prone to thinking of things like TTIP and the Troika’s rampage through southern Europe, play in facilitating what might be a genuinely (techno)-fascistic tendency originating sui generis?
  5. If the present ‘migrant crisis’ in Europe is merely the tip of the iceberg, how will the further fortification of what is already fortress Europe compound these other trends? What role will the Other, now here rather than there, play in fomenting (digital) nativism? How will depoliticised governments respond to this tempting electoral inducement?
  6. What about the possibility of actual world war? The geopolitics of the Syrian crisis are so mind-bogglingly complex as to leave systemic risks multiplying with each passing month.

The very depressing prognosis of the article quoted above is that techno-fascism would go unrecognised. I think it overstates the point somewhat, but this is largely what I’ve been trying to get at through my account of distracted people.

Digitally mediated learning, which is heavily dependent upon print- and data-based accounts that encode the taken-for-granted cultural assumption (and ideology) of the people who write the programs, reinforces a mindset that responds to short explanations that do not lead to the experience of boredom associated with long-term memory, narratives and written accounts. The ways in which the social media reinforce the importance of the shifting sense of immediacy and instant responses to the anonymous Others ensure that the emergence of a fascist state will go unrecognized. The systems of local control involving a variety of democratic practices and traditions of ecological wisdom must first be lost to memory. Where in the digitally mediated curriculum will students learn about these traditions, when the ideology underlying the digital revolution represents traditions, including local decision-making, as sources of backwardness and as impediments to students creating their own ideas from the wealth of context free data available on the internet?


In order to understand the traditional defenses against totalitarian regimes now being lost, we need to focus more specifically upon the cultural transformations that occur as students spend more of their day in classrooms where computer-mediated learning increasingly displaces face-to-face interaction with teachers and professors who might spark their curiosity to explore beyond the orthodoxies of the day. The many hours of the day texting friends, playing video games and exploring the seemingly endless boundaries of cyberspace also shorten attention spans in ways that undermine long-term memory. Speed and context-free slogans have now replaced depth of understanding and critical judgment.


There’s a great section in this paper by Frank Pasquale, The Algorithmic Self, which relates to my developing and deliberately provocative account of techno-fascism:

Stray too far from the binary of Democratic and Republican politics, and you risk being put on a watchlist. Protest shopping on Black Friday, and some facial recognition database may forever peg you as a rabble-rouser. Take a different route to work on a given day, and maybe that will flag you—“What is she trying to avoid?” A firm like Recorded Future might be able to instantly detect the deviation. Read the wrong blogs or tweets, and an algorithm like the British intelligence services’ Squeaky Dolphin is probably keeping a record. And really, what good is site-monitoring software in the absence of laws that punish, say, the use of jackhammers at construction sites before daybreak? Will the types of protesters whose activism helped make cities livable be able to continue their work as surveillance spreads? Billing sensor networks as integral to the “smart city” is only reassuring if one assumes that a benign intelligence animates its sensing infrastructures.


My suggestion is that these tendencies, as a slippery slope that is reversible but thus far isn’t being reversed, intersect with existing trends towards authoritarianism and depoliticisation. They increase the likelihood that the very scary things incipient within the ‘war on extremism’ and the ‘post-democratic tendency’ will come to pass by contributing to the fragility of social movements and the distraction of the people who comprise them. They also provide a socio-technical infrastructure through which this dystopic potential might come to be realised by defensive elites, effectively unopposed within the polis, who seek to secure their privilege (more and more of which will be inherited in coming decades) against a backdrop of social upheaval caused by climate change, unprecedented mass migration and structural unemployment driven by automation.

From InfoGlut, by Mark Andrejevic, loc 2580:

As Zizek puts it, the paradox of the decline of symbolic efficiency results in a version of what he calls a resurgent fundamentalism: “what is foreclosed in the symbolic (belief) returns in the Real (of a direct knowledge). A fundamentalist does not believe, he knows directly.” This formulation sums up the attitude of “gut” knowledge: “don’t bother me with your so-called facts, I already understand at a level that they cannot touch.”

Last week Paul Mason posted a provocative Guardian essay suggesting that the end of capitalism has begun. It’s a precursor to his upcoming book PostCapitalism: A Guide To Our Future, which is released in a few days’ time. I’m looking forward to the book, not least of all because it’s an optimistic counterpoint to the gloomy thought experiment I’ve been intermittently working on for months now: what would techno-fascism look like? I finished my first piece of work on this recently, a contribution to the Centre for Social Ontology’s Social Morphogenesis project, making the case that digital capitalism is giving rise to ‘distracted people’ and ‘fragile movements’ while also facilitating surveillance and repression of a degree of efficiency exponentially greater than any security apparatus that has previously existed in human history.

My rather depressing conclusion concerns spiralling obstacles to durable social movements exercising a sustained influence over political and social life, though not necessarily to protest, politicisation or critique. As the project progresses, I want to explore two tendencies towards digitally facilitated suppression: the ‘hard’ strand, the openly authoritarian mechanisms through which digital technology is used repressively and how they might diffuse, as well as the ‘soft’ strand, the increasingly designed informational environment and the cognitive costs involved in escaping it, as well as their implications for collective action.

I situate these in terms of post-democracy and the political economy of the second machine age: crudely, I’m suggesting that the interests of elites in defensive repression, in the face of growing structural underemployment and unemployment driven by automation, create a risk that ‘soft’ repression (already a problem) comes to be conjoined with ‘hard’ repression, with a post-democratic political climate likely to render popular restraints upon this drift ineffective. This is compounded by a political context in which the war on terrorism is giving way to the war on extremism, normalising repressive measures against those whose ‘ideology’ (let alone their actions) puts them outside the political mainstream. Underlying this analysis are some much more specific arguments about ‘distracted people’ and ‘fragile movements’ which I won’t summarise here, as well as an argument I want to develop about where a trend to vertical integration is likely to lead the tech sector and how this might further incline the culture within it towards acquiescing to some rather extreme measures.

It’s a depressing argument. But I’m looking forward to developing it. The project has been on hold since I finished my CSO paper because I need to finish Social Media for Academics. But I’m presenting an initial version of the overall argument at a Futures Workshop in August and then I’ll begin work on a book proposal in September. I’d like to include two chapters of design fiction in the finished book: one envisioning post-capitalism and another envisioning techno-fascism. I don’t believe either outcome is inexorable, but I do find my own arguments worryingly convincing (I’m often very critical of my own work, but I’m really pleased with the CSO chapter; it went through a slightly brutal multistage review process and it really shows), at least in terms of currently inoperative social mechanisms that one could easily envision kicking in under future politico-economic circumstances not much worse than our present ones. But if Mason’s book is as provocative as I suspect it will be, I’d like to use it as an optimistic foil, not least of all to preserve the social optimism which I’m concerned I’m in the process of losing.

This extract from a recent Guardian debate with Mason (HT Phil BC) gives a taste of what the book will be like: https://embed.theguardian.com/embed/video/membership/video/2015/jul/23/paul-mason-is-capitalism-dead-video (unfortunately it won’t embed on wordpress.com)