My notes on Mantello, P. (2016). The machine that ate bad people: The ontopolitics of the precrime assemblage. Big Data & Society. https://doi.org/10.1177/2053951716682538

Since 9/11, the politics of prediction and risk have created an alliance between security agencies, technology firms and other commercial actors which seeks to create a precrime assemblage. The first generation sought to identify threats through data mining (“search habits, financial transactions, credit card purchases, travel history, and email communications”) but the next generation is “becoming intelligent assemblages capable of integrating data from a multitude of nodes in order to foresee and preempt harmful futures” (pg 2). These advances are being facilitated by cloud computing, machine learning and limitless storage.

The beta versions of these assemblages are being tested in real-world situations, making it urgent that we understand their implications. The first is what it means for criminal justice as a whole when the focus is on the anticipation of crime rather than dealing with its occurrence after the fact. The second is the expansion of surveillance into everyday life, driven by the public-private alliances setting the agenda. The scope of surveillance is increasing, but so too is civic participation in it, driven by gamified mechanisms which “encourages citizens to do the securitization footwork of the state by offering them the opportunity to participate in do-it-yourself, reward-centered, pro-active, networked and, at times, gamified versions of automated governance” (pg 2).

Peter Mantello argues that the allure of technological innovation is legitimating these developments, promising greater impartiality and efficiency, while the reality of their operation is extending juridical reach in order to identify non-immediate threats to the established order. The pre-crime assemblage will function “to preserve the domains of its masters, who will control immense existential and predictive data that will allow them to shape public perceptions, mold social behavior, and quell possible opposition, thereby ensuring the exception incontrovertible and infinite life” (pg 2).

He uses Massumi’s conception of ontopower to theorise this process, “a mode of power driven by an operative logic of preemption is spreading throughout the various structures, systems, and processes of modern life” (pg 3). Pre-emption itself is long-standing, but the preoccupation with speculative feelings of non-imminent threats was, he argues, born out of the reaction to 9/11. If I understand correctly, the point is that risks are increasingly pre-empted rather than managed, with risk management becoming an anticipatory lens through which actors and organisations proactively prepare for imagined futures.

Exceptionalism becomes legitimate under these circumstances, as anticipated threats are used to justify actions which would otherwise have been regarded as illegitimate. A mechanism like the “public safety orders” enacted by the New South Wales Police expands the principle of anti-terror policing to civic law enforcement: “they shift the balance further away from the principles of due process where people are innocent until proven guilty and more toward a new era where crimes are committed before they happen, citizens are disappeared without recourse to defense, and where guilt and imprisonment are based on suspicion, rumor, association, or simply left to the intuitive ‘gut feeling’ of police officers” (pg 4). This goes hand-in-hand with an affirmation of the unpredictability of the future. Randomness and uncertainty mean that crimes cannot be avoided, but this is precisely why anticipatory work is seen as so important to minimise the threats on the horizon.

This anticipatory work tends to diffuse responsibility into an apparatus of knowledge production, identifying networks of connections or regional hot spots which become the locus of an intervention. A whole range of assets are deployed in the preparation of these interventions, as described on pg 5 in the case of Hitachi’s Public Safety Visualization Suite 4.5:

This includes mining data from an array of various nodes such as remote video systems (hotels/city streets/commercial and private properties/transportation lines), gunshot sensors that alert CCTV cameras, vehicle license plate recognition systems, wireless communications, Twitter and other social media, mobile surveillance systems as well as useful data from smart parking meters, public transit systems, and online newspapers and weather forecasts.

Data visualisation plays a crucial role in this by “compressing vast amounts of invisible data into visible signifiers” (pg 5). However, the uncertainty, ambiguity and construction which characterise the data itself are lost in the apparent self-evidence of the ensuing representations. The navigability, scalability, and tactility of the interface then mediates interaction with this experienced reality. The performative power of the apparatus goes unacknowledged, as diverting police resources to ‘hotspots’ only to discover ‘more crime’ there (either comparable to what could be found elsewhere or encouraged by the aggravating factor of heavy-handed policing) comes to function as a legitimation of the apparatus itself. The approach also compounds existing inequalities through its reliance on historical data about patterns of arrest in order to predict future offending.
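This feedback loop lends itself to a toy model. The sketch below is purely illustrative, with invented numbers: two areas have identical underlying crime rates, but the historical arrest data is skewed toward one; patrols follow the predictions and recorded arrests follow the patrols, so the skew never self-corrects.

```python
# Toy model of the predictive policing feedback loop: patrols are
# allocated in proportion to historical arrests, and new arrests scale
# with patrol presence rather than with the true crime rate.
# All numbers are invented for illustration.
true_rate = {"A": 0.3, "B": 0.3}     # identical underlying crime rates
arrests = {"A": 60.0, "B": 40.0}     # historically skewed arrest data

for year in range(5):
    total = arrests["A"] + arrests["B"]
    # 100 patrol-hours per year, divided according to past arrests
    patrols = {area: 100 * count / total for area, count in arrests.items()}
    for area in arrests:
        # recorded crime depends on where police look, not where crime is
        arrests[area] += patrols[area] * true_rate[area]

print(arrests)  # area A still 'has' 50% more recorded crime than B
```

Despite identical true rates, the 60/40 split in the training data reproduces itself every year: the model’s predictions generate the evidence that confirms them.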

What I found fascinating was the slippage in the software. An example on pg 6 concerns ‘at risk’ lists, intended to be the basis for social service interventions prior to any policing action, instead being used as target lists of people assumed to be likely offenders. This on-the-ground slippage highlights the importance of understanding the organisational context within which new tools are deployed, as a means to understand how their original intentions may mutate in the context of application.

The terrifying turn underway is from the deployment of past data to the harvesting of present data in real time. As Mantello puts it, this involves “the real-time extraction of personal data from an individual’s daily life—monitoring their patterns, routines, habits, emotional tendencies, preferences, idiosyncrasies, and geospatial coordinates” (pg 7). Enthusiasts claim that the broader the data that is harvested, the easier it will be to identify ‘criminal signatures’ at ever earlier points in time. This converges with what Zuboff has called surveillance capitalism, in which behavioural data is leveraged to persuade rather than simply to predict. How might this modus operandi be enacted as part of the pre-crime assemblage? There is a truly dystopian horizon to such a project, described on pg 7:

Yet there is also the distinct dystopian possibility, in its never-ending ontopolitical pursuit to colonize and regulate all aspects of social life, that it may suppress dissent and discourage nonconformist thought or behavior. Already we are seeing such practices occur today with the increasing trends of self-censorship in social media due to fear of state surveillance and authoritarian reprisal.

The gamified form this takes can be seen in Sesame Credit, produced in collaboration with Alibaba, as part of the early stages of China’s opt-in social credit system, with rewards on offer for those who perform in ways that meet expectations. But as this becomes mandatory in 2020, we can expect it to go hand-in-hand with the proactive avoidance of people deemed to have poor social credit, and of potential sites where negative social credit behaviours may thrive. The author also considers the example of opt-in black boxes in cars, where rewards are on offer for those who agree to such monitoring but which may eventually be rolled out for everyone as part of a transformation of insurance. The City of Boston security app, Citizen Connect, offers ‘street cred’ recognition points for repeated contributions: “users who actively report on suspicious persons, ongoing crime, random acts of violence, or municipal infrastructure hazards get promoted to special ‘‘patrols’’ where they earn special badges of civic distinction” (pg 9).
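The reward mechanic at work here is simple enough to sketch. The following is a minimal, purely hypothetical illustration of a points-and-badges reporting loop; the point values, thresholds and badge names are invented, not taken from Citizen Connect:

```python
# Hypothetical sketch of a gamified civic-reporting reward loop.
# Thresholds, weights and badge names are invented for illustration.
BADGES = [(0, "Observer"), (10, "Neighborhood Watch"), (50, "Street Patrol")]
POINTS = {"suspicious_person": 3, "ongoing_crime": 5, "infrastructure_hazard": 1}

def badge_for(points: int) -> str:
    """Return the highest badge whose threshold the user has reached."""
    earned = "Observer"
    for threshold, name in BADGES:
        if points >= threshold:
            earned = name
    return earned

def submit_report(user: dict, category: str) -> dict:
    """Credit a report, then promote the user if they cross a threshold."""
    user["points"] += POINTS.get(category, 1)
    user["badge"] = badge_for(user["points"])
    return user

user = {"points": 0, "badge": "Observer"}
for _ in range(4):
    submit_report(user, "suspicious_person")  # 4 reports * 3 points = 12
print(user)  # 12 points promotes the user to "Neighborhood Watch"
```

The design choice worth noticing is that the incentive gradient rewards volume of reporting, not accuracy: each unverified report of a ‘suspicious person’ moves the user closer to a badge.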

I’ve been reflecting on a dark but plausible prediction by Edward Snowden in his foreword to The Assassination Complex by Jeremy Scahill and the team from The Intercept. On loc 195 he argues that the technological barriers to ubiquitous drone surveillance are now minimal:

Inevitably that conceptual subversion finds its way home, along with the technology that enables officials to promote comfortable illusions about surgical killing and nonintrusive surveillance. Take, for instance, the Holy Grail of drone persistence, a capability that the United States has been pursuing forever. The goal is to deploy solar-powered drones that can loiter in the air for weeks without coming down. Once you can do that, and you put any typical signals-collection device on the bottom of it to monitor, unblinkingly, the emanations of, for example, the different network addresses of every laptop, smartphone, and iPod, you know not just where a particular device is in what city, but you know what apartment each device lives in, where it goes at any particular time, and by what route. Once you know the devices, you know their owners. When you start doing this over several cities, you’re tracking the movements not just of individuals but of whole populations. By preying on the modern necessity to stay connected, governments can reduce our dignity to something like that of tagged animals, the primary difference being that we paid for the tags and they’re in our pockets. It sounds like fantasist paranoia, but on the technical level it’s so trivial to implement that I cannot imagine a future in which it won’t be attempted. It will be limited to the war zones at first, in accordance with our customs, but surveillance technology has a tendency to follow us home.

The basic claim here is one we should take seriously: if technically feasible means of surveillance aren’t challenged, we face an inevitable slide towards their introduction. What currently exists within states of exception (a category that can range from war zones to mega-events) risks expanding into the everyday. How the introduction of these techniques into areas outside normality is responded to will prove crucial in determining the contours of the new normal.

I just came across this remarkable estimate in an Economist feature on surveillance. I knew digitalisation made surveillance cheaper but I didn’t realise quite how much cheaper. How much of the creeping authoritarianism which characterises the contemporary national security apparatus in the UK and US is driven by a familiar impulse towards efficiency?

The agencies not only do more, they also spend less. According to Mr Schneier, to deploy agents on a tail costs $175,000 a month because it takes a lot of manpower. To put a GPS receiver in someone’s car takes $150 a month. But to tag a target’s mobile phone, with the help of a phone company, costs only $30 a month. And whereas paper records soon become unmanageable, electronic storage is so cheap that the agencies can afford to hang on to a lot of data that may one day come in useful.

http://www.economist.com/news/special-report/21709773-who-benefiting-more-cyberisation-intelligence-spooks-or-their
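The scale of that cost collapse is worth spelling out. A quick calculation using only the figures quoted above:

```python
# Monthly cost per surveillance target, from the Schneier figures
# quoted in The Economist piece above.
costs = {
    "GPS receiver in car": 150,
    "tagging a mobile phone": 30,
}
baseline = 175_000  # agents on a physical tail, per month

for method, cost in costs.items():
    print(f"{method}: ${cost}/month, {baseline // cost:,}x cheaper than a tail")

# For the price of tailing one person for a month, an agency can instead tag:
print(baseline // costs["tagging a mobile phone"], "mobile phones")  # 5833
```

A 5,800-fold cost reduction is not an incremental efficiency gain; it changes what kind of surveillance is thinkable, from targeted individuals to whole populations.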

In reality, this efficiency is of course anything but, heralding instead a potentially open-ended project to capture the world and achieve the utopia of total social legibility. It is an ambition which always makes me think of this short story:

The story deals with the development of universe-scale computers called Multivacs and their relationships with humanity through the courses of seven historic settings, beginning in 2061. In each of the first six scenes a different character presents the computer with the same question; namely, how the threat to human existence posed by the heat death of the universe can be averted. The question was: “How can the net amount of entropy of the universe be massively decreased?” This is equivalent to asking: “Can the workings of the second law of thermodynamics (used in the story as the increase of the entropy of the universe) be reversed?” Multivac’s only response after much “thinking” is: “INSUFFICIENT DATA FOR MEANINGFUL ANSWER.”

The story jumps forward in time into later eras of human and scientific development. In each of these eras someone decides to ask the ultimate “last question” regarding the reversal and decrease of entropy. Each time, in each new era, Multivac’s descendant is asked this question, and finds itself unable to solve the problem. Each time all it can answer is an (increasingly sophisticated, linguistically): “THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

In the last scene, the god-like descendant of humanity (the unified mental process of over a trillion, trillion, trillion humans that have spread throughout the universe) watches the stars flicker out, one by one, as matter and energy ends, and with it, space and time. Humanity asks AC, Multivac’s ultimate descendant, which exists in hyperspace beyond the bounds of gravity or time, the entropy question one last time, before the last of humanity merges with AC and disappears. AC is still unable to answer, but continues to ponder the question even after space and time cease to exist. Eventually AC discovers the answer, but has nobody to report it to; the universe is already dead. It therefore decides to answer by demonstration. The story ends with AC’s pronouncement,

And AC said: “LET THERE BE LIGHT!” And there was light

https://en.wikipedia.org/wiki/The_Last_Question

I love the analogy offered by Elinor Carmi at the start of this excellent Open Democracy piece:

Yesterday I walked to the supermarket, like I do every Tuesday morning. All of a sudden I started noticing a few people starting to follow me. I try to convince myself that it is probably just my imagination, and carry on walking. After a few minutes, I cross the road and make another turn, but then I look behind me and see that now there are dozens of people starting to follow me, taking pictures of me and writing rapidly, documenting my every move. After a couple more steps, they became hundreds. My heart was racing, I could hardly breathe, and I started to panic. Freaking out, I shouted at them, “Who are you? What do you want from me?” I tried to get a clearer view of this huge group – some looked a bit familiar but I didn’t remember where I’d seen them before. They shouted back at me, “Don’t worry, we don’t really know who you are, we just need some information on you, so we can show you different ads on billboards”. Puzzled by their response I scream, “What do you mean you don’t know who I am!? You know my gender, skin/eyes/hair color, height, weight, where I live, the clothes and glasses I wear, that I have 10 piercing in one ear and that I shop at Sainsbury on Tuesday mornings!” They smile and try to reassure me, “But we don’t know your NAME, silly! So stop being so paranoid, we do this to everyone walking on the street, it’s public space you know…”.

This scenario might seem science fiction to some people, a dystopian reality, horror film or a South Park episode. But for the others that recognise this situation, this is actually what happens every day when you browse the internet.

https://www.opendemocracy.net/digitaliberties/elinor-carmi/whose-data-is-it-anyway

From The Black Box Society, by Frank Pasquale, pg 52:

An unaccountable surveillance state may pose a greater threat to liberty than any particular terror threat. It is not a spectacular danger, but rather an erosion of a range of freedoms. Most insidiously, the “watchers” have the power to classify those who dare to point this out as “enemies of the state,” themselves in need of scrutiny. That, to me, is the core harm of surveillance: that it freezes into place an inefficient (or worse) politico-economic regime by cowing its critics into silence. Mass surveillance may be doing less to deter destructive acts than it is slowly narrowing the range of tolerable thought and behaviour.

Where might this lead? What I think of as ‘techno-fascism’ is a speculative answer. How bad could this get if left unchecked? What would life within such a social order look and feel like? Could we imagine a frozen social formation, one able to perpetually recreate itself without change or challenge?

A really interesting BuzzFeed article about the use of smartphones on building sites to increase efficiency (the 30% of on-site time that is regarded as idle, for reasons attributed to ‘miscommunication and disorganisation’) and their implications for workplace surveillance. What’s particularly striking is that inefficiencies are often the result of the complex subcontracting arrangements now ubiquitous within the construction industry:

According to Frinault, 30% of time workers spend on-site at commercial construction projects is idle — not because workers are lazy, but largely because of miscommunication and disorganization. There’s also the problem of “rework” — doing a task, and then having to do it over again. For example, a subcontractor might be told to cover a hole with drywall; the next day, an electrician who wasn’t finished wiring an outlet comes in and tears that drywall out again, and the drywall hanger has to come back and redo it. With Fieldwire, Frinault hopes to improve the communication channels between subcontractors.

His app, which raised $6.6 million in October, doesn’t locate workers on a map; it locates tasks on a blueprint — tasks that foremen can then check off in real time as they are completed. The purpose of Fieldwire is to record and share information as synchronously as possible. “It may seem invasive,” said Frinault’s co-founder Javed Singha, “but the reality is these guys are recording all this information manually anyway.”

http://www.buzzfeed.com/carolineodonovan/these-apps-watch-men-at-work?utm_term=.rwKNoRWrm#.icdrQPGNq

This app is apparently being used on over 35,000 construction sites internationally. An even more invasive app has been developed by former Navy engineers:

Rhumbix, an app meant to be in the hands of the workers themselves, is making an even bolder ask in terms of transparency. Not only do workers clock every hour of their day on Rhumbix, but the app also tracks their location, and even some of their movements. Rhumbix is the invention of two former Navy engineers, Drew DeWalt and Zach Scheel, who took a class together at Stanford and decided to build a startup. “I said, every phone has GPS in it,” Scheel told BuzzFeed News. “Let’s try to create a system like the ones we use now in the military to help improve the system we use for construction.”

With Rhumbix, workers clock in and out at the beginning and end of each work day. While they’re on the clock, the app tracks their movements, both in terms of motion (moving or stationary?) and location (on the job or out to lunch?). This data is presented to managers in two ways: as a live safety snapshot, which shows where workers are at any given time, and as aggregated and anonymized labor time data that can help the bosses figure out how much is being spent on different activities. This tracking can benefit the worker — for example, a worker who had passed out on a hot roof due to sunstroke was discovered when the Rhumbix app alerted his foreman that he wasn’t moving. But the app can also be used to, say, prove that workers who claim they worked through lunch actually didn’t.

http://www.buzzfeed.com/carolineodonovan/these-apps-watch-men-at-work?utm_term=.rwKNoRWrm#.icdrQPGNq
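The safety alert described above (the worker discovered passed out on a roof) amounts to a simple stall-detection rule. A minimal sketch, with the caveat that the threshold and logic here are invented; Rhumbix’s actual implementation is not public:

```python
# Hypothetical sketch of a stall-detection alert: a clocked-in worker
# who stops moving for too long triggers an alert to the foreman.
# The 15-minute threshold is invented for illustration.
from datetime import datetime, timedelta

STALL_THRESHOLD = timedelta(minutes=15)

def check_for_stall(last_movement: datetime, now: datetime,
                    clocked_in: bool) -> bool:
    """Return True if an alert should fire: on the clock but motionless."""
    return clocked_in and (now - last_movement) >= STALL_THRESHOLD

now = datetime(2016, 7, 1, 14, 0)
assert check_for_stall(now - timedelta(minutes=20), now, clocked_in=True)
assert not check_for_stall(now - timedelta(minutes=5), now, clocked_in=True)
assert not check_for_stall(now - timedelta(minutes=20), now, clocked_in=False)
```

The same rule that rescues a worker with sunstroke also produces, as a byproduct, a minute-by-minute record of every pause in the working day, which is exactly the dual-use ambiguity the article gestures at.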

At present the Rhumbix data is anonymised and aggregated when presented on the dashboard for managers. But how long can this last? As a general rule, if a weakly held moral commitment is the only thing preventing a service-provider from offering a much-demanded service to existing customers, it’s unlikely to prove durable in the face of, say, declining sales or difficulty raising further venture capital. Charmingly, their take on this question is to say “You’re going to have to trust us a little bit”.

It’s worth considering this in terms of what was until recently established practice within the construction industry. Given that the existence of a UK industry-wide blacklist has been conclusively established, ruining the lives of many who had the temerity to demand basic safety obligations be met on site, you’d have to be painfully naive to imagine these new technologies won’t be used for workplace repression. For instance, if a manager wanted to rid a site of a ‘trouble maker’, they could use their Rhumbix data to demonstrate an unacceptable amount of ‘idle time’ as grounds for dismissal. Furthermore, it’s easy to imagine how Rhumbix could end up tracking collective organisation on site. Even if the data is aggregated, surely a face-to-face meeting would show up as a grouping of the workforce? It doesn’t take much imagination to see how this technology could be used for workplace repression, and I fear we’re on a slippery slope.

From The New Ruthless Economy, by Simon Head, loc 1209. I wonder what ‘innovations’ have emerged in the ten years since this book was published?

There are at least five distinct types of monitoring software. First, there are what might be called “classic” monitoring products, software that embodies the Taylorist preoccupation with timing and measurement: How long do agents take to answer a call? How long does the call last? How long does the agent take to “wrap up” the call by completing clerical tasks that may have arisen in the course of the call? Second, there are “quality-monitoring” products, software that eases the manager’s task of measuring the agent’s “soft skills”, his warmth and politeness, and whether his demeanor has strengthened ties of intimacy and loyalty between company and customer. Third, there are what might be called “total monitoring” products, software that simultaneously monitors what is happening on the agent’s screen and what the agent is saying on the telephone. With this “total monitoring,” it is possible to know whether the agent is following a prescribed script and accurately relaying the information and recommendations provided by product databases. Fourth, there is software that monitors Internet and E-mail “conversations” between agent and customer, and which can, if necessary, integrate this monitoring with the parallel monitoring of telephone conversations. Fifth, there are the digital technologies that are embodied in many of these monitoring products and that have made possible this forward leap in the scope and intensity of monitoring.

This great lecture by Frank Pasquale (podcast) references this note, the text of which is the title to this post, sent to Martin Luther King by the FBI. As Pasquale notes, King was under constant surveillance that both facilitated and motivated this horrendous intervention. Can we imagine a data-driven generalisation of this condition and the possibility of comparable interventions being made by intelligence and security agencies seeking to repress dissent in an era of increasing social unrest? I certainly can.

Bookmarking this so I can come back to it later. If I pursue this thread, Social Media For Academics is never going to get finished:

Reflecting their student populations, universities have long been bastions of oodles of consumer technology. We are awash in mobile phones, laptops, tablets, gaming consoles, and the like. If one combines mobile consumer technology with Big Data analytics, one gets a host of new possibilities ranging from new ways of providing students with basic support to new ways of getting students to learn what the faculty needs them to learn. If we can get the right information flowing through the minds of students, perhaps we can improve their success. We can potentially help transform the classroom from the 19th century to the 21st.

The byproducts of all this data are the new insights that can drive decision making in new ways. When one adds into the mix advanced data visualization capabilities, one gets something different for university administrators and faculty: better and approachable insight into university operations and even the minds of the students. Higher education is at the cusp of gathering an unprecedented amount of information using affordable tools and techniques.

http://www.sap.com/bin/sapcom/hr_hr/downloadasset.2014-01-jan-29-18.applying-big-data-in-higher-education-a-case-study-pdf.html

I included some material on this in a lecture on big data I did for the MA course I was convening this year. But it just struck me how enormously significant this is for digital scholarship: the more academics embrace social media in circumstances where managers seek to unleash a big data tsunami of change, the more they will be monitored as part of such initiatives.

Earlier today I started reading Blacklisted, an account of the extensive blacklisting in the construction industry that was exposed by an investigation by the Information Commissioner. For those unfamiliar with the case:

In 2009, the Information Commissioner’s Office (ICO) exposed details of a large-scale surveillance operation run by a company called The Consulting Association.  This company collated files on thousands of construction workers, as well as academics and journalists, and sold the information to 44 construction companies.  The Director of The Consulting Association, Ian Kerr, was fined just £5,000 and all 44 companies escaped without penalty or punishment.

Many of these workers had their lives ruined, unable to find employment in the construction industry, blacklisted for their trade union activities or for raising health and safety concerns.

http://www.ucatt.org.uk/blacklisting

The thought I can’t shake is how archaic the technology used to implement this blacklist was. A man in an office effectively kept a ring binder with names, updated via tips from aggrieved employers supplemented by newspaper cuttings from the radical press.

I can’t be the only person who’s had the idea of algorithmic blacklisting: using social media data and natural language processing to flag up ‘problematic’ workers in order to place them on a blacklist i.e. replacing newspaper cuttings with big data.  How would we even know if this technology was implemented?
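Part of what makes this thought so unsettling is how low the technical barrier is. The sketch below is deliberately crude and entirely hypothetical; the keyword list, weights and threshold are invented, and a real system would use far more sophisticated NLP, but even naive keyword matching would ‘work’ for this grim purpose:

```python
# A deliberately crude, hypothetical sketch of algorithmic blacklisting,
# included to illustrate how trivially implementable (and hence how
# undetectable) such a system would be. All terms, weights and the
# threshold are invented for illustration.
FLAG_TERMS = {"union": 2, "strike": 3, "safety complaint": 3, "organise": 2}
THRESHOLD = 4  # hypothetical score above which a worker is 'flagged'

def flag_score(posts: list[str]) -> int:
    """Sum keyword weights across a worker's public posts."""
    text = " ".join(posts).lower()
    return sum(weight for term, weight in FLAG_TERMS.items() if term in text)

def is_flagged(posts: list[str]) -> bool:
    return flag_score(posts) >= THRESHOLD

posts = ["Raised a safety complaint on site today", "Join the union!"]
print(flag_score(posts), is_flagged(posts))  # scores 5, so flagged
```

A dozen lines replicate, at scale and invisibly, what Ian Kerr’s ring binder did by hand, which is precisely why the question of how we would ever know matters.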