My notes on Mantello, P. (2016). The machine that ate bad people: The ontopolitics of the precrime assemblage. Big Data & Society. https://doi.org/10.1177/2053951716682538

Since 9/11 the politics of prediction and risk have created an alliance between security agencies, technology firms and other commercial actors which seeks to create a precrime assemblage: the first generation sought to identify threats through data mining (“search habits, financial transactions, credit card purchases, travel history, and email communications”), while the next generation is “becoming intelligent assemblages capable of integrating data from a multitude of nodes in order to foresee and preempt harmful futures” (pg 2). These advances are being facilitated by cloud computing, machine learning and effectively limitless storage.

The beta versions of these assemblages are being tested in real-world situations, making it urgent that we understand their implications. The first is what it means for criminal justice as a whole when the focus shifts to the anticipation of crime rather than dealing with its occurrence after the fact. The second is the expansion of surveillance into everyday life, driven by the public-private alliances setting the agenda. The scope of surveillance is increasing, but so too is civic participation in it, driven by a gamified logic which “encourages citizens to do the securitization footwork of the state by offering them the opportunity to participate in do-it-yourself, reward-centered, pro-active, networked and, at times, gamified versions of automated governance” (pg 2).

Peter Mantello argues that the allure of technological innovation legitimates these developments, promising greater impartiality and efficiency, while in practice their operation extends juridical reach in order to identify non-immediate threats to the established order. The pre-crime assemblage will function “to preserve the domains of its masters, who will control immense existential and predictive data that will allow them to shape public perceptions, mold social behavior, and quell possible opposition, thereby ensuring the exception incontrovertible and infinite life” (pg 2).

He uses Massumi’s conception of ontopower to theorise this process: “a mode of power driven by an operative logic of preemption is spreading throughout the various structures, systems, and processes of modern life” (pg 3). Pre-emption itself is long-standing, but the preoccupation with speculative feelings of non-imminent threats was, he argues, born out of the reaction to 9/11. If I understand correctly, the point is that risks are increasingly pre-empted rather than managed, with risk management becoming an anticipatory lens through which actors and organisations proactively prepare for imagined futures.

Exceptionalism becomes legitimate under these circumstances, as anticipated threats are used to justify actions which would otherwise have been regarded as illegitimate. A mechanism like the “public safety orders” enacted by the New South Wales police expands the principle of anti-terror policing to civic law enforcement: “they shift the balance further away from the principles of due process where people are innocent until proven guilty and more toward a new era where crimes are committed before they happen, citizens are disappeared without recourse to defense, and where guilt and imprisonment are based on suspicion, rumor, association, or simply left to the intuitive ‘gut feeling’ of police officers” (pg 4). This goes hand-in-hand with an affirmation of the unpredictability of the future. Randomness and uncertainty mean that crimes cannot be avoided entirely, which is precisely why anticipatory work is seen as so important for minimising the threats on the horizon.

This anticipatory work tends to diffuse responsibility into an apparatus of knowledge production, identifying networks of connections or regional hot spots which become the locus of an intervention. A whole range of assets are deployed in the preparation of these interventions, as described on pg 5 in the case of Hitachi’s Public Safety Visualization Suite 4.5:

This includes mining data from an array of various nodes such as remote video systems (hotels/city streets/commercial and private properties/transportation lines), gunshot sensors that alert CCTV cameras, vehicle license plate recognition systems, wireless communications, Twitter and other social media, mobile surveillance systems as well as useful data from smart parking meters, public transit systems, and online newspapers and weather forecasts.

Data visualisation plays a crucial role in this by “compressing vast amounts of invisible data into visible signifiers” (pg 5). However the uncertainty, ambiguity and construction which characterise the data itself are lost in the apparent self-evidence of the ensuing representations. The navigability, scalability, and tactility of the interface then mediate interaction with this experienced reality. The performativity of the process falls from view, as diverting police resources to ‘hotspots’ only to discover ‘more crime’ there (whether comparable to what could be found elsewhere or encouraged by the aggravating factor of heavy-handed policing) comes to function as a legitimation of the apparatus itself. The approach also compounds existing inequalities through its reliance on historical data about patterns of arrest in order to predict future offending.
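The self-legitimating loop can be made concrete with a toy simulation. This is entirely my own sketch with invented numbers, not anything from the article: two districts have identical true crime rates, but one starts with more recorded crime because of historically heavier policing; patrols are allocated in proportion to recorded crime, and more patrols mean more crime gets recorded.

```python
# Toy model (hypothetical numbers, my own construction, not from the article):
# two districts with identical TRUE crime rates, but district A starts with
# more recorded crime due to historically heavier policing.
true_rate = 100                       # actual incidents per period, each district
recorded = {"A": 60.0, "B": 40.0}     # biased historical arrest records
total_patrols = 10

for period in range(20):
    total = recorded["A"] + recorded["B"]
    for district in recorded:
        share = recorded[district] / total
        patrols = total_patrols * share        # "hotspot" allocation by record
        detection = min(1.0, 0.1 * patrols)    # more patrols, more crime recorded
        recorded[district] += true_rate * detection

share_A = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"True crime is identical, but A's share of recorded crime: {share_A:.2f}")
# → 0.60: the initial bias never self-corrects, and the 'extra' crime found in A
#   reads as confirmation that the hotspot designation was right all along
```

In this simple model the disparity is merely reproduced rather than amplified, but that is the point: nothing in the loop ever corrects the biased starting point, and the apparatus interprets its own output as validation.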

What I found fascinating was the slippage in the software. An example on pg 6 concerns ‘at risk’ lists, intended to be the basis for social service interventions prior to any policing action, instead being used as target lists of people assumed to be likely offenders. This on-the-ground slippage highlights the importance of understanding the organisational context within which new tools are deployed, as a means to understand how their original intentions may mutate in the context of application.

The terrifying turn underway is from the deployment of past data to the harvesting of present data in real time. As Mantello puts it, this involves “the real-time extraction of personal data from an individual’s daily life—monitoring their patterns, routines, habits, emotional tendencies, preferences, idiosyncrasies, and geospatial coordinates” (pg 7). Enthusiasts claim that the broader the data harvested, the easier it will be to identify ‘criminal signatures’ at ever earlier points in time. This converges with what Zuboff has called surveillance capitalism, in which behavioural data is leveraged to persuade rather than simply to predict. How might this modus operandi be enacted as part of the pre-crime assemblage? There is a truly dystopian horizon to such a project, described on pg 7:

Yet there is also the distinct dystopian possibility, in its never-ending ontopolitical pursuit to colonize and regulate all aspects of social life, that it may suppress dissent and discourage nonconformist thought or behavior. Already we are seeing such practices occur today with the increasing trends of self-censorship in social media due to fear of state surveillance and authoritarian reprisal.

The gamified form this takes can be seen in Sesame Credit, produced in collaboration with Alibaba as part of the early stages of China’s opt-in social credit system, with rewards on offer for those who perform in ways that meet expectations. But as the system becomes mandatory in 2020, we can expect it to go hand-in-hand with the proactive avoidance of people deemed to have poor social credit, as well as of potential sites where negative social credit behaviours may thrive. The author also considers the example of opt-in black boxes in cars, with rewards on offer for those who agree to such monitoring but which may eventually be rolled out for everyone as part of a transformation of insurance. The City of Boston security app, Citizen Connect, offers ‘street cred’ recognition points for repeated contributions: “users who actively report on suspicious persons, ongoing crime, random acts of violence, or municipal infrastructure hazards get promoted to special ‘patrols’ where they earn special badges of civic distinction” (pg 9).

I imagine it would be an only slightly more extreme version of what’s being sought for the garden bridge in London:

Visitors to the garden bridge in London will be tracked by their mobile phone signals and supervised by staff with powers to take people’s names and addresses and confiscate and destroy banned items, including kites and musical instruments, according to a planning document.

The lengthy document (pdf) submitted as part of the planning process for the bridge, which will be part-financed by at least £40m of public money, said the trust behind the scheme hoped to “maximise the opportunity provided by the status of the bridge as private land” by imposing rules to “establish expectations for behaviour and conduct”.

If it goes ahead, people’s progress across the structure would be tracked by monitors detecting the Wi-Fi signals from their phones, which show up the device’s Mac address, or unique identifying code. The Garden Bridge Trust says it will not store any of this data and is only tracking phones to count numbers and prevent overcrowding.

http://www.theguardian.com/uk-news/2015/nov/06/garden-bridge-mobile-phone-signals-tracking-london

There would be more rules, as well as greater penalties attached to their enforcement. If this vision of the garden bridge comes to pass, I wonder what escalation we’ll see in the next project of comparable stature.

The bridge rules, of which 30 are listed, include a prohibition on any exercise other than jogging, playing a musical instrument, taking part in a “gathering of any kind”, giving a speech or address, scattering ashes, releasing a balloon and flying a kite.

They would be enforced by visitor hosts, who would be qualified under the government’s community safety accreditation scheme (CSAS). Under this, police can grant powers to civilians involved in crowd control so they can issue fines for offences such as littering, and can require suspected wrongdoers to give their name and address.

The planning document confirms that visitor hosts could impose fixed penalty notices and order anyone breaking the bridge rules to give their personal details. If the infraction involves a banned item, the host “may seize and dispose of that property in line with CSAS enforcement powers”, it says.

An “enhanced” CCTV system would monitor visitors for lawbreaking or prohibited activities, the document adds.

http://www.theguardian.com/uk-news/2015/nov/06/garden-bridge-mobile-phone-signals-tracking-london

This insightful article paints a worrying picture of the growth of data-driven policing. The technical challenge of “building nuance” into data systems “is far harder than it seems” and has important practical implications for how interventions operate on the basis of digital data. What I hadn’t previously realised was how readily investigators use social media on their own initiative, above and beyond the systems being put into place with the help of outside consultancies: only 9% of police using social media in investigations had received training from their agency. Furthermore, the discussion of the life span of data raised some really interesting (and worrying) questions about the organisational sociology of data-driven policing, given the seemingly likely increase in private sector involvement in policing in the UK:

For the kid listed in a gang database, it can be unclear how to get out of it. In the world of human interaction, we accept change through behavior: the addict can redeem himself by getting clean, or the habitual interrupter can redeem himself by not interrupting. We accept behavior change. But in the database world, unless someone has permission to delete or amend a database record, no such change is possible. Credit agencies are required to forgive financial sins after 7 years. Police are not—at least, not consistently. The National Gang Center, in its list of gang-related legislation, shows only 12 states with policies that specifically address gang databases. Most deny the public access to the information in these databases. Only a few of these twelve mention regular purging of information, and some specifically say that a person cannot even find out if they have a record in the database.

This permanence does not necessarily match real-world conditions. Kids cycle in and out of street gangs the way they cycle in and out of any other social group, and many young men age out of violent behavior. Regularly purging the gang database, perhaps on a one-year or two-year cycle, would allow some measure of computational forgiveness. However, few institutions are good at keeping the data in their databases up-to-date. (If you’ve ever been served an ad for a product you just bought, you’re familiar with this problem of information persistence and the clumsiness of predictive algorithms.) The police are no worse and no better than the rest of us. Criminologist Charles Katz found that despite a written department policy in one large Midwestern police gang unit, data was not regularly audited or purged. “The last time that the gang unit purged its files, however, was in 1993—approximately 4 years before this study was conducted,” he wrote. “One clerk who is responsible for data entry and dissemination estimated, ‘At a minimum, 400 to 500 gang members would be deleted off the gang list today if we went through the files.’ Accordingly, Junction City’s gang list of 2,086 gang members was inflated by approximately 20% to 25%.”

http://www.theatlantic.com/politics/archive/2015/04/when-cops-check-facebook/390882/

This suggests to me that any adequate evaluation of data-driven policing needs to take questions of organisational sociology and information technology extremely seriously. What matters is not just the formulation of data management policies but what we know about how such policies tend to be implemented under the specific conditions likely to obtain in policing. Given the broader trend towards the privatisation of policing, it is increasingly important that we understand how sharing of data operates across organisational boundaries, how it is prepared and how it is perceived by end-users.
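Part of what makes the implementation gap so striking is that the technical side of a retention policy is trivial. Here is a minimal sketch of the one- or two-year purge cycle the quoted article imagines as “computational forgiveness” — entirely hypothetical code and record format of my own, not any real police system:

```python
from datetime import date, timedelta

# Hypothetical two-year "computational forgiveness" window.
RETENTION = timedelta(days=2 * 365)

def purge(records, today):
    """Keep only entries re-confirmed within the retention window."""
    return [r for r in records if today - r["last_confirmed"] <= RETENTION]

# Invented example records: one stale, one recent.
records = [
    {"name": "entry-1", "last_confirmed": date(2013, 5, 1)},
    {"name": "entry-2", "last_confirmed": date(2015, 3, 1)},
]
remaining = purge(records, today=date(2015, 11, 6))
print(len(remaining))  # → 1: the stale entry is dropped
```

The point of the sketch is that the code is the easy part; as the Katz example shows, whether anyone actually runs the purge, audits it, and keeps the confirmation dates current is an organisational question, not a technical one.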

My fear is that a form of inter-organisational ‘black-boxing’ could kick in where those utilising the data for interventions trust that others have elsewhere taken responsibility for ensuring its reliability. What scrutiny would the operations of outside suppliers be subject to? Could privatisation intensify the rush towards data-driven policing in the name of efficiency savings? Would a corresponding centralisation of back-office functions compound the aforementioned epistemological risks entailed by outsourcing? These are all urgent questions which could easily be marginalised as budgetary constraint drives ‘innovation’ in policing: data-driven policing and privatised policing will likely go hand-in-hand and we need to analyse them as such.