I just came across this remarkable estimate in an Economist feature on surveillance. I knew digitalisation made surveillance cheaper but I didn’t realise quite how much cheaper. How much of the creeping authoritarianism which characterises the contemporary national security apparatus in the UK and US is driven by a familiar impulse towards efficiency?

The agencies not only do more, they also spend less. According to Mr Schneier, to deploy agents on a tail costs $175,000 a month because it takes a lot of manpower. To put a GPS receiver in someone’s car takes $150 a month. But to tag a target’s mobile phone, with the help of a phone company, costs only $30 a month. And whereas paper records soon become unmanageable, electronic storage is so cheap that the agencies can afford to hang on to a lot of data that may one day come in useful.

http://www.economist.com/news/special-report/21709773-who-benefiting-more-cyberisation-intelligence-spooks-or-their
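Taking Schneier's figures at face value, the scale of that cost collapse is easy to make concrete. A quick sketch (the dollar amounts come straight from the quote above; everything else is just arithmetic):

```python
# Monthly per-target surveillance costs, as quoted by Schneier in the Economist piece.
costs = {
    "agents on a tail": 175_000,
    "GPS receiver in the car": 150,
    "tagging a mobile phone": 30,
}

# Express each method as a multiple of the cheapest option.
cheapest = min(costs.values())
for method, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
    print(f"{method}: ${cost:,}/month ({cost / cheapest:,.0f}x the phone tag)")
```

On these numbers, following someone via their phone is over five thousand times cheaper than physical surveillance, which goes some way to explaining why the default has shifted from targeted watching to bulk collection.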

In reality, it is of course anything but, instead heralding a potentially open-ended project to capture the world and achieve the utopia of total social legibility. An ambition which always makes me think of this short story:

The story deals with the development of universe-scale computers called Multivacs and their relationships with humanity through the courses of seven historic settings, beginning in 2061. In each of the first six scenes a different character presents the computer with the same question; namely, how the threat to human existence posed by the heat death of the universe can be averted. The question was: “How can the net amount of entropy of the universe be massively decreased?” This is equivalent to asking: “Can the workings of the second law of thermodynamics (used in the story as the increase of the entropy of the universe) be reversed?” Multivac’s only response after much “thinking” is: “INSUFFICIENT DATA FOR MEANINGFUL ANSWER.”

The story jumps forward in time into later eras of human and scientific development. In each of these eras someone decides to ask the ultimate “last question” regarding the reversal and decrease of entropy. Each time, in each new era, Multivac’s descendant is asked this question, and finds itself unable to solve the problem. Each time all it can answer is an (increasingly sophisticated, linguistically): “THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

In the last scene, the god-like descendant of humanity (the unified mental process of over a trillion, trillion, trillion humans that have spread throughout the universe) watches the stars flicker out, one by one, as matter and energy ends, and with it, space and time. Humanity asks AC, Multivac’s ultimate descendant, which exists in hyperspace beyond the bounds of gravity or time, the entropy question one last time, before the last of humanity merges with AC and disappears. AC is still unable to answer, but continues to ponder the question even after space and time cease to exist. Eventually AC discovers the answer, but has nobody to report it to; the universe is already dead. It therefore decides to answer by demonstration. The story ends with AC’s pronouncement,

And AC said: “LET THERE BE LIGHT!” And there was light

https://en.wikipedia.org/wiki/The_Last_Question

I love the analogy offered by Elinor Carmi at the start of this excellent Open Democracy piece:

Yesterday I walked to the supermarket, like I do every Tuesday morning. All of a sudden I started noticing a few people starting to follow me. I try to convince myself that it is probably just my imagination, and carry on walking. After a few minutes, I cross the road and make another turn, but then I look behind me and see that now there are dozens of people starting to follow me, taking pictures of me and writing rapidly, documenting my every move. After a couple more steps, they became hundreds. My heart was racing, I could hardly breathe, and I started to panic. Freaking out, I shouted at them, “Who are you? What do you want from me?” I tried to get a clearer view of this huge group – some looked a bit familiar but I didn’t remember where I’d seen them before. They shouted back at me, “Don’t worry, we don’t really know who you are, we just need some information on you, so we can show you different ads on billboards”. Puzzled by their response I scream, “What do you mean you don’t know who I am!? You know my gender, skin/eyes/hair color, height, weight, where I live, the clothes and glasses I wear, that I have 10 piercing in one ear and that I shop at Sainsbury on Tuesday mornings!” They smile and try to reassure me, “But we don’t know your NAME, silly! So stop being so paranoid, we do this to everyone walking on the street, it’s public space you know…”.

This scenario might seem science fiction to some people, a dystopian reality, horror film or a South Park episode. But for the others that recognise this situation, this is actually what happens every day when you browse the internet.

https://www.opendemocracy.net/digitaliberties/elinor-carmi/whose-data-is-it-anyway

I just came across this fascinating article, now 10 years old, detailing how former Google CEO Eric Schmidt cut off relations with CNET after a reporter there had the temerity to detail the information she was able to find out about him via Google:

Last month, Elinor Mills, a writer for CNET News, a technology news Web site, set out to explore the power of search engines to penetrate the personal realm: she gave herself 30 minutes to see how much she could unearth about Mr. Schmidt by using his company’s own service. The resulting article, published online at CNET’s News.com under the sedate headline “Google Balances Privacy, Reach,” was anything but sensationalist. It mentioned the types of information about Mr. Schmidt that she found, providing some examples and links, and then moved on to a discussion of the larger issues. She even credited Google with sensitivity to privacy concerns.

When Ms. Mills’s article appeared, however, the company reacted in a way better suited to a 16th-century monarchy than a 21st-century democracy with an independent press. David Krane, Google’s director of public relations, called CNET.com’s editor in chief to complain about the disclosure of Mr. Schmidt’s private information, and then Mr. Krane called back to announce that the company would not speak to any reporter from CNET for a year.

http://www.nytimes.com/2005/08/28/business/digital-domain-google-anything-so-long-as-its-not-google.html?_r=0

At some point, I’ll collate all the cases like this I can find: it’s grimly fascinating to watch digital elites react with anger to the transparency they seek to impose on everyone else. In this case Schmidt seems to be rather averse to the assumptions about transparency that Google have long sought to inculcate in their users. From pg 177-178 of In The Plex:

Omitting a delete button was supposed to teach you to view email—and information itself—the way Google did. The implicit message was that the only thing that should be deleted was the concept of limited storage. Not everybody at Google subscribed to this philosophy—Eric Schmidt had long before instituted a personal practice of making his emails “go away as quickly as possible” unless specifically asked to retain them.

From pg 178 of the same book, concerning how Google see privacy organisations. Note how the epistemic asymmetry, in terms of access to and understanding of internal technical processes, allows criticism to be dismissed: 

To most people at Google, though, automatic archiving was a cause for celebration, and gripes from privacy do-gooders were viewed as misguided or even cynically—exploiting a phony issue for their own status and fund-raising. “Even to this day, I’ll read people saying that Google keeps your [deleted] email forever. Like, totally false stuff!” says Buchheit. Buchheit called his critics “fake privacy organizations” because in his mind “they were primarily interested in getting attention for themselves and were going around telling lies about things.”

From Untangling the Web, by Aleks Krotoski, pg 127-128:

As I wrote earlier in this book, if you stick “Aleks Krotoski” into an online search engine, you’ll be able to learn a lot about me. Along with basic biographical details such as where I was born and who my parents are, you can find out where I’ve worked, where I’ve lived, who I hang out with, who I’m close to, that I have a cat (and what his name is), what I like to do at the weekends, what kinds of food I like, and my email address and mobile phone number. Although in social network profiles I tend to hide behind an old close-up of a shock of pink hair (I dyed it until 2009), you’ll easily find what I look like from the snapshots that I and others have taken and uploaded, dating mostly from the last ten years but also from college and high school. With a little more digging, you’ll be able to figure out who’s in my extended family and what they do, where they live and what they’re interested in. You can easily find my home address. You might be able to get a sense of my routine – when I’m in my house and when I’m out. You’ll probably know when I’m on work trips. You might be able to pick up on which running routes are my favourites, and on which days and at what times I tend to follow those paths. It would, frankly, be easy to find me, if you were so inclined. Please don’t. But you can. In fact, if someone I didn’t know wanted to gain my trust, it’d be pretty easy to find out all this personal information and then spin it into a yarn. They might be online scammers who are trying to exploit me. They might be commercial services that want me to feel an emotional attachment to their brand. And that’s just using the information that we put out there ourselves.

One final snippet from The Boy Kings, by Katherine Losse, that I can’t resist posting. It seems that Mark Zuckerberg has a secret back room in his private Facebook office, allowing him to retreat into opacity while sustaining the glass-fronted, open-plan layout of the corporate offices:

Mark’s office sat adjacent to our pod, with its secret back room (for especially important meetings, because the front room of his office had a glass window onto the hallway that made meetings transparent) hidden behind a wallpapered door and a single table illuminated by a Mad Men–style modern lamp, receiving a constant stream of celebrities and tech luminaries and wealthy Russians in silk suits. (Pg 196)

This is the same Zuckerberg who bought four homes adjacent to his in order to ensure his own privacy. His own power dramatically illustrates the politics of transparency and opacity in digital capitalism. We can see this even more starkly in the private retreats of the digital elites: if transparency gets tiring, why not just head off to your super yacht or Hawaii estate for a while? As Zuckerberg describes it, quoted on pg 198: “We are pushing the world in the direction of making it a more open and transparent place, this is where the world is going and at Facebook we need to lead in that direction.” The key terms here are “pushing” and “lead”. The pushers and the leaders are able to take a break when they’d like, without worrying about someone else perpetually trying to push and lead them.

I think this could be analysed in a similar way to how Bauman explored mobility in his work on globalisation: those at the bottom of the hierarchy are transparent because they lack the resources to escape the filter bubble, while those at the top are usually transparent as a function of their own commercial success. But the former condition is forced, leaving those subject to it susceptible to manipulation, while the latter is chosen and can be voluntarily withdrawn from in private life.

As the article suggests, this initiative may be the result of the threat posed by Apple Music. What interests me is how totally open-ended this is: how do we perceive and evaluate risks when policies take such a form?

Sections 3.3 and 3.4 of Spotify’s privacy policy say that the app will now collect much more data about its users.

The section ‘Information Stored on Your Mobile Device’ says that: “With your permission, we may collect information stored on your mobile device, such as contacts, photos or media files.”

In the next section, it says: “We may also collect information about your location based on, for example, your phone’s GPS location or other forms of locating mobile devices (e.g. Bluetooth).”

http://www.independent.co.uk/life-style/gadgets-and-tech/news/spotify-has-announced-a-creepy-new-privacy-policy-and-people-are-worried-10464129.html

This post by Zeynep Tufekci on her Medium site is the best thing I’ve read yet about the recent Facebook controversy.

I’m struck by how this kind of power can be seen as no big deal. Large corporations exist to sell us things, and to impose their interests, and I don’t understand why we as the research/academic community should just think that’s totally fine, or resign to it as “the world we live in”. That is the key strength of independent academia: we can speak up in spite of corporate or government interests.

To me, this resignation to online corporate power is a troubling attitude because these large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams. These tools are new, this power is new and evolving. It’s exactly the time to speak up!

That is one of the biggest shifts in power between people and big institutions, perhaps the biggest one yet of 21st century. This shift, in my view, is just as important as the fact that we, the people, can now speak to one another directly and horizontally.

https://medium.com/message/engineering-the-public-289c91390225

This strikes me as an important fault line, in so far as a superficial difference (i.e. whether or not this bothers you) tracks much broader divergences in political orientation which are likely to become more pronounced as these trends develop over time. However the risk is that this one contentious study becomes a distraction because, as Tufekci points out, this is something Facebook does on a daily basis. What could be lost here is a sense of the political apparatus coming into being and its broader implications:

I identify this model of control as a Gramscian model of social control: one in which we are effectively micro-nudged into “desired behavior” as a means of societal control. Seduction, rather than fear and coercion are the currency, and as such, they are a lot more effective. (Yes, short of deep totalitarianism, legitimacy, consent and acquiescence are stronger models of control than fear and torture—there are things you cannot do well in a society defined by fear, and running a nicely-oiled capitalist market economy is one of them).

The secret truth of power of broadcast is that while very effective in restricting limits of acceptable public speech, it was never that good at motivating people individually. Political and ad campaigns suffered from having to live with “broad profiles” which never really fit anyone. What’s a soccer mom but a general category that hides great variation?

With new mix of big data and powerful, oligopolistic platforms (like Facebook) all that is solved, to some degree.

Today, more and more, not only can corporations target you directly, they can model you directly and stealthily. They can figure out answers to questions they have never posed to you, and answers that you do not have any idea they have. Modeling means having answers without making it known you are asking, or having the target know that you know. This is a great information asymmetry, and combined with the behavioral applied science used increasingly by industry, political campaigns and corporations, and the ability to easily conduct random experiments (the A/B test of the said Facebook paper), it is clear that the powerful have increasingly more ways to engineer the public, and this is true for Facebook, this is true for presidential campaigns, this is true for other large actors: big corporations and governments. (And, in fact, a study published in Nature, dissected at my peer-reviewed paper linked below shows that Facebook can alter voting patterns, and another study shows Facebook likes can be used to pretty accurately model your personality according to established psychology measures).
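The “random experiments” Tufekci mentions are technically simple, which is part of her point. A minimal sketch of the arithmetic behind an A/B test of the kind she describes, using a two-proportion z-test; every count here is hypothetical, invented purely for illustration:

```python
import math

# Hypothetical experiment: each user is randomly shown one of two feed variants,
# and we count how many engage with a promoted story.
control_n, control_hits = 100_000, 5_000   # baseline feed: 5.0% engagement
variant_n, variant_hits = 100_000, 5_300   # tweaked feed: 5.3% engagement

p1 = control_hits / control_n
p2 = variant_hits / variant_n

# Two-proportion z-test: is the observed lift larger than chance would produce?
pooled = (control_hits + variant_hits) / (control_n + variant_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
z = (p2 - p1) / se

print(f"lift = {p2 - p1:.3%}, z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

At platform scale even a 0.3 percentage-point nudge is detectable with high confidence, which is what makes this kind of experimentation so cheap to run and so hard to notice as a subject.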

The process of trying to rationalise my own accounts over the last few days has left me newly aware of how outdated the username-and-password system is. With a lot of effort I’ve managed to get it down to 55 accounts with their own username and password, as well as a few that use Twitter or Google ID to sign in. I’ve also been surprised at my inability to delete my data and accounts on some sites.

This is an interesting read on alternatives to the traditional password system. It seems obvious that something systematic has to change here. As sinister as I find Google’s attempt to establish the ubiquity of the Google ID, its continued rise seems inexorable, simply because I can’t see any scalable solution to this problem other than widespread social sign-in. I’ve also deleted a lot of accounts in the last few days which don’t use social sign-in and which I would have kept if this weren’t the case.

I’ve been increasingly aware over the last six months of my reluctance to sign up to anything that requires a new username and password. I feel like I’ve now begun to put this into practice, with a list of all my active accounts as well as an inclination to cut further. Ideally I would like two-step authentication on my core accounts (Twitter, WordPress, Google) and to be able to sign into everything else using one of these.

What worries me is that I’m sure I must have missed some of the accounts.
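The two-step authentication mentioned above usually means TOTP (RFC 6238): the service and your phone share a secret, and each independently derives a short-lived six-digit code from it, so a stolen password alone is no longer enough. A minimal sketch using only the Python standard library; the secret in the example is a made-up illustration, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)  # 30-second windows
    msg = struct.pack(">Q", counter)                          # counter as big-endian 64-bit int
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical base32 secret: both the service and the authenticator app hold
# a copy and compute the same code in each 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code rotates every 30 seconds, intercepting one is of little use; the weak point shifts from the password to protecting the shared secret.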