I love the analogy offered by Elinor Carmi at the start of this excellent Open Democracy piece:

Yesterday I walked to the supermarket, like I do every Tuesday morning. All of a sudden I started noticing a few people starting to follow me. I try to convince myself that it is probably just my imagination, and carry on walking. After a few minutes, I cross the road and make another turn, but then I look behind me and see that now there are dozens of people starting to follow me, taking pictures of me and writing rapidly, documenting my every move. After a couple more steps, they became hundreds. My heart was racing, I could hardly breathe, and I started to panic. Freaking out, I shouted at them, “Who are you? What do you want from me?” I tried to get a clearer view of this huge group – some looked a bit familiar but I didn’t remember where I’d seen them before. They shouted back at me, “Don’t worry, we don’t really know who you are, we just need some information on you, so we can show you different ads on billboards”. Puzzled by their response I scream, “What do you mean you don’t know who I am!? You know my gender, skin/eyes/hair color, height, weight, where I live, the clothes and glasses I wear, that I have 10 piercing in one ear and that I shop at Sainsbury on Tuesday mornings!” They smile and try to reassure me, “But we don’t know your NAME, silly! So stop being so paranoid, we do this to everyone walking on the street, it’s public space you know…”.

This scenario might seem science fiction to some people, a dystopian reality, horror film or a South Park episode. But for the others that recognise this situation, this is actually what happens every day when you browse the internet.


An interesting idea from Craig Lambert’s Shadow Work loc 3116 which deserves to be explored in greater depth:

As noted earlier, philosopher John Locke argued that labor creates property; taking his view, if your shadow work made some information, it is your possession. In fact, who owns your data—your informational body—may some day be as contentious an issue as the ownership of petroleum, water, or any other natural resource.

This looks superb:

Open Track: The Lives and Deaths of Data

Convenors: Sabina Leonelli and Brian Rappert, Department of Sociology, Philosophy and Anthropology & Exeter Centre for the Study of the Life Sciences (Egenis), University of Exeter, UK (see also the Exeter Data Studies group: http://www.datastudies.eu)

Abstract: This track investigates the relational constitution of data: how stages in the life of data articulate to one another and the challenges involved in storing, moving, classifying, manipulating and interpreting them. The session is intended to explore the collectivities emerging through data collection, dissemination, assemblage and analysis. Analysing the ways in which information becomes taken as given things, the manner in which data and their varying contexts of use are co-constituted, and the means by which utility is invested and divested in them provides a platform to explore and challenge the powers attributed to “Big” and “Open” data by governments, lobby groups and institutions around the world. With its long standing attention to the conditions of knowledge production, STS scholarship is well positioned to reflect on the value(s) attributed to data under a variety of different circumstances, how such attribution changes in time, and what this indicates about the properties of the objects being identified and used as ‘data’ and of the communities involved in such identification and use. Questions to be addressed include: What would it mean to speak of the birth of data? How do they develop, especially when they are used for a variety of purposes by different stakeholders? Do they ever cease to be data, and how can we conceptualize situations in which data are dismissed, forgotten, erased, lost or regarded as obsolete? This session will be organised as a set of individual presentations encompassing several different aspects and areas of data use.

Process: The deadline for submitting an abstract is 21 February 2016. If you want to participate in this open track then you will need to select it when you submit your abstract to the 4S/EASST Conference. Instructions for submission of your abstract are available at http://www.sts2016bcn.org/. If you would like to discuss the relevance of your paper to the open track, then please contact either or both of us: s.leonelli@exeter.ac.uk and b.rappert@exeter.ac.uk

I’m reading Untangling the Web, by Aleks Krotoski, as an accessible precursor to engaging with the social psychological literature on online behaviour. It’s proving to be an enjoyable read so far, though maybe not quite as much of a pop social psychology book as I had hoped it would be: it’s more a collection of thoughtful tech journalism than anything else. But I just came across a good example of what I was initially looking for: different (mutually compatible) social psychological explanations for why people are so blasé about their data. From pages 133-134:

There are indeed a few things that are psychologically unique about interacting via machines. First, we don’t expect consequences. The web feels ephemeral, separate from so-called real life. What happens online stays online. That’s totally untrue, of course. As we continue to intertwine our lives with technology, our virtual and physical selves evolve into the same beast, and therefore it’s impossible to separate the consequences that affect one from the other. Something said or done in one place can easily be taken out of context and dropped into another. Ask the many people who’ve been fired from their jobs for posting party pictures on their Facebook timelines.

Second, according to the Ohio study, online we experience an extreme version of the so-called “third person effect”: we rationalise, through our infernal, eternal human nature, that if something’s going to go wrong, it’ll happen to the other guy. So we won’t change our privacy settings on a social network or turn off cookies on our browsers to keep the details of our surfing away from advertisers: only when we experience a personal violation will we be more careful to protect ourselves and our information.

Third, we’re unable to imagine the vastness of the potential audience we communicate with when we’re online, so we treat the computer like a confidant, a confessor. We have an intimate relationship with our computer terminals; our laptops, mobile phones, desktops and tablets feel private, and the networks we hang out in feel closed. In order to make a connection with others, we feel it’s OK to share private information. “We think the web is a kind of conversation,” explains Dr Kieran O’Hara, a philosopher and web scientist at the University of Southampton. “It feels a bit like writing letters, a bit like a telephone conversation. But it’s all that and much more.”

As the article suggests, this initiative may be the result of the threat posed by Apple Music. What interests me is how totally open-ended this is: how do we perceive and evaluate risks when policies take such a form?

Sections 3.3 and 3.4 of Spotify’s privacy policy say that the app will now collect much more data about its users.

The section ‘Information Stored on Your Mobile Device’ says that: “With your permission, we may collect information stored on your mobile device, such as contacts, photos or media files.”

In the next section, it says: “We may also collect information about your location based on, for example, your phone’s GPS location or other forms of locating mobile devices (e.g. Bluetooth).”


From Plutocrats: The Rise of the New Global Super-Rich pg 46:

Carlos Slim, who studied engineering in college and taught algebra and linear programming as an undergraduate, attributes his fortune to his facility with numbers. So does Steve Schwarzman, who told me he owed his success to his “ability to see patterns that other people don’t see” in large collections of numbers. People inside the super-elite think the rise of the data geeks is just beginning. Elliot Schrage is a member of the tech aristocracy—he was the communications director for Google when it was the hottest company in the Valley and jumped to the same role at Facebook just as it was becoming a behemoth. At a 2009 talk he gave to an internal company meeting of education and publishing executives, Schrage was asked what field we should encourage our children to study. His instant answer was statistics, because the ability to understand data would be the most powerful skill in the twenty-first century.

How does this intersect with the (purported) rise of the data scientist as the ‘sexiest job of the 21st century’?

This insightful article paints a worrying picture of the growth of data-driven policing. The technical challenge of “building nuance” into data systems “is far harder than it seems” and has important practical implications for how interventions operate on the basis of digital data. What I hadn’t previously realised was how readily investigators are using social media on their own initiative, above and beyond the systems that are being put into place with the help of outside consultancies: only 9% of police using social media in investigations had received training from their agency. Furthermore, the discussion of the life span of data raised some really interesting (and worrying) questions about the organisational sociology of data-driven policing, given what seems likely to be increasing involvement of the private sector in policing in the UK:

For the kid listed in a gang database, it can be unclear how to get out of it. In the world of human interaction, we accept change through behavior: the addict can redeem himself by getting clean, or the habitual interrupter can redeem himself by not interrupting. We accept behavior change. But in the database world, unless someone has permission to delete or amend a database record, no such change is possible. Credit agencies are required to forgive financial sins after 7 years. Police are not—at least, not consistently. The National Gang Center, in its list of gang-related legislation, shows only 12 states with policies that specifically address gang databases. Most deny the public access to the information in these databases. Only a few of these twelve mention regular purging of information, and some specifically say that a person cannot even find out if they have a record in the database.

This permanence does not necessarily match real-world conditions. Kids cycle in and out of street gangs the way they cycle in and out of any other social group, and many young men age out of violent behavior. Regularly purging the gang database, perhaps on a one-year or two-year cycle, would allow some measure of computational forgiveness. However, few institutions are good at keeping the data in their databases up-to-date. (If you’ve ever been served an ad for a product you just bought, you’re familiar with this problem of information persistence and the clumsiness of predictive algorithms.) The police are no worse and no better than the rest of us. Criminologist Charles Katz found that despite a written department policy in one large Midwestern police gang unit, data was not regularly audited or purged. “The last time that the gang unit purged its files, however, was in 1993—approximately 4 years before this study was conducted,” he wrote. “One clerk who is responsible for data entry and dissemination estimated, ‘At a minimum, 400 to 500 gang members would be deleted off the gang list today if we went through the files.’ Accordingly, Junction City’s gang list of 2,086 gang members was inflated by approximately 20% to 25%.”
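The “computational forgiveness” of a regular purge cycle described above is simple to sketch. Below is a minimal, purely illustrative Python example: the record format (name plus the date a gang affiliation was last confirmed) and the two-year retention window are my own assumptions for illustration, not drawn from any real police database schema.

```python
from datetime import datetime, timedelta

# Assumed retention window: the two-year purge cycle suggested in the quote.
RETENTION = timedelta(days=2 * 365)

def purge(records, today):
    """Keep only records confirmed within the retention window.

    Each record is a hypothetical (name, last_confirmed) pair.
    Anything older than RETENTION is 'forgiven', i.e. dropped.
    """
    return [(name, seen) for name, seen in records
            if today - seen <= RETENTION]

records = [
    ("A", datetime(2015, 6, 1)),   # recently confirmed: retained
    ("B", datetime(2012, 1, 1)),   # stale: purged
]
kept = purge(records, today=datetime(2016, 1, 1))
print(len(kept))  # → 1
```

The point the article makes is precisely that the hard part is not this trivial filter but the organisational commitment to run it: an audit schedule, the authority to amend records, and a definition of “last confirmed” that is actually kept up to date.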


This suggests to me that any adequate evaluation of data-driven policing needs to take questions of organisational sociology and information technology extremely seriously. What matters is not just the formulation of data management policies but what we know about how such policies tend to be implemented under the specific conditions likely to obtain in policing. Given the broader trend towards the privatisation of policing, it is increasingly important that we understand how sharing of data operates across organisational boundaries, how it is prepared and how it is perceived by end-users.

My fear is that a form of inter-organisational ‘black-boxing’ could kick in where those utilising the data for interventions trust that others have elsewhere taken responsibility for ensuring its reliability. What scrutiny would the operations of outside suppliers be subject to? Could privatisation intensify the rush towards data-driven policing in the name of efficiency savings? Would a corresponding centralisation of back-office functions compound the aforementioned epistemological risks entailed by outsourcing? These are all urgent questions which could easily be marginalised as budgetary constraint drives ‘innovation’ in policing: data-driven policing and privatised policing will likely go hand-in-hand and we need to analyse them as such.

The Politics of Data (Science)

This special issue of Discover Society will explore the political implications of ‘big data’ and the systems of expertise emerging around it, including though not limited to Data Science. In doing so it will aim to bridge the gap between the methodological discourse surrounding data science and the political discourse beginning to emerge around ‘big data’. Here are some of the questions the issue will address:

– How is ‘big data’ understood and acted upon? How should we understand its cultural power?
– How is ‘big data’ reconfiguring the social sciences? Do we risk all science becoming data science?
– How and why has the ‘data scientist’ come to be seen as the ‘sexiest job of the 21st century’?
– Is the ‘data scientist’ just a ‘statistician who lives in Shoreditch’? Or is this a genuinely new intellectual role?
– Can ‘big data’ address ‘big questions’? If not, is this a problem?
– What are the precursors of ‘data science’ within the academy and/or within corporations?
– What implications does corporate data science have for the relationship between corporations & consumers?
– What implications does national security data science have for the relationship between the state & citizens?
– Can the use of digital data lead to efficiency savings in public services? How does this relate to the politics of austerity?
– How could predictive privacy harms emerging from data analytics be addressed politically?
– Can the opacity of algorithmic processes be challenged? Or are we heading inexorably for a ‘black-box society’?
– How are new forms of digital data reconfiguring activity in particular social environments?

However, these are just suggestions, and ideas beyond the scope of this list are very welcome.

The deadline for contributions is June 15th. Contact mark@markcarrigan.net to discuss a potential contribution.

The articles will constitute the July issue of Discover Society. Most articles will be 1500 words; however, there are a number of special sections in the online magazine.

Front line – 1500 words
View point – 1500 words
Policy briefing – 1500-2000 words

If you would be interested in writing one of these thematic sections, please get in touch asap.

The issue will follow the usual formatting guidelines of Discover Society. Please consult the notes for contributors.

This looks really interesting – if I wasn’t drowning under the weight of existing writing commitments, I’d love to try and write something for the final topic suggestion:

Call for papers for a special issue of IEEE Internet Computing: http://www.computer.org/portal/web/computingnow/iccfp6

Internet of You: Data Big and Small

Final submissions due:  1 March 2015
Publication issue:  November/December 2015

Please email the guest editors a brief description of the article you plan to submit by 1 February 2015.
Guest editors: Deborah Estrin and Craig Thompson (ic6-2015@computer.org).

We are at a great divide. Where our ancestors left behind few records, we are creating and preserving increasingly complete digital traces and models of almost every aspect of our lives. This special issue of IEEE Internet Computing aims to explore technologies and issues from small user-centric models of individuals to real-time analytics on huge aggregations of user data. At present, some are aspiring to create immortal avatars by letting you record everything about yourself and convert it into a model that’s queriable, conversant, and possibly even active in gaining new experiences for itself. Meanwhile, others are equally concerned with stemming the tide of third-party data aggregation of individual models to mitigate risks that can evolve from this kind of near total information awareness.

This special issue seeks original articles that explore both small data (individual-scale data sources, processing, and modeling) and big data (community level aggregation and analytics). Topics include

  • diverse data sources and digital traces, including email, Facebook, financial, health, location, images, sound, consumer transactions, and interests;
  • methods to combine trace data into complete models; data liberation; kinds of user models, such as the physical self, memories, aspect-limited versus comprehensive models; and data quality, including managing history, change, comprehensiveness, and accuracy;
  • methods to aggregate and process heterogeneous data sets, stages of life ontologies, the scope and purpose of these data collections, available public data sources;
  • usage models for experience sampling — proximity, context, activity sensing, quantified self, situation-aware modeling, activities of daily living, my immortal avatar, workflows, and pattern learning;
  • representation technologies such as agents, smartphones, wearable computing, personal sensing networks, pattern representation and adaptation, and natural language;
  • new kinds of applications that draw insights from data analytics, including recommendation systems, personalized health, real-time marketing, and predicting elections from Twitter feeds;
  • open architectures for personalization, the role of cloud computing, relevant emerging standards;
  • concerns regarding privacy and surveillance, the extent of privacy erosion, taxonomy of privacy threats, and incentives and disincentives for sharing, the right to forget, and status of legal safeguards;
  • privacy and security technology safeguards, including identity management, disclosure control, privacy-preserving data mining, de-identification, new security models, mechanisms that audit and control personal information flows and usage; and
  • social and philosophical implications for humans’ conception of self. Is there a natural boundary between user models and world models?

Submission Guidelines

All submissions must be original manuscripts of fewer than 5,000 words, focused on Internet technologies and implementations. All manuscripts are subject to peer review on both technical merit and relevance to IC’s international readership — primarily practicing engineers and academics who are looking for material that introduces new technology and broadens familiarity with current topics. We do not accept white papers, and we discourage strictly theoretical or mathematical papers. To submit a manuscript, please log on to ScholarOne (https://mc.manuscriptcentral.com:443/ic-cs) to create or access an account, which you can use to log on to IC’s Author Center (http://www.computer.org/portal/web/peerreviewmagazines/acinternet) and upload your submission.

An absolute must-read essay in the Guardian by the novelist John Lanchester, who was given access to the GCHQ files by the paper:

The totalitarian state in Orwell’s Nineteen Eighty-Four would need no broader legal justification than that: it really does allow a government to do anything it likes. It was at this point that I became convinced that Snowden’s revelations are not just interesting or important but vital, because the state is about to get powers that no state has ever had, and we need to have a public debate about those powers and what their limits are to be.

Lanchester on reading the Snowden files:

At a moment of austerity and with a general sense that our state’s ability to guarantee prosperity for its citizens is in retreat, that same state is about to make the biggest advance ever in its security powers. In public, the state is shrinking; in private, it is shrinking until it gets just small enough to fit into our phones, our computers, our cars, our fridges, our bedrooms, our thoughts and intentions.


People misunderstand what a police state is. It isn’t a country where the police strut around in jackboots; it’s a country where the police can do anything they like. Similarly, a security state is one in which the security establishment can do anything it likes.

We are right on the verge of being an entirely new kind of human society, one involving an unprecedented penetration by the state into areas which have always been regarded as private. Do we agree to that? If we don’t, this is the last chance to stop it happening. Our rulers will say what all rulers everywhere have always said: that their intentions are good, and we can trust them. They want that to be a sufficient guarantee.