A lovely passage from Lisa Gitelman, at Loc 78 of her edited collection “Raw Data” Is an Oxymoron, about the difficulty of going ‘off grid’ when the utilities of daily life bind us into a digital cage:

Try to spend a day “off the grid” and you’d better leave your credit and debit cards, transit pass, school or work ID, passport, and cell phone at home—basically, anything with a barcode, magnetic strip, RFID, or GPS receiver. 

An absolutely fascinating account of developments in the news feed algorithm at Facebook since its introduction:

Adam Mosseri, Facebook’s 32-year-old director of product for news feed, is Alison’s less technical counterpart—a “fuzzie” rather than a “techie,” in Silicon Valley parlance. He traffics in problems and generalities, where Alison deals in solutions and specifics. He’s the news feed’s resident philosopher.

The push to humanize the news feed’s inputs and outputs began under Mosseri’s predecessor, Will Cathcart. (I wrote about several of those innovations here.) Cathcart started by gathering more subtle forms of behavioral data: not just whether someone clicked, but how long he spent reading a story once he clicked on it; not just whether he liked it, but whether he liked it before or after reading. For instance: Liking a post before you’ve read it, Facebook learned, corresponds much more weakly to your actual sentiment than liking it afterward.

After taking the reins in late 2013, Mosseri’s big initiative was to set up what Facebook calls its “feed quality panel.” It began in summer 2014 as a group of several hundred people in Knoxville whom the company paid to come in to an office every day and provide continual, detailed feedback on what they saw in their news feeds. (Their location was, Facebook says, a “historical accident” that grew out of a pilot project in which the company partnered with an unnamed third-party subcontractor.) Mosseri and his team didn’t just study their behavior. They also asked them questions to try to get at why they liked or didn’t like a given post, how much they liked it, and what they would have preferred to see instead. “They actually write a little paragraph about every story in their news feed,” notes Greg Marra, product manager for the news feed ranking team. (This is the group that’s becoming Facebook’s equivalent of Nielsen families.)

“The question was, ‘What might we be missing?’ ” Mosseri says. “‘Do we have any blind spots?’” For instance, he adds, “We know there are some things you see in your feed that you loved and you were excited about, but you didn’t actually interact with.” Without a way to measure that, the algorithm would devalue such posts in favor of others that lend themselves more naturally to likes and clicks. But what signal could Facebook use to capture that information?

http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.single.html
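To make the idea of weighting these behavioural signals a little more concrete, here is a minimal sketch of how signals like dwell time and pre- versus post-read likes might be combined into a ranking score. This is purely illustrative and not Facebook’s actual model: the signal names, weights and thresholds are all invented.

```python
# Illustrative only: a toy weighted sum over behavioural signals of the kind
# described in the article. All names and weights here are invented.

from dataclasses import dataclass

@dataclass
class PostSignals:
    clicked: bool               # did the user click through to the story?
    seconds_reading: float      # dwell time after the click
    liked: bool                 # did the user like the post?
    liked_after_reading: bool   # the like came after reading, not before

def rank_score(s: PostSignals) -> float:
    """Toy ranking score reflecting the idea that a like given *after*
    reading is a stronger indicator of genuine interest than one given before."""
    score = 0.0
    if s.clicked:
        score += 1.0
        # Reward sustained reading, capped so long dwell times don't dominate.
        score += min(s.seconds_reading, 120.0) / 120.0
    if s.liked:
        # Invented weighting: post-read likes count for more than pre-read likes.
        score += 2.0 if s.liked_after_reading else 0.5
    return score

# Example: a post the user read for a minute and then liked.
print(rank_score(PostSignals(clicked=True, seconds_reading=60.0,
                             liked=True, liked_after_reading=True)))
```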

Useful account of the role of ‘lead generators’ in creating ‘distinct digital-advertising landscapes’ with significant socio-economic ramifications. The filter bubble isn’t just a matter of cultural constraint:

As the big piles of data online continue to grow, these issues will become more pronounced. Information filters that control what version of the Internet a person sees are calibrated based on how much money various algorithms think you have. Which means distinct digital-advertising landscapes are increasingly drawn on socioeconomic lines.

The effect may be a more pleasant online experience for someone who is perceived to have more income. In the same way that startups have put a premium on cutting out human interaction for those who can afford it, adlessness can be a luxury for those who choose to buy ad blockers so their webpages load faster. But distinct ad landscapes aren’t just about seeing more elegant corporate messages, or encountering fewer pop-up ads—or even none at all. Companies and individuals are working together to target consumers on a personal level, to use their most vulnerable Google searches against them.

http://www.theatlantic.com/technology/archive/2015/11/google-searches-privacy-danger/413614/?utm_source=pocket&utm_medium=email&utm_campaign=pockethits
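To make the mechanism gestured at above slightly more concrete, here is a minimal toy sketch of income-calibrated ad selection. It is purely illustrative: the income-estimation signals, thresholds and ad categories are all invented, and no real ad platform’s systems are implied.

```python
# Toy illustration of the calibration the article describes: the ads a user
# sees depend on what the system *infers* about their income. All signals,
# weights and ad categories below are invented for illustration.

def estimate_income_score(signals: dict) -> float:
    """Crude inferred-income proxy from browsing/search signals (invented weights)."""
    score = 0.0
    score += 2.0 * signals.get("luxury_brand_visits", 0)
    score += 1.0 * signals.get("premium_device", 0)
    score -= 2.0 * signals.get("payday_loan_searches", 0)
    return score

def select_ad_landscape(signals: dict) -> str:
    """Map the inferred score onto a distinct 'advertising landscape'."""
    score = estimate_income_score(signals)
    if score >= 3:
        return "Boutique travel and ad-light premium placements"
    elif score >= 0:
        return "Mid-range retail offers"
    else:
        # The 'vulnerable searches used against them' scenario.
        return "High-interest loan and lead-generator ads"

print(select_ad_landscape({"payday_loan_searches": 2}))                      # low inferred income
print(select_ad_landscape({"luxury_brand_visits": 2, "premium_device": 1}))  # high inferred income
```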

As the article suggests, this initiative may be a response to the threat posed by Apple Music. What interests me is how totally open-ended this is: how do we perceive and evaluate risks when policies take such a form?

Sections 3.3 and 3.4 of Spotify’s privacy policy say that the app will now collect much more data about its users.

The section ‘Information Stored on Your Mobile Device’ says: “With your permission, we may collect information stored on your mobile device, such as contacts, photos or media files.”

In the next section, it says: “We may also collect information about your location based on, for example, your phone’s GPS location or other forms of locating mobile devices (e.g Bluetooth).”

http://www.independent.co.uk/life-style/gadgets-and-tech/news/spotify-has-announced-a-creepy-new-privacy-policy-and-people-are-worried-10464129.html

Yesterday saw the news that ‘infidelity site’ Ashley Madison had been hacked, with the attackers claiming to have stolen 37 million records. The site is an online forum for infidelity, a dating site explicitly designed to facilitate affairs, something which may have provoked the hackers’ ire. Or the provocation may have been the fact that users are charged a fee of £15 to permanently delete their records from the site, the efficacy of which the hackers dispute. This seems indicative of a broader trend: the Electronic Frontier Foundation found that dating sites as a whole had failed to implement basic security procedures and were almost uniformly vague or silent about whether user data was deleted after an account was closed.

This is a specific instance of a much broader category of problem which I’ve been thinking a lot about recently: escaping the filter bubble. I use this concept in a much broader sense than Eli Pariser’s original use in his (excellent) book. I see filter bubbles as a matter of algorithmic enclosure but also of information security. In fact I would argue that the former inevitably poses questions for the latter, because filter bubbles rest upon the collection of personal information and intervention on that basis. Environments designed around filter bubbles are always information-hungry, and mechanisms of personalisation inevitably introduce opacity into interactions between users and a system in an asymmetric way. But I’d like to expand the concept of the filter bubble to encompass the entire informational environment in which we increasingly find ourselves deliberately enclosed through our use of digital technology. Not all of this is applied algorithmically, but I would argue, somewhat crudely, that we can talk about greater or lesser tracts of everyday life being lived, via digital mediation, in a filter bubble characterised by varying degrees of enclosure.

What interests me are the experiences where we don’t realise we’re in a filter bubble. The questions of information security don’t arise. We live with ontological security, sufficiently comfortable with this technology (a comfort which personalisation can itself contribute to) to act ‘as if’ the filter bubble doesn’t create risks for us. Will Davies offers an analogy which captures this effectively:

I have a memory from childhood, a happy memory — one of complete trust and comfort. It’s dark, and I’m kneeling in the tiny floor area of the back seat of a car, resting my head on the seat. I’m perhaps six years old. I look upward to the window, through which I can see streetlights and buildings rushing by in a foreign town whose name and location I’m completely unaware of. In the front seats sit my parents, and in front of them, the warm yellow and red glow of the dashboard, with my dad at the steering wheel.

Contrary to the sentiment of so many ads and products, this memory reminds me that dependence can be a source of deep, almost visceral pleasure: to know nothing of where one is going, to have no responsibility for how one gets there or the risks involved. I must have knelt on the floor of the car backward to further increase that feeling of powerlessness as I stared up at the passing lights.

http://thenewinquiry.com/essays/the-data-sublime/

But when this ontological security is punctured, we can see risks everywhere. What are people doing with our data? What could they be doing with our data? How are our online environments manipulating us? I’m interested in using ontological security as a conceptual frame through which to understand, on a psychoanalytical level, the urge to escape the filter bubble. As I develop this line of argument I need to make the exact sense of the underlying concept clearer, but leaving that aside for now, I think it offers a really interesting frame for exploration. Here are the propositions I’m going to come back to and develop further:

  1. We are enmeshed within a filter bubble through our everyday use of digital technology
  2. The filter bubble is deliberately designed, indeed redesigned on a sometimes hour-to-hour basis, driven by complex and opaque interests
  3. Our orientation towards the filter bubble is extremely variable, even over time in one life, let alone between people

But for now what I’m interested in is how we escape the filter bubble. When we see the endemic risks, when the reassuring cocoon of ontological security recedes, what do we do? The problem is that not everyone is equally well positioned to escape the filter bubble: it necessitates technical knowledge, time and energy. Some people don’t care but know what to do. Some people do care but don’t know what to do. Most people fall between these two poles at different points in relation to specific issues. What interests me is how any definite attempt to escape the filter bubble leads to an intensification of cognitive burdens at a time of endemic acceleration. If everyone feels rushed, how does the urge to escape the filter bubble contribute to that experience, constituting just one more thing to worry about? How does this in turn contribute to the problem of what I’ve elsewhere described as cognitive triage? I can imagine an emerging profession, the consultant digital escapologist, paid to help the cash-rich but time-poor manage their information security.

In this RSA talk the pioneering online campaigner Eli Pariser talks about a crucial and, as yet, under-discussed danger facing the social media web: the expansion of filtering into every aspect of our online activity. Sites collect data on usage patterns, particularly our reactions to being presented with content and the action (e.g. ‘like’, ‘share’, ‘+1’) we take in response to what we see. Without collecting such data, any possibility of a semantic web is immediately foreclosed, because human meaning has to enter the processing system somewhere. Yet the sheer opacity with which these technologies are being developed, let alone how they are being implemented on the web, demands urgent political debate.

However it would be easy to be alarmist about this and throw the baby out with the bathwater. The problem is not filtering per se but rather the private and opaque nature of this filtering. In so far as the development and roll-out of the technology relies on the corporate structures of capitalism, it’s difficult to avoid that privateness entirely. But the demand shouldn’t be for liberation from the filter bubble these corporations have placed us in – it should be for them to make their technology available to us, so that we can design and implement our own filter bubbles as part of our ongoing day-to-day interactions with the internet, driven by our awareness of what we do and do not want to see. Certainly the computational systems they’ve developed allow us to see connections we might not be consciously aware of: I’ve come across rafts of fascinating reading through following Amazon’s ‘other customers who bought this also bought’ system. But this should be an opt-in system, rather than something imposed upon us. It could be argued that there are political problems inherent in this as well – as Cass Sunstein plausibly argues in his Republic.com 2.0 – given the possibility that already politically divided societies are likely to become ever more polarised when individuals self-select all the content they encounter.
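To make the idea of user-designed, opt-in filtering more concrete, here is a minimal sketch of a filter whose rules are written and inspectable by the user rather than imposed opaquely. Nothing here corresponds to any existing platform’s API; the rule format, field names and example sources are invented for illustration.

```python
# Illustrative sketch of an opt-in, user-authored content filter: the rules
# are explicit, inspectable and off by default. All names are invented.

from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    source: str
    tags: set[str] = field(default_factory=set)

@dataclass
class UserFilter:
    """Rules the user writes and can inspect, rather than an opaque ranking."""
    blocked_sources: set[str] = field(default_factory=set)
    preferred_tags: set[str] = field(default_factory=set)
    enabled: bool = False  # opt-in: filtering is off unless the user turns it on

    def apply(self, items: list[Item]) -> list[Item]:
        if not self.enabled:
            return items  # no silent personalisation
        kept = [i for i in items if i.source not in self.blocked_sources]
        # Surface preferred topics first, but don't hide the rest.
        return sorted(kept, key=lambda i: -len(i.tags & self.preferred_tags))

# Example usage with invented sources and tags:
feed = [Item("Quarterly ad report", "adnetwork.example", {"advertising"}),
        Item("New paper on digital sociology", "blog.example", {"sociology"})]
my_filter = UserFilter(blocked_sources={"adnetwork.example"},
                       preferred_tags={"sociology"}, enabled=True)
print([i.title for i in my_filter.apply(feed)])
```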

However, firstly, filtering is necessary if we’re going to have any possibility of engaging productively and creatively with modern digital technology, simply because of the exponential growth of content which goes hand-in-hand with the mass uptake of social media tools. Secondly, the problems attached to it are contingent and emergent (i.e. they result from people in practice doing this filtering badly, often for reasons not of their own making) rather than being intrinsic to filtering itself. Thirdly, the sheer cultural value of web 2.0 demands new proficiencies on the part of its users: we can either retreat from information overload (see the growing trend for going offline), protectively lock ourselves into virtual bubbles of our own making, or stay passively within the corporate infosphere*, OR we can embrace the challenges that come from this revolution in human communication, using the tools available to us to dialogically develop a dynamic filtering orientation as we negotiate an ongoing path through human culture in the 21st century.

*Which I think is the main concern arising from the filter bubble as it presently stands.

Originally posted on Sociological Imagination