the cognitive costs of escaping the filter bubble

Yesterday saw the news that ‘infidelity site’ Ashley Madison had been hacked, with the attackers claiming 37 million records had been stolen. The site is an online forum for infidelity, a dating site explicitly designed to facilitate affairs, something which may have provoked the ire of the hackers. Or it could be the fact that users are charged a £15 fee to permanently delete their records from the site, the efficacy of which the hackers dispute. This seems indicative of a broader trend: the Electronic Frontier Foundation found that dating sites as a whole had failed to implement basic security procedures and were near uniformly vague or silent about whether user data was deleted after the closure of an account.

This is a specific instance of a much broader category of problem which I’ve been thinking a lot about recently: escaping the filter bubble. I use this concept in a much broader sense than Eli Pariser’s original use in his (excellent) book. I see filter bubbles as a matter of algorithmic enclosure but also of information security. In fact I would argue that the former inevitably poses questions for the latter, because filter bubbles rest upon the collection of personal information and intervention on that basis. Filter bubbles always pose questions of information security because environments designed around them are always information-hungry, and mechanisms of personalisation inevitably introduce opacity into interactions between users and a system in an asymmetric way. But I’d like to expand the concept of the filter bubble to encompass the entire informational environment in which we increasingly find ourselves deliberately enclosed through our use of digital technology. Not all of this is applied algorithmically, but I would argue, somewhat crudely, that we can talk about greater or lesser tracts of everyday life being lived, via digital mediation, in a filter bubble characterised by varying degrees of enclosure.

What interests me are experiences where we don’t realise we’re in a filter bubble. The questions of information security don’t occur to us. We live with ontological security, sufficiently comfortable with this technology (something which personalisation itself can contribute to) to act ‘as if’ the filter bubble doesn’t create risks for us. Will Davies offers an analogy which captures this effectively:

I have a memory from childhood, a happy memory — one of complete trust and comfort. It’s dark, and I’m kneeling in the tiny floor area of the back seat of a car, resting my head on the seat. I’m perhaps six years old. I look upward to the window, through which I can see streetlights and buildings rushing by in a foreign town whose name and location I’m completely unaware of. In the front seats sit my parents, and in front of them, the warm yellow and red glow of the dashboard, with my dad at the steering wheel.

Contrary to the sentiment of so many ads and products, this memory reminds me that dependence can be a source of deep, almost visceral pleasure: to know nothing of where one is going, to have no responsibility for how one gets there or the risks involved. I must have knelt on the floor of the car backward to further increase that feeling of powerlessness as I stared up at the passing lights.

http://thenewinquiry.com/essays/the-data-sublime/

But when this ontological security is punctured, we can see risks everywhere. What are people doing with our data? What could they be doing with our data? How are our online environments manipulating us? I’m interested in using ontological security as a conceptual frame through which to understand the urge to escape the filter bubble on a psychoanalytical level. As I develop this line of argument, I need to make the exact sense of the underlying concept clearer, but leaving that aside for now, I think it offers a really interesting frame for exploration. Here are the propositions I’m going to come back to and develop further:

  1. We are enmeshed within a filter bubble through our everyday use of digital technology
  2. The filter bubble is deliberately designed, indeed redesigned sometimes on an hour-to-hour basis, driven by complex and opaque interests
  3. Our orientation towards the filter bubble is extremely variable, even over time in one life, let alone between people

But for now what I’m interested in is how we escape the filter bubble. When we see the endemic risks, when the reassuring cocoon of ontological security recedes, what do we do? The problem is that not everyone is equally well positioned to escape the filter bubble: it necessitates technical knowledge, time and energy. Some people don’t care but know what to do. Some people do care but don’t know what to do. Most people fall between these two poles at different points in relation to specific issues. What I’m interested in is how any definite attempt to escape the filter bubble leads to an intensification of cognitive burdens at a time of endemic acceleration. If everyone feels rushed, how does the urge to escape the filter bubble contribute to that experience, constituting just one more thing to worry about? How does this in turn contribute to the problem of what I’ve elsewhere described as cognitive triage? I can imagine an emerging profession, the consultant digital escapologist, paid to help the cash-rich but time-poor manage their information security.