In an interesting chapter Frederic Vandenberghe explores the role of the individual in Bourdieu’s sociology, as well as the critiques which Margaret Archer and Bernard Lahire make of it. His intention is to respond to a sociology he sees as hegemonic by developing a post-Bourdieusian theory of the social world that is not anti-Bourdieusian. His project, as I understand it, derives from a sense that Bourdieu’s sheer influence is distortive, polarising debate in a way that steers it away from concern with better or worse sociology towards more or less accurate interpretations of the master.

How accurate is Vandenberghe’s account of Bourdieu’s influence? His 536,230 citations certainly offer quantitative evidence of this influence, but the claim that Bourdieu’s sociology is hegemonic seems more contentious to me. Nonetheless, he’s surely correct that the combination of its influence, diffusion and systematicity makes it a force to be reckoned with. Or rather a force that must be reckoned with, a reference point that is difficult, if not impossible, to ignore.

Both Archer and Lahire were deeply influenced by Bourdieu. My interview with her here explores his influence on her thinking, as well as her time working with him as a post-doc in the early 60s. Meanwhile, as Vandenberghe puts it, Lahire’s sociology is so “thoroughly Bourdieusian that he could well be considered the heterodox successor to the master (Loïc Wacquant being the official one)”. Both have worked at the intersection of sociology and psychology in recent years, with Lahire taking inspiration from Durkheim while Archer has looked to American pragmatism for intellectual resources. Vandenberghe argues that their work represents a social psychology of a new kind: orientated to “how groups, large and small, behave within the individual mind” rather than “how individuals behave in small groups”. Their shared unit of analysis is the life, understood biographically, as a movement through the world constituted through choices. But the dissimilarity arises because Archer’s focus concerns how future projects shape present actions, whereas Lahire explains the present and the future in terms of past “dispositions and their activation in particular contexts in the present”. As Vandenberghe puts it, “His actors are pushed by their dispositions, while hers are pulled forward by their projects”.

From Vandenberghe’s exposition, it seems that Lahire’s critique of the concept of habitus resembles Archer’s in some ways: he “accuses Bourdieu of abusively generalising a particular model that only holds in exceptional situations (such as traditional societies and total institutions)”. He makes the same critique of the concept of field, “accusing Bourdieu of transforming a regional model into a general theory of the social world”. Instead he offers an account of the individual as “like a crumpled sheet or a rumpled rag”, with social space in all its dimensions unevenly folded up inside of them. Not unlike Archer, he sees what Bourdieu regarded as a marginal condition (the cleavage of the habitus) to instead be a general characteristic, at least under certain social and cultural conditions.

His exposition of Archer is excellent, unsurprisingly, as he is one of the theorists most deeply conversant with her body of work as a whole. The slight exception to this is the latent teleology he reads into the concept of reflexivity, ignoring the extent to which we all practice each of these modes to varying degrees in everyday life. Oddly, he offers precisely this recognition as a suggestion of how her account of reflexivity can be improved, with his accusation of a “kind of disguised personality test” being an incisive critique of how her work on reflexivity is chronically misread, even by its advocates.

I agree with him, however, that Archer downplays the role of cultural structures, seeing them as something which “structures the situation from outside, not from inside in the form of subconscious schemes of perception, judgement and interpretation that prestructure the world and canalize action, excluding some options even before the actor becomes conscious of the situation”. His suggestion that we investigate empirically how the relative balance of reflexivity and disposition operates in particular action situations is one I find extremely plausible, perhaps demanding methods other than the interview, as well as overcoming the relative neglect of situated embodied action within Archer’s work.

It’s an interesting chapter which I highly recommend. It’s left me wanting to return to my PhD, as well as to investigate Lahire in greater depth. It strikes me that I’ve actually done something akin to what Vandenberghe advocates, synthesising Archer and Lahire, without actually having read Lahire. My curiosity demands that I establish whether or not this is the case.

In our discussion of metrics systems, it’s easy to treat subjectivity as a cipher, regarding people as passively moulded by algorithms or blindly governed by the incentives that operate through the institutionalisation of the metrics. My objection to the former is not the claim that people are shaped by metrics, but rather the assumption that this process is basically passive. My interest is in how metrics come to matter to us. How are people shaped over time? How do their biographically accumulating dispositions and concerns influence the actions they take over time? How do these feed back into the metrics system and the organisations within which they are institutionalised?

The fictional portrayals that are starting to emerge of this – novels like Super Sad True Love Story, The Circle and Whiskey Tango Foxtrot, films like Nerve – often struggle to represent this engaged subjectivity because the imperatives of effective storytelling militate against it. What we really need is a novel or film that explores metricisation through the internal monologue of what I imagine would turn out to be an unreliable narrator.

Notes for my talk for the Reflexivity Forum at Warwick on May 24th

What does it mean to be distracted? For the last year, I’ve been telling people that I’m working on a new project about digital distraction and everyone seems to immediately grasp what I mean by this. But conceptualising precisely what we should take ‘distraction’ to mean is slightly more complex than I realised at the outset of the project. The dictionary offers a good starting point, with two definitions:

  1. a thing that prevents someone from concentrating on something else
  2. extreme agitation of the mind

Looking at these definitions, it’s easy to infer a causal relation between the phenomena they designate: we might assume that (1), if encountered to a sufficient degree under conducive circumstances, leads to (2) through sheer accumulation of distraction. In other words: lots of distractions lead to distractedness.

In a recent piece of work, I tried to analyse the rise of (1) in terms of constant connectivity. Interruptions have always been part of human experience, insofar as there are always contingencies which might emerge to disrupt an activity in process. But the ‘triple revolution’ of mobile computing, wireless internet access and social networks has contributed to a proliferation of interruptions, as have the second-order effects when this multiplication of communication channels leads to the qualitative and quantitative escalation of communication, e.g. people trying multiple means to contact someone in the absence of governing norms about appropriateness, or strategic communications that seek to shock and surprise in order to be heard above the din.

Analysed in this sense, talk of interruptions leads rather inevitably to the consideration of reflexivity. What does it mean to ‘prevent someone from concentrating on something else’? It means there was something else they were trying to do and the external event, which we label as a distraction (1), has interrupted their action towards this end. Distraction needs to be conceived of as relational: there is the distracting object, but it only has this power in relation to an existing activity undertaken under conditions that leave someone susceptible to being distracted.

What we’re being distracted from might have been routine action, e.g. I get a phone call when making a cup of coffee, but the very act of interruption engenders an awareness of that from which we were interrupted. Consider a distraction (1) significant enough to completely disrupt our previous action: when we ask ourselves “now what was I doing before he phoned?”, this is an incitement to reflexivity, albeit one that reflects a prior failure thereof. So rather than seeing distraction (1) and reflexivity as antithetical, we have to recognise a more complex relationship between them. Distractions impede reflexivity but also highlight it. Persistent distractions engender reflexivity, when we recognise something as a ‘problem’ and begin to ask what it is we might do about it.

It’s for this reason that I don’t think we should consider distracted people as somehow a-reflexive people. Distracted people are those who live within a socio-technical environment sufficiently productive of distraction (1) that we might talk of them as being characterised by distraction (2): it’s an ‘agitation of the mind’, rather than an absence of reflexivity, a difficulty articulating and sustaining courses of action rather than a lack of capacity to reach conclusions about what a desirable course might be. Distraction is something which operates on a number of levels simultaneously:

  1. A distracting environment renders time and space for reflexivity unlikely: the conditions for internal conversation are often not in place and where they are, they’re unlikely to last.
  2. A distracting environment supplies more stimuli about potential courses of action and potential projects: under these conditions, ‘bounding’ variety becomes increasingly difficult, rendering internal conversation more necessary than ever.
  3. A distracting environment militates against sustained trajectories of action, because interruptions to action become more likely (with the cognitive costs they entail) as do interruptions to reflexivity exercised about those actions.

Distracting environments are characterised by the proliferation of distractions, but the causality of how this leads to distractedness is more complex than I initially realised, and I’m still trying to clarify my views on this.

As you may know, executive coaching is an increasingly common phenomenon, particularly in some sectors like tech. This is how Eric Schmidt and his co-author describe the necessity of it in How Google Works loc 2440:

Whenever you watch a world-class athlete perform, you can be sure that there is a great coach behind her success. It’s not that the coach is better at playing the sport than the player, in fact that is almost never the case. But the coaches have a different skill: They can observe players in action and tell them how to be better. So why is it that in the business world coaches are so unusual? Are we all like Eric when he started at Google, so confident of ourselves that we can’t imagine someone helping us to be better? If so, this is a fallacy. 

As a business leader, you need a coach. The first ingredient of a successful coaching relationship is a student who is willing to listen and learn. Just like there are hard-to-coach athletes, there are hard-to-coach executives. But once they get past that initial reticence, they find there are always things to learn. Business coaches, like all coaches, are at heart teachers, and Bill Campbell, the best coach around, tells us he believes that management is a skill that is completely learnable.

This is something which suggests an obvious comparison to sports, not just in terms of the language used to describe this relationship. James Surowiecki, author of Wisdom of Crowds, draws out the connection in an interesting essay about the increasing competitive advantage accrued when performance is already at a top level:

The key part of the “performance revolution” in sports, then, is the story of how organizations, in a systematic way, set about making employees more effective and productive. This, as it happens, is something that other organizations started doing around the same timeline.

But can managerial performance really be measured in these terms? I don’t think it can, and the belief to the contrary strikes me as a revealing conceit, one which reflects interestingly on the culture of managerialism: a kind of moral athleticism amongst prominent CEOs in which they aspire to be all that they can be.

If we look at the same phenomenon further down the organisational ladder, we get to enforced performance reviews and sanctions ensuing from a failure to meet imposed expectations. We get to sleepless nights and diffuse anxiety saturating everyday life, all generated by concerns over ‘performance’. Coaching still exists but it becomes a very different phenomenon, as this interview I did about the sociology of work-life coaching suggests:

Coaching usually consists of individual or group meetings that continue for a few months. In the beginning of these meetings, a goal is set for the whole coaching process, and then the process continues with for example personality tests or exercises that the clients do in order to achieve the set goal. The coaches that I interviewed were often a bit vague in their answers when I asked about the specific practices of coaching. They would rather talk about ‘realising the inner potential of the individual’, though what this means specifically is rather unclear.

In general, it seems that coaching is for most part about discussing one’s hopes and realities with the coach and getting feedback for both the exercises and tests and for the plans that one has and the actions that one takes. The focus on ‘potential’ is telling of how coaching is quite oriented towards the future but at the same time relies on something that is thought to already exist within the self. As it happens, coaching concentrates on the individual. This means that all the work that is done in coaching centers on changing oneself in order to achieve the goals that one wants to achieve. 

This is reflected in the practices of coaching in the sense that they demand self-reflexivity and focus on getting to know oneself and reflecting for instance on one’s personality with the help of tests and exercises. In terms of employment, this means that questions that concern wider social structures or even organisational structures are left outside the scope of the things one needs to change. It thus begins to seem that change always starts within the individual self – and also that if there is a need for change it is the self that is at fault. In the case of unemployment then, for example, the structural reasons for unemployment are not accounted for but rather it is thought that if the individual just works hard enough to change themselves then they will also find employment – and if one is unemployed it just means that one has not yet found the ‘true self’ and the right goals that would solve the problem. In other words, if one does not find work, it is implied that this just means that one has not worked hard enough on improving oneself.

As a relational technology of the self, work coaching has to be read against the background of metricisation. It naturalises metrics and their attendant apparatus of control, scrutiny and intervention. The issue becomes a narrow one of ‘performance’ rather than one’s place over time within an organisation.

I’ve nonetheless become a bit obsessed with Bill Campbell. He turns up time and time again in business books about Silicon Valley. It also turns out he was actually a football coach originally:

 Son of a local school official, Campbell was born and raised in Homestead, Pennsylvania, near Pittsburgh. He attended Columbia University where he played football under coach Buff Donelli from 1959 to 1961. In his senior year, he was named to the All-Ivy Team. He graduated in 1962 with a bachelor’s degree in economics. In 1964, he obtained a master’s degree in education from Teachers College, Columbia University. He was head coach of Columbia’s football team, the Columbia Lions from 1974 to 1979. Prior to this he was an assistant at Boston College for six years. He met his first wife, the former Roberta Spagnola, while she was the assistant dean in charge of Columbia’s undergraduate dormitories.

He joined J. Walter Thompson, the advertising agency, then Kodak where he rose to run Kodak’s European film business. Hired by John Sculley he became Apple’s VP of Marketing, then ran Apple’s Claris software division. When Sculley refused to spin Claris off into an independent company, Campbell and much of the Claris leadership left. Since 1997, when Steve Jobs returned to Apple, Campbell has served as a corporate director on Apple’s board of directors.

Campbell became CEO of GO Corporation, a startup pioneering a tablet computer operating system. After successfully selling GO Eo to AT&T Corporation in 1993, Campbell was CEO of Intuit from 1994 to 1998. Campbell announced that he would be retiring as the Chairman of the Board of Directors at Intuit starting January 2016.

Campbell is an adviser to a number of technology companies, and was elected Chairman of the Board of Trustees at Columbia in 2005.

According to CNN Money, he is worth $200 million.

To what extent is it a marker of prestige to be coached by Campbell? Is it still a status symbol for lesser executives to be coached by lesser coaches? Do these celebrity coaches and celebrity clients underwrite the demand elsewhere? Do all these coaches have top level business experience?

This idea from Daniel Little really chimes with what I’m arguing in my chapter for the 5th CSO book. Life planning as blueprint is becoming ever less sustainable as the continuity of a subject’s context becomes ever less assured. This disrupts instrumental rationality because contextual assumptions about means become unreliable, while social and cultural change also throws up new opportunities which invite us to reconsider our ends:

We might think of life planning as being less like a blueprint for action and more like a navigational guide. We might think of the problem of making intermediate life choices as being guided by a compass rather than a detailed plan — the idea that we do good work on living if we guide our actions by a set of directional signals rather than a detailed map. Life outcomes result from following a compass, not moving towards a specific GPS point on a map.

There is an analogy with business planning here. Consider the actions and plans of a CEO of a company. His or her choices in concrete decision moments are guided by several important considerations: remain profitable; prepare the ground today for viable business activity tomorrow; create an environment of trust and respect among the employees of the company; make sure that company choices also take the wellbeing of the community into account; treat employees fairly; anticipate changes in the marketplace that might dictate change in process or product within the company. But there is no certainty, no fixed prescription for success, and no algorithm for balancing the goods that the firm’s leadership pursues. The successful firm will have built its success over a long series of decisions oriented towards the fundamental values of the business.

What I’m interested in is how it remains possible to shape a life under these conditions. One response is to embrace shapelessness. Another is to temporise, dividing planning up into manageable chunks which facilitate an instrumental rationality no longer sustainable over the life course as a whole. But the one that seems most sustainable is what Daniel Little details here as life planning as navigational guide.

How companies institutionalise certain forms of (quantifiable) reflexivity. From Marissa Mayer and the Fight to Save Yahoo! pg 10:

Starting in 1999, Google management used a system called Objectives and Key Results, or OKRs, to measure the effectiveness of its employees, divisions, and the company overall. The idea for OKRs came from Google investor John Doerr, the famous venture capitalist. Doerr got it from Andy Grove, who developed a similar system called Management by Objective during his successful run at Intel. In the OKR system, every Google employee would come up with a list of quantifiable goals every quarter. The employee would present this list to a manager for sign-off, then the approved goals would be entered into Google’s internal network, where everyone in the entire company could see them. The next quarter, the employee would meet with the manager again, review their performance, and get a score on their OKRs. That score would determine the employee’s bonus payment and ability to get a raise, a transfer, or promotion within the company. Starting in September 2012, Mayer introduced a clone of OKRs to Yahoo. She called them Quarterly Performance Reviews, or QPRs. Employees from Mayer’s direct reports on down would get a score every quarter, from one to five. A one meant the employee consistently “misses” goals, a two meant the employee “occasionally misses,” a three, “achieves,” a four, “exceeds,” and a five, “greatly exceeds.”

In this case, it was used to support a ‘rank and yank’ system, made slightly more palatable by ranking employees in terms of goals they’ve formulated themselves. From pg 10-11:

In effect, a target distribution meant Mayer wanted managers to put a certain percentage of the employees they managed in each of the five buckets. Ten percent would go into “greatly exceeds,” 25 percent into “exceeds,” 50 percent into “achieves,” 10 percent into “occasionally misses,” and 5 percent into “misses.” Then Mayer rolled out new policies wherein employee eligibility for bonuses, promotions, and transfers within the company would be based on their average score for the past three quarters. Employees with low enough scores would be asked to leave the company.

As apparently happened at Microsoft as well (an interesting case study), this brought employees into direct competition with each other. What interests me here is the disjuncture between the supposedly transparent standards employees are subject to and the utterly opaque consequences of the grading curve. Someone has to fail. So how do you know if you’ve done enough? Meeting your goals isn’t enough to be safe. You have to try and ensure you surpass your peers in everything you do. This doesn’t necessarily lead to the acceleration of work, but it does lead to its intensification.
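The arithmetic of this opacity is worth making explicit. The following is a minimal, purely illustrative sketch (not Yahoo’s actual system; the function names and threshold are my own assumptions) of how a forced distribution over the five QPR buckets interacts with eligibility keyed to a rolling three-quarter average:

```python
# Illustrative sketch of a forced-distribution ('rank and yank') review:
# employees are ranked by raw score, then slotted into buckets by the
# target percentages described above (10/25/50/10/5), regardless of
# whether they met their own goals in absolute terms.

def forced_distribution(scores):
    """Map employee -> bucket label by rank, using target percentages."""
    targets = [("greatly exceeds", 0.10), ("exceeds", 0.25),
               ("achieves", 0.50), ("occasionally misses", 0.10),
               ("misses", 0.05)]
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    buckets, i = {}, 0
    for label, fraction in targets:
        count = round(fraction * n)
        for name, _ in ranked[i:i + count]:
            buckets[name] = label
        i += count
    # Anyone left over through rounding lands in the bottom bucket
    for name, _ in ranked[i:]:
        buckets[name] = "misses"
    return buckets

def eligible(last_three_quarters, threshold=3.0):
    """Eligibility for bonuses/promotion keyed to the average of the
    past three quarterly scores (threshold is a hypothetical value)."""
    return sum(last_three_quarters) / len(last_three_quarters) >= threshold
```

The point the sketch makes concrete: because bucket membership depends on rank rather than on the goals themselves, an employee can meet every one of their self-formulated objectives and still land in “misses” if enough peers scored higher.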

At an event in Liverpool last week, I was asked by Steve Fuller what I understood responsibility to mean in a sociological sense. He was sceptical that I could support claims of responsibility given my understanding of human agency as situationally performative but biographically continuous. In essence I understood him to be asking: do I think there’s something about the human being in relation to which responsibility can be assigned? This is a question I’d never really thought about explicitly, though once I began to, I realised that it actually knits together the full range of my interests.

Part of my difficulty with the question is that I think ‘responsibility’ encompasses a number of different things which we need to unpack:

  1. Responsibility as moral agency: how an individual comes, through internal and/or external conversation, to assume a stance of responsibility towards their own actions. To me it seems obvious that this is a matter of what Charles Taylor calls disengaged agency. It’s a mode of engagement with the world that usually involves stepping back from social encounters in order to reflect on one’s own actions within them, though I do believe sometimes we confront these questions when in the flow of the social situation.
  2. Responsibility as interpersonal ascription: how an individual comes, through social interaction, to be held accountable for their actions. This can, but by no means necessarily does, lead to the first sense of responsibility as moral agency. This is about social judgement, holding someone to account in terms of putatively shared standards in relation to which their behaviour can be evaluated.
  3. Responsibility as structural enforcement: how an individual comes to be formally held responsible for their actions, in relation to codified rules and regulations which are sufficiently durable to be both enforceable and recognised as binding. Legal systems are the obvious example of this, but I’d include disciplinary proceedings in workplaces within this category as well. The point is that the process is formalised and the rules are codified. It’s not tied to the social situation, a term I use in Goffman’s sense, in the same way as the earlier forms of responsibility.

These are interconnected in complex ways. But by analytically distinguishing between them, we’re able to recognise how they can vary independently. Under contemporary social conditions, I would argue that we have seen the following changes:

  1. People are more likely to over-actively exercise moral agency, often to the point of blaming themselves for personal outcomes that are systemically produced. This individualisation contributes to the fragmentation of normative consensus, as individual reasoning acts as a vector of deviance amplification: the more intensively people think about these things, through the filter provided by their own particularity, the less likely they are to straightforwardly reproduce ‘common sense’.
  2. The interpersonal ascription of responsibility is becoming more contentious because of this fragmentation of normative consensus. If we can’t take ‘common sense’ for granted, interventions of this sort will tend to be experienced as arbitrary impositions of power. This leaves them experienced as inherently contentious, something I’ve written about as the ‘paradox of incivility’: when consensus breaks down, attempts to enforce civility are themselves experienced as rude and aggressive.
  3. ‘Common sense’ supplies the intuitions upon which enforcement is grounded. In its absence, normativity comes to seem less binding, incentivising alternative penalty-based enforcement that doesn’t attempt to seek grounding in moral agency. Margaret Archer describes this as ‘anormative regulation’ in an upcoming paper.

Having only recently grasped quite how interesting case law is, thanks to the conversation with Steve and Joseph, I’d now like to start to refine the outline I’ve sketched above and apply it to thinking through the challenges posed by emerging technologies.

What makes human beings distinctive amongst animals? This is an argument I found myself having a few times last week. I just came across a great passage by Martha Nussbaum, quoted on Brain Pickings, reflecting my own views on this. When I say ‘reflexivity is a defining characteristic of the human’, it’s a short hand for this broader proposition, expressed in this case more elegantly than I am able to:

Human beings appear to be the only mortal finite beings who wish to transcend their finitude. Thus they are the only emotional beings who wish not to be emotional, who wish to withhold these acknowledgments of neediness and to design for themselves a life in which these acknowledgments have no place. This means that they frequently learn to reject their own vulnerability and to suppress awareness of the attachments that entail it. We might also say … that they are the only animals for whom neediness is a source of shame, and who take pride in themselves to the extent to which they have allegedly gotten clear of vulnerability.

Following from our successful workshop earlier this year, we’re organising the first of what will hopefully become a regular reflexivity forum at the University of Warwick on May 24th. The intention is to provide a space in which people conducting empirical research into human reflexivity will be able to present work in progress, discuss issues they’ve encountered and meet others working on similar issues.

If you’d like to attend, could you let me know as soon as you can, as numbers will be limited for the event. If you’d like to present work in progress, please could you send a title and 100 word abstract. Hopefully we’ll have at least 20 minutes per speaker, but this depends on the numbers who are keen to speak.

One of the most interesting aspects of Margaret Archer’s work on reflexivity is her interest in how people sometimes seek to ‘blot out’ their experience. Her overarching concern is with the variability of reflexivity, something which I think is hugely important against an intellectual background in which most thinkers impute a uniform deliberative capacity to subjects, assuming they do recognise this capacity at all.

For instance while some people, at least some of the time, reason autonomously in a confident and detached way, addressing the question “what should I do?” quickly and effectively, others find that attempts to deliberate intensify affect rather than provide answers. These are perhaps the people most likely to seek to ‘blot out’ experience, to evade reflexivity through deliberate distraction. But I’ve argued elsewhere that the competitive busyness of the self-striving utility maximiser can represent a comparable form of ‘blotting out’, avoiding difficult questions of what really matters to them by throwing themselves into the events of the day.

Before we have a fully developed sociology of reflexivity, we need a sociology of ‘blotting out’ experience: a systematic understanding of the different ways in which people can seek to evade reflexivity and why they might pursue them. I was thinking about this today after encountering the description of the ‘machine zone’ at the start of the stunning book Addiction By Design, by Natasha Dow Schüll, pg 2:

Mollie recounts how her play began, and how it escalated. It started soon after she moved to Las Vegas with her third husband in the 1980s, when he taught her to play video poker on a miniature, handheld machine. “I became hooked on that amazing little machine. And then I graduated to the real thing.” Short stints at video poker on weekend visits to casinos turned into sessions of hours and then days. Her financial expenditure grew in step with her play, to a point where she was spending entire paychecks over two-day binges at machines. “I even cashed in my life insurance for more money to play,” she tells me. When I ask Mollie if she is hoping for a big win, she gives a short laugh and a dismissive wave of her hand. “In the beginning there was excitement about winning,” she says, “but the more I gambled, the wiser I got about my chances. Wiser, but also weaker, less able to stop. Today when I win—and I do win, from time to time—I just put it back in the machines. The thing people never understand is that I’m not playing to win.” Why, then, does she play? “To keep playing—to stay in that machine zone where nothing else matters.” I ask Mollie to describe the machine zone. She looks out the window at the colorful movement of lights, her fingers playing on the tabletop between us. “It’s like being in the eye of a storm, is how I’d describe it. Your vision is clear on the machine in front of you but the whole world is spinning around you, and you can’t really hear anything. You aren’t really there—you’re with the machine and that’s all you’re with.”

When placed in this context, we can see how a concern with the experience of ‘blotting out’ takes us beyond psychology by placing this evasion, in which people seek the embrace of a zone in ‘which nothing else matters’, within the broader development of digital capitalism and the declining capacity of non-elite collective agency to shape long-term political and economic trends.

Another startlingly illuminating point in Retrieving Realism by Dreyfus and Taylor. At loc 665, they observe how Heidegger’s early work “undercuts another basic feature of the classical picture: that the primary input is neutral, and is only at a later stage attributed some meaning by the agent.” This is a familiar point but I’ve never encountered it stated so lucidly before. It has important implications for how we conceive of digital distraction. Broadly, we could take two paths:

  1. Digital abundance presents agents with an overwhelming quantity of potentially relevant information to which they must attribute meaning, or forgo this with potential consequences 
  2. Digital abundance presents agents with an overwhelming quantity of potentially relevant information, which is already meaningful due to the relations of complementarity and contradiction which obtain between this novelty and already encountered variety (or forgo this with potential consequences)

The first view sees digital distraction as an information-processing challenge. The second view sees digital distraction as an existential challenge. This has important implications for how we make sense of it sociologically.

Yesterday saw the news that ‘Infidelity site’ Ashley Madison had been hacked, with the attackers claiming 37 million records had been stolen. The site is an online forum for infidelity, a dating site explicitly designed to facilitate affairs, something which potentially provoked the ire of the hackers. Or it could be the fact that users are charged a fee of £15 to permanently delete their records from the site, the efficacy of which the hackers dispute. This seems to be indicative of a broader trend in which dating sites as a whole were found by the Electronic Frontier Foundation to have failed to implement basic security procedures and to be near uniformly vague or silent about whether user data was deleted after the closure of an account.

This is a specific instance of a much broader category of problem which I’ve been thinking a lot about recently: escaping the filter bubble. I use this concept in a much broader sense than Eli Pariser’s original use in his (excellent) book. I see filter bubbles as being a matter of algorithmic enclosure but also of information security. In fact I would argue that the former inevitably poses questions for the latter, because filter bubbles rest upon the collection of personal information and intervention upon this basis. Filter bubbles always pose questions of information security because environments designed around them are always information-hungry and mechanisms of personalisation inevitably introduce opacity into interactions between users and a system in an asymmetric way. But I’d like to expand the concept of filter bubble to encompass the entire informational environment in which we increasingly find ourselves deliberately enclosed through our use of digital technology. Not all of this is applied algorithmically but I would argue, somewhat crudely, that we can talk about greater or lesser tracts of everyday life being lived via digital mediation in a filter bubble characterised by varying degrees of enclosure.

What interests me are experiences where we don’t realise we’re in a filter bubble. The questions of information security don’t occur. We live with ontological security, sufficiently comfortable with this technology (something which personalisation can contribute to) to act ‘as-if’ the filter bubble doesn’t create risks for us. Will Davies offers an analogy which captures this effectively:

I have a memory from childhood, a happy memory — one of complete trust and comfort. It’s dark, and I’m kneeling in the tiny floor area of the back seat of a car, resting my head on the seat. I’m perhaps six years old. I look upward to the window, through which I can see streetlights and buildings rushing by in a foreign town whose name and location I’m completely unaware of. In the front seats sit my parents, and in front of them, the warm yellow and red glow of the dashboard, with my dad at the steering wheel.

Contrary to the sentiment of so many ads and products, this memory reminds me that dependence can be a source of deep, almost visceral pleasure: to know nothing of where one is going, to have no responsibility for how one gets there or the risks involved. I must have knelt on the floor of the car backward to further increase that feeling of powerlessness as I stared up at the passing lights.

But when this ontological security is punctured, we can see risks everywhere. What are people doing with our data? What could they be doing with our data? How are our online environments manipulating us? I’m interested in using ontological security as a conceptual frame through which to understand the urge to escape the filter bubble on a psychoanalytical level. As I develop this line of argument, I need to work on making the exact sense of the underlying concept clearer, but leaving that aside for now, I think it offers a really interesting frame for exploration. Here are the propositions I’m going to come back to in order to develop further:

  1. We are enmeshed within a filter bubble through our everyday use of digital technology
  2. The filter bubble is deliberately designed, indeed redesigned on a sometimes hour-to-hour basis, driven by complex and opaque interests
  3. Our orientation towards the filter bubble is extremely variable, even over time in one life, let alone between people

But for now what I’m interested in is how we escape the filter bubble. When we see the endemic risks, when the reassuring cocoon of ontological security recedes, what do we do? The problem is that not everyone is equally well positioned to escape the filter bubble. It necessitates technical knowledge, time and energy. Some people don’t care but know what to do. Some people do care but don’t know what to do. Most people fall between these two poles at different points in relation to specific issues. What I’m interested in is how any definite attempt to escape the filter bubble leads to an intensification of cognitive burdens at a time of endemic acceleration. If everyone feels rushed, how does the urge to escape the filter bubble contribute to that experience, constituting just one more thing to worry about? How does this in turn contribute to the problem of what I’ve elsewhere described as cognitive triage? I can imagine an emerging profession, consultant digital escapologist, paid to help the cash-rich but time-poor manage their information security.

I went to see an excellent exhibition about children’s television yesterday afternoon, intended to explore “how the magical programmes of our childhood have created memories and nostalgia in adults and children alike”. The possibility of such an exploration presupposes some degree of collectivity. The exhibition was ambiguous at points but there was a clear undercurrent of ‘our memories’ informing the curation, something which was particularly pronounced for me given I was seeing it with a Polish friend for whom many of these were not her memories. It would be possible to interpret such shared cultural reference points as collective memories or collective horizons, albeit ones delineable by age cohort, but it occurred to me that we could more usefully read this in terms of temporality and routine. This snippet stood out to me yesterday and I’ve been thinking about it since:


This degree of synchronisation, perhaps itself dependent upon a long-discarded Reithian vision, seems jarringly anachronistic in retrospect. But it’s also much easier to recognise it empirically than is the case for contemporary routines of cultural consumption and the aggregate patterns they produce in wider social life. I’m not just talking about ‘our’ having watched the same television at the same time, but rather that ‘same time’ as being embedded in a broader cluster of routines which constitute the texture of everyday life.

To be clear: I’m not suggesting that the existence of the “toddler’s truce” necessarily brought about its supposed effect. Nor that cultural consumption, or its highly synchronised and normatively charged absence from 6pm to 7pm, necessarily anchored routines or was even a particularly significant component of them. But it’s left me thinking about how temporal structures can be used to understand cultural memory, as well as how these are both facilitated by but also work to encourage clusters of everyday routines which will, at least as a whole, contribute over time to the constitution of the people who are leading them.

A couple of years ago I purchased a Nike Fuel Band, partly out of a curiosity driven by my nascent interest in self-tracking and partly out of a desire to rationalise not going to the gym. If I was planning to conduct research on self-tracking practices then it seemed important to me to actually try them myself. However over the following years, my interest in self-tracking became downgraded to that of something like urbanism, as a topic that fascinates me but that I realise I have nothing useful to say on, while my engagement (entrapment?) in self-tracking practices remained, first through the fuel band then two successive jawbone bands.

I’ve been given cause to reflect on this recently by the fact that my jawbone has broken twice in the space of a week (ouch), giving me a respite from the metricised tyranny to which I had merrily subjected myself over the previous years. In defence of the jawbone: the soft wakeup function can be an extremely pleasant way to wake up. It starts buzzing up to half an hour before a set time when it detects, albeit by way of questionable proxies, that you are sleeping most lightly, with the intention of reducing drowsiness. I think there’s something to this but there’s also an obvious invitation to confirmation bias: if you set a device to wake you up without feeling drowsy then you’re much more likely to ask yourself ‘am I feeling drowsy?’ when you wake up and attribute its absence to the magical powers of the band. The sleep tracker was also the first and only experience I’ve had of ‘self-knowledge through numbers’. It turns out I had a persistent habit of going to bed very early when I was sleep deprived and then taking hours to get to sleep: it would usually take me 10 minutes to get to sleep if I went to bed between 10pm and midnight, but hours if I went to bed earlier. Thus undermining the point of going to bed early. I also saw for the first time how much alcohol would undermine the quality of my sleep, prompting a year-long experiment with cutting back on and then completely giving up alcohol, which I’m now in turn giving up on (I missed red wine & craft beer) but which was nonetheless enormously healthy for me as a person.

Now those defensive remarks are out of the way: the jawbone is fucking creepy. I’ve written about the idleness alarm and how readily the concept would lend itself to invasive applications. But I’m wondering now about how systematically the measurements have tended to crowd out the value of what is being measured within my own psyche. My standard defence of self-tracking had been that voluntary self-tracking is an augmentation of reflexivity: if you reflexively decide that exercise is good and you want to incorporate more exercise into your life, these technological practices can be useful tools to overcome some of the all-too-human propensities which undermine the projects of self-cultivation we seek to pursue. Furthermore, critics of self-tracking often mistake the narrative of self-tracking (self-knowledge through numbers) for its moral psychology, something which I think is empirically variable but I suspect has far more in common with neo-ascetic regimes like ‘lifestyle minimalism’ and ‘life hacking’ than these critics tend to recognise. The practices, the devices, the contexts and the sensibilities upon which the diffusion of ‘self-tracking’ depends may all be new. But this self-self relation simply isn’t and anyone who fails to recognise this has a poor grasp of ‘the self’, its history or both.

Nonetheless, what I’m now recognising is how what can be reflexively taken up as an extension of one’s agency – in order to increase our capacity to act on 2nd order desires (“I want to want to exercise”) in the face of 1st order whims (“I don’t want to go to the gym today”) – nonetheless acts back iteratively upon the agent in a way that moulds their dispositions towards reflexivity. What do I mean? Firstly, self-tracking practices are outcome orientated. What matters is a completed activity. This doesn’t magically remove your capacity to enjoy an activity but it does militate against it: if you’re going for a walk because your jawbone tells you to, it’s not impossible that you’ll nonetheless enjoy the walk, the scenery, being outside etc but the mentality of self-tracking never encourages and sometimes actively undermines the attentiveness necessary for this enjoyment to emerge during the walk. Secondly, this mattering is unstable unless the completed activity is measured in a reliable way: the whole edifice starts to crack if you begin to think about how the instruments may be deliberately or accidentally gamed, as well as the spheres of errancy (e.g. sleep vs. lying perfectly still unable to sleep) that become obvious once you’ve used a band for a bit. That this activity matters to you necessitates continued faith in the instruments and your use of them, perhaps ontological security in the sense of a willingness to act ‘as if’ the measurement is as objective as it says it is. Thirdly, this mattering is contingent upon continued submission to the system. If your band breaks or you cease using it, perhaps switching to a competitor, the meaningfulness of what you’ve been doing is imperilled in proportion to the scale of the technological transition.

This is all a long-winded way of saying that I’ve changed my views on self-tracking. I do find it creepy after all. But I still think many of the critics misunderstand exactly what’s going on here. I think cessation of self-tracking is an enormously important empirical topic, without which discussion of self-tracking will inevitably remain prone to over-generalisation. We also need longitudinal qualitative studies of self-tracking, serious and extended versions of the auto-ethnographic reflections I’ve tried to outline here, in order to better understand how these activities unfold temporally in a way able to change both the person and the activity.

I just cut this from my chapter for the upcoming CSO book. I don’t think it’s very good but I’m still trying to develop the underlying point so any thoughts are much appreciated:

To talk of ‘interruption events’ should not be construed as a narrow issue of decreased performance, such that this putative fracturing of focus amounts to a generalised tendency towards decreasing efficiency in practical activity. The psychological literature on task interruption is inconsistent to this end, finding evidence both for interruptions leading to an increase in the time taken to perform a task but also at times to a decrease, which has been attributed to the increased arousal that variably results from interruption or to strategic responses to multiple interruptions (Altmann and Trafton 2007: 1079). In fact this latter point is crucial for an analysis that foregrounds reflexivity: the awareness subjects have of their environmental propensity for interruption and its implications for what matters to them can and does feed into reflexive calibration of that environment or into more extended projects seeking to transform self and/or circumstances (not all of which will manifest empirically in terms of the limited variables which are the focus in the cognitive psychology of attention control). The concern of the present paper is not with the duration of interruptions or with the resumption lag (the time between the interruption and the first subsequent task-related action) but rather with the social production of those interruptions whose cognitive consequences these rather fine-grained instruments seek to measure in an experimental setting[1]. This entails generalizing about categories of interruption events (e.g. receiving a notification of an e-mail via a smart phone) as tending to be of certain durations and involving a likely range within which the resumption lag falls. The ensuing cognitive lag, encompassing reacquaintance with the task at hand and/or one’s place within it (Ratwani and Trafton 2008: 679), will be individually trivial but might nonetheless be aggregatively significant if such events occupy increasing portions of a subject’s waking life.

However the exercise of reflexivity is an activity unlike many others. Whereas resumption of practical activities often depends upon environmental cues, for instance the arrangements of material within a workspace, these are absent for the resumption of reflexivity. Furthermore, reflexivity may not be categorized as a task by subjects (represented at best in terms such as ‘clearing my head’ or ‘working out what to do about X’), entailing a different dynamic of resumption lag to other forms of practical activity. For instance Wilson et al (2014) sought to better understand time use survey data recording that 95% of American adults had undertaken at least one leisure activity in the past 24 hours but 83% reported having spent no time at all ‘thinking or relaxing’. While we should not conflate the exercise of reflexivity with deliberately pursuing its exercise as an activity (conceptualizing it as such and making time available for it), this finding suggests that sustained time for reflection is neither pursued nor, in many cases, experienced as something generally possible.

Such interruption events are clearly not new. In fact the possibility of an interruption in this sense is intrinsic to the faculty of reflexivity: interruption events begin to be possible once there is something to be interrupted. The specific claim being made here concerns the escalation of such events and the implications thereof for personal reflexivity. Both the former and the latter are variable. Different social contexts (particularly, though not exclusively, in relation to their socio-technical dimensions) involve different propensities towards interruption with differential implications for modes of personal reflexivity. The present paper focuses upon digital devices and their attendant socio-technical infrastructure as encountered in everyday life (conceived schematically in terms of a transition from web 1.0 to web 2.0 and an, as yet incomplete, web 3.0) but this is for the sake of brevity and the notion of interruption events should not be restricted to digital technology. The proposal being made is that any history of reflexivity implies a corresponding history of interruption events, with our understanding of the former being enriched by a greater understanding of the latter. Many resolutely non-digital factors would enter into a history of interruption events and these have in fact been studied in depth, just not under the rubric being proposed here: for instance historical work on the pervasive lack of solitude prior to the 18th century[2] (Taylor 1989, p. 291) and Simmel’s reflections on the “essentially intellectualistic character of the mental life of the metropolis” that emerges from the preponderance of novelty within the urban environment (Simmel 1905).

In fact Simmel’s famous essay illustrates the important point that will be developed later in this section: the relationship between interruption events and reflexivity should not be construed in a linear way, such that people in general are seen to become increasingly unable to sustain deliberations with the multiplication of interruption events. For Simmel the urban environment, particularly the “calculating exactness of practical life” in which the “money economy” has “filled the daily life of so many people with weighing, calculating, enumerating and the reduction of qualitative values to quantitative terms”, contributes to the intensification of reflexivity in one dimension (instrumentality), while tending towards its diminution in another (normative evaluation) as studied indifference serves to protect against the “disturbances and inner upheavals” which would otherwise be provoked by the “shifts and contradictions in events” that characterise life in the metropolis (Simmel 1905, p. 12). So while the focus of this paper is necessarily limited for practical purposes, the scope of the proposed concepts is not: interruption events are produced with varying degrees of frequency in different socio-technical environments and exercise a tendential power to curtail the scope of reflexivity by serving to interrupt extended internal conversation. Any particular interruption event is trivial, potentially serving to interrupt a single extended deliberation; it is rather their (patterned) multiplication over time which exercises a conditioning influence upon personal reflexivity[3]. The notion of a socio-technical environment invoked here is intended to reflect the involuntariness with which technology impacts upon our lives: changes in our environment occur because of other people’s uses of technologies and the behaviours facilitated by them, even if a given individual is not engaged in this way (Weller 2012, p. 126).
The propensity of a socio-technical environment to multiply interruption events is in this sense non-voluntary: one can seek to evade the interruptions, incurring varying degrees of costs through doing so, but the tendency for interruption events to increase is a function of the distribution of technology throughout the environment and its embedding within social processes.

[1] An important question concerns the differences between interruptions in an experimental setting and those in everyday life. For instance, it seems plausible that lay normativity plays a role in intensifying or diminishing the cognitive costs of interruption events (e.g. the extent to which the task matters to the subjects concerned) in a way almost entirely excluded by the games constructed for purposes of experimental work.

[2] The widespread practice of the ‘second sleep’, leaving a period of wakefulness between two stretches of sleep, could be seen as a response to this pervasive lack of solitude, with Williams (2005) reporting that “Some, apparently, lay quietly and simply reflected on events of the preceding day or contemplated the day to come” or used the opportunity for prayer. The point can be overstated though, with others using the time for conversation or ‘intimate relations’.

[3] Often, as will be discussed, reflexively provoking responses to these recurrent interruptions to reflexivity.

A few years ago I did an interview with Andrew Sayer about his book Why Things Matter To People. It’s one of my favourite books but the podcast got lost twice amidst transitions from one computer to another, as well as being forgotten for a long period midway through my PhD. I’m pleased I’ve managed to recover it and it can now go live: