This expression, used by Alain de Botton in his How Proust Can Change Your Life (pg 42), stood out to me. He uses it in relation to the morning news, reflecting on how reporting inevitably strips away from the reality of what is reported on. This is an example of a broader tendency for human experience to “be stripped of the more obvious signposts by which we guide ourselves when ascribing importance”. To use the language of Andrew Sayer, factual reporting strips away what matters to people about what is being reported on. The distillation involved in reporting on the facts of a case unavoidably subtracts how those states of affairs move people and motivate them, leaving us with an arid picture susceptible to wide circulation when so many other accounts compete for our attention. The abbreviation of human experience is a practical necessity which detracts from our understanding of others and the world around us, even as it contributes to our knowledge of those conditions.

It might be argued that social media highlights human experience in a new way, though I would suggest it is demotic in the sense of reality television rather than democratic in the sense of participatory. It foregrounds human experience through templates and incentivised interaction, increasing the flow of human experience in public consciousness but at the cost of its integrity. Abbreviation is intensified rather than attenuated, with so many shards of experience flying around that radically truncating our attention is the only way to cope. What gets through is what is spectacular, jarring or enraging. It is not a return to human experience but its last gasp, with meaning and mattering mangled by the machinery of abbreviation. Under these conditions, what de Botton calls the finger-placing ability becomes important:

The value of a novel is not limited to its depiction of emotions and people akin to those in our own life, it stretches to an ability to describe these far better than we would have been able, to put a finger on perceptions that we recognise as our own, yet could not have formulated on our own. (pg 28)

I’ve always been fascinated by these depths. The struggle within us to articulate something and the relief that comes when we find a way to say it. Often though we change in the process of saying it, as we suddenly recognise a state of affairs within us by virtue of being able to express it. The opposite of what de Botton calls abbreviation is what Charles Taylor calls articulation. Resources we can draw on in articulation are invaluable in an age of radical abbreviation, helping us become “newly attuned to pick up certain objects floating through consciousness” such that we are “drawn to the shades of the sky, to the changeability of a face, to the hypocrisy of a friend or to a submerged sadness about a situation which we had previously not even known we could feel sad about” (pg 29). The articulacy we develop expands outwards, sensitising us to the abbreviation we encounter around us and leaving us more adept at recovering the reality subsumed by its thin expression. This is not a call for slowness, as much as for elaboration. There’s a value in being long-winded, even if it’s unlikely to get you read.

I woke up with this phrase stuck in my mind recently, after a strange and vivid dream. It involved a landscape somewhere between Deep Space Nine and Snowpiercer, dark corners filled with metallic pools and steam hissing across braying crowds. I can’t remember the narrative of the dream but a crucial idea from it remains clear in my mind.

The Last Man is about the suffocation of growth rather than the end of the impulse to grow. It is the end of resonance rather than the cessation of our search for it. It is the loss of our capacity to give birth to stars and a forgetting that we ever had it:

And thus spoke Zarathustra to the people: It is time for man to fix his goal. It is time for man to plant the seed of his highest hope. His soil is still rich enough for it. But this soil will one day be poor and exhausted; no lofty tree will be able to grow from it. Alas! The time is coming when man will no longer shoot the arrow of his longing beyond mankind— and the string of his bow will have forgotten how to twang! I tell you: one must have chaos in one, to give birth to a dancing star. I tell you: you have still chaos in you. Alas! The time is coming when man will give birth to no more stars. Alas! The time of the most contemptible man is coming, the man who can no longer despise himself.

The dream left me with a vivid sense of the claustrophobia of imminence which might still be felt after this forgetting. The sense of being hemmed in, aspiring to be something more while denied the conditions which would make this growth possible. Many of the questions I’m interested in ultimately relate to this feeling, its sociology and psychology. It’s odd to realise that I’m only now coming to understand the final object of years of work.

I saw a wonderful exhibition this weekend, collecting work by Alex Prager combining photography and film in intricately staged hyper-real scenes. The collection that has been playing on my mind since seeing it is Face In The Crowd. If you click on the screenshot below, it will take you to the website where you can see the work:

[Image: Screen Shot 2018-07-30 at 13.50.27.png]

The accompanying notes described how these are “dynamic tableaus where individual characters are presented in equally sharp focus, seemingly lost in their own internal conversations”. It reminds me of Hannah Starkey’s work in its fascination with how interiority plays out in social scenes, showing how private experience nonetheless has a public existence.

However I found the staging of the scenes troubling, as much as I recognise the intention behind them. It feels like the relationality is washed out, as if collectivity is exhausted by the artefact of the social situation. There’s a strange emptiness between inner and outer, with interaction reduced to staging such that the bonds of social life appear as little more than fragile constraints.

Each of these scenes is a collage of individuals rather than a collective, creating images which are sociological in their intention but not in their enactment. Individuals are either lost in the reality of their own lives or looking forlornly through the artifice of shared reality, as is the case with the red-haired woman in the image above. It foregrounds that artifice but also inflates it, losing track of how it functions as a connective tissue which knits together individual lives in the mundane interactions throughout the day.

It is scaffolding which often fades into the background, facilitating the relationality which is lost in these scenes. It is a deliberately stilted vision of the social, hugely successful in its staging and producing an aesthetic which I find immensely unsettling.

A few months ago, I recounted to a collaborator the details of a foolish mistake I made when planning a special occasion. Assuming the cake would be the easiest item on a long to-do list, I left this till last, failing to recognise that cakes of this sort would require a lot of notice. It left me phoning round in a panic, until I eventually found someone who could do it at short notice. My collaborator remarked that he too could have seen himself making such an assumption, recognising aspects of himself in the assumption I had made and the problem it had created. ‘Easiest’ to me was coded as the most immediate and straightforward task, considered in terms of its internal logic, rather than being the most predictable, quickest or controllable. I suspect this assumption reveals something quite deep about how I’m orientated towards the world, regardless of the counter-factual question of whether I might have planned this process more carefully had I been less stressed about the impending event.

This has left me thinking about the sociology of stupid assumptions. By this I don’t mean those occasions on which we make a mistake due to rushing, error or stress that could easily have been avoided. I mean those mistakes which result from deeply held, though flawed, assumptions running up against the reality of the world. These are assumptions we might not knowingly hold yet which find themselves revealed through our actions. They are the common threads which bind together persistent missteps as we make our way through the world, reflecting a subtle incongruity between the structures of our thought and the structure of the world. They can become things we are aware of and reflect upon, even things which we struggle against. But they are persistent and deep-seated, raising the question of where they come from.

The obvious answer to this is the Bourdieusian one, finding the origins of these habits of thought in our original social context. The assumptions of our natal context get reproduced in the assumptions we make about the world as adults, with contextual features sedimented into cognitive habits that reflect the world as we were brought up to exist within it rather than the way it necessarily is. This is a brief sketch but I hope it’s not a facile one because I respect this line of argument and I believe I understand it, even if it’s not possible to convey its depth and sophistication in a short blog post.

Nonetheless I wonder if it can account for the feeling of recognition which my collaborator felt when recognising my stupid assumption as something akin to his own? Can it account for the recognition we come to in ourselves, often isolated from an awareness of class and upbringing because it relates to an assumption so specific that it can be claimed to be inherited only in the tautological sense that it must have come from somewhere? Can it account for the role of technologies in fomenting these assumptions? In my case, I suspect the problem is as much to do with the constraints of the to-do list, something I rely upon to an immense degree (as does at least one of my parents), failing as it does to capture the contingencies surrounding a task in the sequential logic it imposes upon our tasks. These aren’t really counter-arguments as much as requests for elaboration, reflecting my newfound belief that the sociology of stupid assumptions tracks some of the most interesting questions in social theory.

Why do people do what they do? It is a question at the heart of the human sciences but it is also one we ask in everyday life. However the way we ask it often tracks our prior feelings towards the people we ask it of. For instance, as Jana Bacevic has argued, many fail to grasp the agency of managers and consultants within the ‘neoliberal university’ and through doing so misdiagnose both the intentions of these managers and the system their actions are contributing to.

When people seem to embody systemic tendencies we are critical of, it inevitably slides into a disapproval of the people themselves. They are reduced to vectors of these organisational processes, revealing a quotidian Althusserianism which is important to understand if we want to grasp how organisations and systems are imagined by participants within them.

There was an example I came across earlier today which made me think about the role of distance in this process. This is an observation which Richard Brooks makes on pg 14 of his Bean Counters: The Triumph of the Accountants and How They Broke Capitalism:

Few arrive with much sense of vocation or a passion for rooting out financial irregularity and making capitalism safe. They are motivated by good income prospects even for moderate performers, plus maybe a vague interest in the world of business. Many want to keep their options open, noticing the prevalence of qualified accountants at the top of the corporate world; one quarter of chief executives of the FTSE100 largest UK companies are chartered accountants.

The exercise of agency here is provisional and tentative. Rather than rapacious instrumentalists concerned only with maximising their income, we find people keen to keep their options open and seeking agreeable outcomes without hemming themselves in. It is similar to the claim made by Kevin Roose in his superb account of the everyday lives of young financiers on Wall Street:

As strange as it sounds, a big paycheck may not in fact be central to Wall Street’s allure for a certain cohort of young people. This possibility was explained to me several weeks before my Penn trip by a second-year Goldman Sachs analyst, who stopped me short when I posited that college students flock to Wall Street in order to cash in. “Money is part of it,” he said. “But mostly, they do it because it’s easy.” He proceeded to explain that by coming onto campus to recruit, by blitzing students with information and making the application process as simple as dropping a résumé into a box, by following up relentlessly and promising to inform applicants about job offers in the fall of their senior year—months before firms in most other industries—Wall Street banks had made themselves the obvious destinations for students at top-tier colleges who are confused about their careers, don’t want to lock themselves in to a narrow preprofessional track by going to law or medical school, and are looking to put off the big decisions for two years while they figure things out. Banks, in other words, have become extremely skilled at appealing to the anxieties of overachieving young people and inserting themselves as the solution to those worries. And the irony is that although we think of Wall Street as a risk-loving business, the recruiting process often appeals most to the terrified and insecure.

I’m not suggesting we should take people’s accounts of why they do what they do at face value. If we did, the space to be critical of power and hierarchy would soon collapse in the face of an endless succession of people with apparently good intentions of varying degrees of systematicity. But it does seem important that we also avoid taking our attributions of agency at face value, interrogating what we are imputing to people and the reasons why we might be imputing them.

After the busiest few months of my life, I’ve spent the last couple of days doing what feels like nothing. I’ve been for a shave, bought a graphic novel, seen a (crap) film, had a walk, been out for dinner and had a massage. But otherwise I’ve just read, slept and watched TV. It’s obviously not the case that I have done nothing because I have just listed a sequence of activities which I’ve undertaken. But these have been chosen haphazardly, reflecting little more than my fleeting whims and my ready-to-hand preferences as I intentionally refrain from doing anything even remotely challenging.

This has left me reflecting on changing tempo in existential terms. What happens when we move between two periods of time with distinct and different rates of motion? The last few days have been a striking example of that for me, as I’ve moved from a crescendo of activity over the last four weeks (six trips in three countries, doing lots of work as I went) culminating in the Undisciplining conference in Gateshead last week before suddenly embracing nothing. My days have been increasingly defined by rhythms outside my immediate control, leading up to four days in which the closest I had to self-directed time was reading before bed. By changing tempo I refer to the pacing of activities (what do I have to do each day?) and their self-directedness or otherwise (to what extent am I choosing to do it?) but there are other ways in which we could characterise these temporal shifts.

The sudden transition into deliberate inactivity strikes me as interesting. Though it does occur to me that intellectualising the experience could easily be read as part of a pathological flight from rest which I’m prone to. I often feel strange when I change tempo. Many people do and I think this is at the heart of what people mean when they talk about coming down from events. The drug analogy works because it tracks what can meaningfully be called a different state of consciousness resulting from an (over) abundance of stimuli, a sudden increase in the quantity of interaction, an absence of time alone, a spatio-temporal break from normal routine or some combination thereof. But these experiences, returning to normal life after a specific intense event, should be understood as a particular instance of a more general trend. Put it this way: what do we mean when we say an event was ‘intense’? It conveys that much went on within a particular period of time. We understand these judgements because we recognise the intensivity of our days varies in patterned ways which are susceptible to analysis, even if sociologists of time are the only people who would use language like this to make sense of these patterns.

The opposite tendency for me is one particularly associated with when I lived alone. An extended period of working from home would often leave me failing to converse with anyone for up to a few days, beyond perhaps talking on the phone. When I’d then enter a period of more intense obligations, it would often be hard to get into the swing of things. Certainly, the fact most of these were on the other side of the country from where I then lived imposed a spatio-temporal logic on the sequences which accentuated the change of tempo. But I still get similar feelings now, even though my life is ordered in a way which makes the contrast between phases less stark and sharp than it would otherwise be.

If social synchronisation is unwinding then changing tempo will become an ever more common experience. If we can’t take temporal regimes for granted, as collective realities grounded in shared institutions, temporal heterogeneity within a life will become increasingly normal. Therefore I think it’s important we understand experiences like changing tempo, as well as the mechanisms underlying them and accounting for commonalities and differences in how we experience them.

One of the key ideas of my PhD was fateful moments, points in our life which constitute turning points and shape the person we become. I argued the epistemology of such moments is more complex than it might initially appear to be, as turning points have a narrative as well as a biographical existence. The stories we tell about our lives rarely map cleanly on to the causes which shape them, being the means through which we come to terms with what has happened rather than the way in which we diagnose why it happened in the way that it did. Turning points give shape to narratives and are often easy to identify but fateful moments are more elusive.

There are points which might have proved fateful, opportunities to act differently which might not be clear to us in the moment. We can recognise this in hindsight, identifying the point at which we could have backed away or fully committed, open moments in which we had freedom to act before the tides of habit and commitment generated a momentum of their own. There’s a wonderful example of this on pg 223 of Donna Tartt’s The Secret History. After a disturbing revelation, Richard briefly questions his relationship with his new friends, fleetingly contemplating exit at the point where his entanglement with them seems likely to pass the point of no return:

Pausing unsteadily on the stairs, I looked back at Francis’s door, indistinguishable from the others in the long faceless row.

I suppose if I had a moment of doubt at all it was then, as I stood in that cold, eerie stairwell looking back at the apartment from which I had come. Who were these people? How well did I know them? Could I trust any of them, really, when it came right down to it? Why, of all people, had they chosen to tell me?

It’s funny, but thinking back on it now, I realise that this particular point in time, as I stood there blinking in the deserted hall, was the one point at which I might have chosen to do something very different from what I actually did. But of course I didn’t see this crucial moment then for what it was; I suppose we never do. Instead, I only yawned, and shook myself from the momentary daze that had come upon me, and went on my way down the stairs.

Fateful moments that might have been pose their own epistemological challenges, as the narrator alludes to in the extract above. But unlike fateful moments in the positive sense, identified by a person as they make sense of how they came to be where they are and who they are, fateful moments which escaped us are engaged with in a diagnostic moment. We identify these when we look back on our lives, reflecting on how things could have been otherwise. They have a different mode of existence to us as we make sense of our own lives, tied to a sense of our lives as taking on definitive shape as our decisions accumulate. In this sense, I think recognising fateful moments which might have been has immense existential significance. We cannot accept the mess of life without doing so.

In a recent paper, I’ve argued we find a cultural project underpinning ‘big data’: a commitment to reducing human being, in all its embodied affective complexity, stripping it of any reality beyond the behavioural traces which register through digital infrastructure. Underlying method, methodology and theory there is a vision of how human beings are constituted, as well as how they can be influenced. In some cases, this is explicitly argued but it is often simply implicit, lurking beneath the surface of careful choices which nonetheless exceed their own stated criteria.

It’s an argument I’m keen to take further than I have at present and reading Who Cooked Adam Smith’s Dinner by Katrine Marçal has left me interested in exploring the parallels between homo economicus (and why we are invested in him) and the emerging homo digitalis. Marçal writes on pg 162 of the allure of the former, misunderstood if we see it as nothing more than an implausible theoretical construct or a mechanism to exercise influence over political decision-making:

Many have criticized economic man’s one-dimensional perspective. He lacks depth, emotions, psychology and complexity, we think. He’s a simple, selfish calculator. A caricature. Why do we keep dragging this paper doll around? It’s ridiculous. What does he have to do with us? But his critics are missing something essential. He isn’t like us, but he clearly has emotions, depth, fears and dreams that we can completely identify with. Economic man can’t just be a simple paper doll, a run-of-the-mill psychopath or a random hallucination. Why, if he were, would we be so enchanted? Why would we so desperately try to align every part of existence with his view of the world, even though collected research shows that this model of human behaviour doesn’t cohere with reality? The desperation with which we want to align all parts of our lives with the fantasy says something about who we are. And what we are afraid of. This is what we have a hard time admitting to ourselves. Economic man’s parodically simple behaviour doesn’t mean that he isn’t conjured from deep inner conflicts

What makes homo economicus so compelling? Marçal roots this allure in a denial of human dependence, describing on pg 155 how our fascination with “his self-sufficiency, his reason and the predictable universe that he inhabits” reflects discomfort with our once having been utterly dependent on others, “at the mercy of their hopes, demands, love, neuroses, traumas, disappointments and unrealized lives”, as well as the inevitability that we will be so again at the other end of the life-course. But he also embodies a vision of what life should be like between the two poles of dependency, as she writes on pg 163:

His identity is said to be completely independent of other people. No man is an island, we say, and think that economic man’s total self-sufficiency is laughable. But then we haven’t understood his nature. You can’t construct a human identity except in relation to others. And whether economic man likes it or not – this applies to him as well. Because competition is central to his nature, his is an identity that is totally dependent on other people. Economic man is very much bound to others. But bound to them in a new way. Bound to them. Downright chained to them. In competition. If economic man doesn’t compete, he is nothing, and to compete he needs other people. He doesn’t live in a world without relationships. He lives in a world where all relationships are reduced to competition. He is aggressive and narcissistic. And he lives in conflict with himself. With nature and with other people. He thinks that conflict is the only thing that creates movement. Movement without risk. This is his life: filled with trials, tribulations and intense longing. He is a man on the run.

If I’m right about the existence of homo digitalis, a clear vision of human constitution underpinning ‘big data’*, we can ask similar questions about this truncated, eviscerated, predictable monad. So complex when we look up close, so simple when we gaze down from on high. Our individuality melts away in the aggregate, leaving us no longer overwhelming but simply overwhelmed. Manageable, knowable, stripped back. Why might this be an appealing vision of human kind? Who might it be appealing to? I’m sure many can guess where I’m going with this, but it’s a topic for another post.

*A term I use to encompass digital social science, commercial and academic, as well as the organisations and infrastructures which it facilitates.

Digital Infrastructures: Poetics, Politics and Personhood – AAA San Jose 14-18 November 2018
Lorraine Weekes (Stanford University)
Gertjan Plets (Utrecht University)

Government databases, digital archives, online voting systems, and e-portals enabling the submission of everything from insurance claims to income tax returns increasingly define mundane engagements between citizen-users and a suite of public and private institutions across social arenas. Because of the efficiency and transparency digital technologies are seen as affording, reliance on digital infrastructures has become widely supported on the ground. At the same time, the sociopolitical structures and assumptions encoded in many of these infrastructures—and the entanglements they produce—have received little attention. The tendency of infrastructure to remain invisible until something goes wrong is perhaps especially acute in digital and high-tech contexts where scale, technological complexity, and physical diffusion encourage black boxing. By putting the politics and poetics of digital infrastructure into the limelight, this panel will consider the historical and ethnographic dimensions of digital infrastructures and how they produce individual subjectivities, mediate power relationships and further existing reifications of the social across the globe. By bringing the theoretical insights of the burgeoning anthropology of infrastructure and bureaucracy to bear on digital networks and assemblages, the papers in this panel endeavor to make the materiality, social-embeddedness, and historical contingency of digital infrastructure visible.

Please submit an abstract before April 2 or send enquiries to g.f.j.plets@uu.nl

What is a ‘student’? To many outside higher education, such a question would seem absurd. A student is “a person who is studying at a university or other place of higher education”. But what this means has undergone profound change in recent years, such that ‘the student’ as a category, as well as a material factor within the university, encompasses a whole range of mutually exclusive roles. This is how Clive Barnett describes them in an incredibly insightful blog post:

And it’s worth noting, in the middle of all this, just how variable the subject of ‘The Student’ has become. It’s easy to bemoan the idea that students are increasingly treated as consumers, but in fact students are figured in various ways in contemporary higher education policy and strategy: as future recruits, they serve as security against which Universities can secure loans; they are quite publicly presented, amazingly, as superficial air-heads who are easily dazzled by ‘shiny buildings’ when making life-changing decisions; they are expected to be only ever motivated as utility-maximisers by the promise of future earnings in their choices and expectations and satisfactions (giving rise to a weird sense of what ‘vocational‘ means in education, which is reduced to quite instrumental ideas about value for money; which doesn’t leave much space for the idea of a calling, a passion, a life’s worth of mission); and, rather importantly given the debt-leveraged nature of all this building work, as reliable rent-payers. And this disaggregation of ‘The Student’ into a dispersed range of abstract singularities facilitates in turn the re-aggregation of “student voice” and “student experience”, always and only ever spoken-for by University managers.

Some of these are purely cultural, identifying the category in certain ways unlikely to have effects upon the person occupying the role, beyond leaving them exposed to all manner of management guff. Others are purely structural, such that each fee-paying student registers as a “reliable rent-payer” regardless of their awareness or understanding of this mechanism. However, many exist unevenly between the two, disrupting the student experience just as “student experience” becomes an object of managerial intervention.

Caught between these powerful forces, disaggregated and re-aggregated through rapidly evolving cultural and structural mechanisms, we find the real people who are the students. If we understand the social ontology of the student adequately, would it be possible to conceive of a ‘student experience’ agenda which addresses them as people? Can we use the discursive room which the prominence of ‘student experience’ opens up to find ways of encouraging and facilitating student voice which represent students in their totality?

While Margaret Archer’s theoretical work is widely respected, it is often categorised as little more than an elaboration of Roy Bhaskar and a critique of Anthony Giddens. This framing leaves it secondary to Critical Realism and Structuration Theory, understandable (though limiting) in the former case and deeply inaccurate in the latter. Reading Envisioning Sociology by John Scott and Ray Bromley has left me wondering whether she should be categorised as a neo-classical British social theorist, as her work embodies many of the theoretical themes which concerned early British social theorists whose influence and ideas have largely now been forgotten.

The obvious counter-argument to this reading is that she has never, to the best of my knowledge, engaged with the work of Patrick Geddes or Victor Branford. Indeed, as we discussed in the interview here, many of her foundational influences came in a Sociology department at LSE which was profoundly shaped by Geddes losing the opportunity to become the first chair of Sociology after messing up his interview. It’s a fascinating counter-factual to imagine what the intellectual culture might have been like in a Geddes-run department and the implications this would have had for subsequent generations of PhD students imbibing those influences.

This reading is interesting because it overcomes the tendency to see Archer’s earlier and later work as separate phases, whether as a mystifying turn to the individual (by critics) or as an attempt to solve critical realism’s agency problem (by fans). What struck me when reading Envisioning Sociology was how closely the notion offered by Geddes and Branford that “Humans … are born not only into a physical environment and material heritage but also into historically specific social relations that comprise their ‘social heritage’” (loc 2803) resembles Archer’s own account of the natural, practical and social order. As they contextualise these influences on loc 2777:

In the early debates of the Sociological Society, as exemplified by the articles in the three volumes of Sociological Papers, including most of the works presented to the founding meetings of the Society in 1904, 1905, and 1906, the primary debate is between eugenics, as advocated by Francis Galton, Benjamin Kidd, and others, and civics, as advocated by Patrick Geddes, Victor Branford, and others. It was apparently a classic “nature versus nurture” debate in which the two alternative explanations of the human condition are genetics and culture. In reality, however, Geddes’s position was much more nuanced, combining the biological and social sciences and focusing on ecology, environment, and culture. To Geddes, the human condition had three major features: our biological nature as mammals within ecosystems; our social nature as the species uniquely possessed with powers of speech, culture, and spirituality; and, our activist nature, as a species, like ants, possessed with a tremendous capacity for social organization, construction, and improvement in the physical and social environment.

What makes these ideas jarring to read in the present day is how they traverse disciplinary boundaries in their attempt to engage with social, practical and natural reality in its totality:

This was the grand idea that Geddes and Branford sought to bring to sociology, a multifaceted vision of an activist society in its cultural and biological context. Their idea was grander and more sophisticated than eugenics, but they never managed to make it fit into universities because twentieth-century academia had more restrictive concepts of disciplines and expertise. Instead, a much narrower concept of sociology took root, a much narrower version of planning emerged as a separate discipline, and biology, psychology, anthropology, and education all took their separate courses.

This I would argue is precisely the promise that Archer’s social theory has begun to reclaim, integrating a developmental sociology of the individual, a meso-sociology of conflict/consensus and a macro-sociology of transformation/stability. This work contains the foundations for a radical reorientation of social theory and social science, capable of reclaiming the lost promise of the classical British social theorists. I’m concerned that its current categorisation as ‘critical realist Sociology’ means this value is unlikely to be realised.

We often think of self-narrative as something self-grounding, reflecting the truth of a person even if that truth might change over the life course. If we take issue with this, we turn to the bare objective facts of someone’s life as a counterpoint to the unreliably subjective stories they tell. This oscillation misses the important relationship between the two, as subjective and objective weave together in forming us as the person we are as a continuous outcome of the life we are leading. As Ann Oakley writes on loc 3628 of Father and Daughter:

The stories of our lives have to be condensed and elliptical because otherwise they’re boring and the central themes get lost. We must convince ourselves as well as others of logic, linearity, evolutionary progress: it was like this, it must have been, because it’s such a good story.

These narrative imperatives reflect a gap between the reality of our lives and the stories we tell about them. They only emerge because there is a steady accumulation of potential facts, a piling up of individual elements which could be ordered in many different ways. What interests me is narrative as an interface, the point at which we struggle to make sense of who we are (and have been) through the accounts we give of ourselves, internally and to external others. 

I’ve been curious for a while about the Bullet Journal system. As an obsessive practitioner of Getting Things Done, I can’t see myself starting a Bullet Journal but its framing as ‘the analogue system for a digital age’ has intrigued me since I first encountered it. The video below provides an overview of how to keep a bullet journal:

The basic ontology of a bullet journal comprises tasks, events and notes. These are incorporated into an organisational structure built around four core modules: the index, future log, monthly log and daily log. The bullet journal enables you to “track the past, organise the present and plan for the future” by providing a framework through which future plans become present commitments and past actions. If I understand correctly, it’s basically a funnel through which your plans over a six-month window get cashed out as monthly and daily priorities. The importance accorded to reflection ensures that commitments can be dropped along the way. It is a “customizable and forgiving” system for self-organisation, built around a hybrid journal which is a combination of “to-do list, sketchbook, notebook, and diary”.

I find it hard not to wonder if some of the appeal rests on paper-fetishism. This certainly plays a role in how Bullet Journal markets itself. For instance this video frames notebooks as a “creative playground” through which we “breathe life into ideas”:

I can see the appeal of having an artefact like this. Externalising your commitments into an application like Omnifocus can be a hugely effective way to organise your time, once it has become a habitual process. It can be enormously practical as well, if you’re liable to lose your bullet journal, write indecipherably or otherwise fail to exercise the physical care in relation to an artefact which a system like Bullet Journal requires. But you can’t hold your Omnifocus. You can’t flick through it. Much of this lack is aesthetic. Reliance on a digital system precludes certain experiences which an analogue system facilitates.

I wonder if there are also practical losses as well. Could some modes of reflection be foreclosed by the insubstantiality of the system? Getting Things Done as a system relies on the series: “a number of events, objects, or people of a similar or related kind coming one after another”. It reduces all our projects to the same basic ontology: an interlinked series of actionable steps through which we cumulatively bring about a substantial outcome. This reduction is what makes it so powerful. The value of Omnifocus lies in it giving us powerful tools through which to calibrate this reduction. But it also carries the risk of eviscerating the lived meaning of these projects, particularly when enacted through a digital system. This problem is inherent to the moral psychology of the to-do list:

This is the mentality that cognitive triage generates: things are conceived as obstacles to be eliminated rather than activities to be enjoyed. As the list gets bigger, it becomes harder to see the individual ‘to do’ items as activities in their own right. They are reduced to uniform list items and nothing more. Things you enjoy and things you despise are given equal weight. The logic of the to-do list is one of commensurability and this is the problem with it. The process of triaging combined with the logic of the to-do list can lead to an evisceration of value: the potential goods internal to activities, those experiences of value that can only be found through doing, get obliterated by the need to cross items off a list.

https://markcarrigan.net/2015/01/29/productivity-culture-cognitive-triage-and-the-pseudo-commensurability-of-the-to-do-list/

Might Bullet Journals help preserve the relational richness of our projects, opening out powerful modes of engaging with them while closing down the conveniences which digital systems afford? I’d be curious to hear what others think, particularly anyone who has used Omnifocus and/or GTD before moving to a Bullet Journal. My hunch is there’s a basic trade-off here between convenience and reflection. It’s easy to slip into using Omnifocus/GTD in an unreflective way but the brute physicality of the Bullet Journal renders that largely impossible. Many might stop using their notebook as a Bullet Journal but if you stick to the practice itself, it more or less ensures you use it in a reflective way.

The individual is an unpopular category within contemporary social thought. To be concerned with the individual is taken to imply individualism, something which falls outside the range of acceptability for the cultural politics prevalent within British sociology. This is amplified by an intellectual impulse to transcend the individual as a unit of analysis, bound up within the formation of sociology as a science of social facts in distinction to the psychological domain of facts about specific individuals.

Yet my interests have always led me towards the individual as a unit of analysis, the main category within which I think and the reference point for any explanations of the social world I offer. I spent six years working on a PhD about personal morphogenesis: conceptualising what it is for someone to change and what it is for them to stay the same, as well as developing a framework through which we can study these processes in a sociological way. It’s something which I largely stopped thinking about, reflecting the exhaustion of spending so long preoccupied by it within the occasionally stultifying institutional confines of the PhD process. But it underlies pretty much everything I do, furnishing my deliberations with the essential categories through which I analyse the world around me.

My claim is not only that there is a specificity to human life which is missing in most accounts, but that this detail is crucial to understanding the fine-grained aspects of how history unfolds, rather than being an enticing epiphenomenon which risks distracting us from the real business of politics and economics going on elsewhere. This is something which Ann Oakley expresses beautifully on loc 567 of her Father and Daughter:

Its credo (or theory) is that only through the lives of individuals are we really able to get a hold on all those complexities of experience and motivation which make up human history.

Far from the individual being a retreat from sociological explanation, it is a condition for any thorough explanation. There are many questions which do not invite direct reference to individual lives but all adequate explanations are consistent with the reality of such lives, holding out the promise that the abstractions in which these explanations proceed could in future be rendered concretely in a way faithful to the experience of those concerned. As Oakley goes on to write:

My heroine, the social scientist Barbara Wootton, once pointed out that: ‘Life stories are never easily told, even when their authors are genuinely concerned more with accuracy than with self-exculpation; and the biographies of those who defy the standards of their own society are doubly difficult to get straight’. Biography and autobiography are vehicles for exhibiting an age; they help us to understand processes of social change through the medium of individual lives.

This is still what I want to do. I’ve realised recently that I’m going to have to return to the themes of my PhD if I want to develop as an analyst of social life, much as part of me doesn’t want to go back to what felt like an unfinished project even after six years.

A line amongst fragments of the Greek poet Archilochus says ‘The fox knows many things, but the hedgehog knows one big thing’. This was the inspiration for Isaiah Berlin’s famous essay on the hedgehog and the fox. Berlin takes these words figuratively to illustrate a divide between two styles of thinker:

For there exists a great chasm between those, on one side, who relate everything to a single central vision, one system, less or more coherent or articulate, in terms of which they understand, think and feel – a single, universal, organising principle in terms of which alone all that they are and say has significance – and, on the other side, those who pursue many ends, often unrelated and even contradictory, connected, if at all, only in some de facto way, for some psychological or physiological cause, related to no moral or aesthetic principle

He admits himself it is an “over-simple classification” which could become “artificial, scholastic and ultimately absurd” if taken too far. But it captures something real, if only a fragment of a more complex reality. One way to flesh out its meaning might be to look for examples of each, identifying the fox’s and the hedgehog’s ways of being in the world. How are these traits embodied in actual lives, manifested in actual people making their way through the world? On loc 591 of their Envisioning Sociology, John Scott and Ray Bromley describe Patrick Geddes in these terms:

Geddes was himself chaotic and disorganized. He produced a constant stream of ideas, but had neither the inclination nor the discipline to work them out systematically or in detail. A man of enthusiasms, he flitted from one to another and worked on each only so long as he felt that he was contributing some shaping or directive insight. He was invariably interested in the next project rather than the current one, his mind racing ahead to future endeavors, and he left his disciples to undertake the systematic and detailed work that he eschewed for himself. Some of these disciples, as a result, became closely associated with his ideas and innovations, claiming that all they wrote had been derived from or inspired by Geddes.

Another example which immediately comes to mind is Leibniz, described in The Courtier and the Heretic as “the Great Gatsby of his time, always believing in the green light in the distance, the ever receding destination of all our efforts” (pg 301). This is how the author describes him on pg 91:

The number of projects that Leibniz managed simultaneously was almost always an order of magnitude greater than eight. When an idea flared in his kinetic mind, he would grab it like a torch and run until the next bright light caught his eye, and then he would add that one to the bundle in his arms, too, dropping a few others in his haste and so leaving behind a trail of smoldering visions. In the 120 volumes’ worth of material in the Leibniz archives, there are without doubt hundreds of sparkling inventions that have yet to be catalogued, let alone realized. He wrote about everything, to everybody, all the time.

Could anyone offer other examples? I’d like to collect biographical portraits of foxes and hedgehogs.

Anyone who has read my blog for a while will be aware that I use it to self-archive. As Cory Doctorow explains in this wonderful piece, it’s a mode of information storage suitable for those whose working lives revolve around the identification, evaluation and retrieval of information:

I consume, digest, and excrete information for a living. Whether I’m writing science fiction, editorials, columns, or tech books, whether I’m speaking from a podium or yammering down the phone at some poor reporter, my success depends on my ability to cite and connect disparate factoids at just the right moment.

As a committed infovore, I need to eat roughly six times my weight in information every day or my brain starts to starve and atrophy. I gather information from many sources: print, radio, television, conversation, the Web, RSS feeds, email, chance, and serendipity. I used to bookmark this stuff, but I just ended up with a million bookmarks that I never revisited and could never find anything in.

Theoretically, you can annotate your bookmarks, entering free-form reminders to yourself so that you can remember why you bookmarked this page or that one. I don’t know about you, but I never actually got around to doing this — it’s one of those get-to-it-later eat-your-vegetables best-practice housekeeping tasks like defragging your hard drive or squeegeeing your windshield that you know you should do but never get around to.

Until I started blogging. Blogging gave my knowledge-grazing direction and reward. Writing a blog entry about a useful and/or interesting subject forces me to extract the salient features of the link into a two- or three-sentence elevator pitch to my readers, whose decision to follow a link is predicated on my ability to convey its interestingness to them. This exercise fixes the subjects in my head the same way that taking notes at a lecture does, putting them in reliable and easily-accessible mental registers.

Blogs are far from the only way to produce what Doctorow describes as “a central repository of all of the fruits of my labors in the information fields”. The commonplace book is an obvious precursor to the research blog. Luhmann used a terrifyingly intricate filecard system. C. Wright Mills advocated a file or journal to keep track of ‘fringe thoughts’. Any system will entail certain constraints and affordances for your self-archiving. However, the usefulness of an archiving system will depend as much on how you use it as on which system you choose.

It occurred to me recently that my self-archiving has become inconsistent. Whereas I went through a phase of putting everything on the blog, often leading to five or six posts per day, it’s now spread across a number of systems:

  1. Highlights and notes in Amazon Kindle
  2. Ideas grouped together in talks on Artefact cards
  3. Resources archived in a number of e-mail folders
  4. Points to explore archived in Notability
  5. Ideas placed directly into ongoing writing

The first point troubles me because I despise Amazon yet become more dependent upon them with each day I use this system. The last point troubles me because I tend to over-write. I find producing words relatively easy and it’s only in the few years since my PhD that I’ve learned to edit myself properly. ‘Fringe-thoughts’ which immediately find expression in ongoing writing might therefore end up being lost.

The urge to capture everything might seem obsessive. However, there’s something genuinely exciting about the idea of building a living archive of your thought over a period of decades. This blog has been active since 2010, encompassing 3426 posts over almost 7 years. Looking back on it, I’m struck by how much my thought has changed in that time but also how many continuities there are. Things I struggled to express years ago are now clearly defined questions I’m addressing in my research.

Perhaps I need to put more effort into this, embracing intellectual self-archiving as a commitment rather than merely a habit. Though it seems possible that simply having written this post, articulating the issues and archiving it in my ‘outboard brain’, might be sufficient to change my practice. It often is and that’s one of many things which is so engrossing about intellectual self-archiving.

In today’s Guardian, Neal Lawson offers a cautious reading of Corbyn’s Labour, accepting the ascendancy of the left within the party but urging it to look outwards. I’m sympathetic to many of the substantive points Lawson makes in the article but there’s a rich vein of problematic assumption running through their articulation which needs to be challenged. I’m pretty sure that in Lawson’s case, the peculiar style of fin de siècle social theorising once dominant within British sociology, about which I wrote a PhD thesis, played a crucial role in consolidating this outlook.

However, the problem extends beyond those who have taken Giddens, Beck and Bauman’s diagnosis of late modernity a little too seriously. In fact, I’d suggest the popularity of the aforementioned authors was in part due to their reflecting an emerging common sense, rather than being the originators of these influential ideas and motifs. In recent years, we’ve seen this transmute into what I increasingly think of as the ideology of platform capitalism: disruption has become the last refuge of the third way.

I recognise that Lawson is as far on the left of this movement as it is possible to be, though he so uncritically reproduces some of its core axioms that it would be a mistake to identify his core ideological home as anywhere else. The combination of business and activism, profit and principle, found in his own biography is a striking expression of the ethos of New Labour. There are two core assumptions underlying his article which need to be pulled out, analysed in their own right and dispensed with:

  1. Social democracy “lost its power” because “a lack of responsiveness and heavy doses of paternalism made state socialism unpopular” while “the idea of free markets chimed with a more individualistic age”. It is a purely cultural reading of an epochal shift, with one idea ‘losing its power’ while another becomes dominant because it ‘chimes’ with the spirit of an (assumed) new age. The historical variability of how centre-left parties have struggled in recent decades, something which can’t meaningfully be considered in abstraction from the ‘modernising’ strands dominant within so many of them, finds itself reduced by Lawson to the (empirical) decline of a particular phase in the existence of a single welfare state. Explanation of this trend is replaced by a woolly historical narrative, in which one set of ideas loses to another because of a vaguely specified epochal shift. It’s pure Giddens: the collective gives way to the individual, the traditional to the modern, the secure to the flexible. It’s neither explanatory nor descriptive in any straightforward sense.
  2. The spirit of the age is “networked and collaborative” and “21st-century socialism will be participatory”. After all, “things move fast and nowhere is this truer than in politics” where, warns Lawson, we see a “swarm” which “can and will keep shifting”. The conceptual structure of this is analogous to the ‘cult’ accusations made by the Labour right: a nascent movement is reduced into a behavioural compulsion gripping a mass, driven in this case by the affordances of digital media and the susceptibility of millennials to be swept along. It’s a refusal to engage with the reality of the events taking place, reducing them into an epochal schema in order to advance a prior set of axioms about how ‘progressive’ political ends ought to be pursued. It is already decided by the analyst that the real actors are what Filip Vostal terms ‘mega-forces’ (globalisation, technology, acceleration, digital media), so the empirical actors are reduced to manifestations of these forces.

This is only a brief attempt in response to an article I largely agreed with on a practical level. But the hunch I’m increasingly driven by is that ‘networked socialism’ is a re-articulation of ‘social markets’: it’s an ideological vehicle which, though sometimes correct on substantive issues, imports the conceptual structure of the ‘third way’ into debates about the future of the left.

Does social media lead to a devaluation of introspection? This is what Nick Couldry and Andreas Hepp claim on loc 4098 of their The Mediated Construction of Reality:

The selfie stamps the marker of ‘the self’ onto whatever things a person wants to record as a way of increasing its value. But why should that have become so important recently? There are no doubt many overlapping factors at work here including the changing affordances of smartphones, but one background factor, we want to suggest, is the increasing devaluation of introspection: that is, reflecting, comparing, building the basis of a memory through organized thought that remains ‘internal’ (still unshared). Introspection, in the habit of taking selfies, gets overridden by the ‘higher’ value of generating an exchangeable trace of one’s ‘experience’ whose form is tailored exactly to the data-based needs of social media platforms.

This is an example of why I think Margaret Archer’s work on reflexivity might prove extremely powerful in making sense of how social media is reconfiguring subjectivity. Couldry and Hepp assume here a zero-sum relation between interiority and exteriority, as if the disposition to share (cultivated through repeated exposure to the incentives of the platform) necessarily implies the diminution of introspection. There is certainly a tension between these internal and external moments: it is a matter of the time available to the agent and the duration of their subsequent mental activity if nothing else. However, there are many ways in which this tension could be negotiated, reflecting characteristics of the people concerned and the situation they find themselves in.

This is what I think of as reflexive variance: the variety of ways in which individuals orientate themselves to their situations, linking self and circumstances through the generation of action trajectories. Recognising reflexive variance is something which sociology has never been good at because it is a phenomenon which sits uneasily at the intersection between the domains of psychology and sociology. It is a matter of introspection, social action and environment: the relation which obtains between them in a particular situation. It’s much easier to leave the introspective to the psychologists (who circumscribe its objects by admitting only a limited range of social referents) or to subordinate it to social action or to the environment through various theoretical devices. But the diversity with which people orientate themselves to what are empirically similar experiences will tend to get lost in this case.

There are descriptive and explanatory problems which emerge from this. However, it also facilitates cultural critique of a rather irritating sort, with identifiable trends afflicting some within a group being assumed to hold true for all members of that group (or even all groups, if the critic in question is prone to overstatement). I’ve been thinking a lot in the last couple of months about the conceptual structure which is common to many of the most prominent critics of digital media for its postulated consequences for young people. It strikes me that it rests on a denial of reflexive variance, and repudiating these critics will involve recovering the range of ways in which people respond to social media.

My relationship with the work of Zygmunt Bauman, Anthony Giddens, Richard Sennett and Ulrich Beck has been a complicated one. Discovering their work as an intellectually frustrated philosophy student led me to move sideways into a sociology department rather than starting a PhD in political philosophy. Their approach excited me, opening up the possibility that we could talk about the contemporary age in a way which captures the most intimate aspects of personal experience and connects them to sweeping accounts of historical change. However, the further I went into sociology, the more sceptical I became about the capturing and the connecting these accounts claimed to do.

They have little empirical grounding in their own right, painting complex processes in seductively broad brush strokes, despite their pose of being attuned to the bleeding edge of social change. Much of my eventual PhD was animated by a conviction that the framework of ‘late modernity’ often doesn’t help social analysis, even sometimes hindering it, offering a series of intoxicating motifs for rendering empirical findings in thematic terms rather than offering any practical conceptual instruments for analysing them. This entire body of work has been persuasively diagnosed by Mike Savage as epochal theorising:

The social sciences, and especially sociology, abound with epochalist thinking (see generally Savage 2009). We are seen to have moved, variously, to a globalised, post-modern, neo-liberal, informationalised, cosmopolitan, (and so forth) world order. Such thinking saturates debates about social change and incites an almost constant agitation for detecting new kinds of epochal change and transformation which makes our contemporary times different from anything that comes before.

https://stratificationandculture.wordpress.com/2014/07/01/sociological-ruminations-on-piketty/

In an earlier article, Savage and Burrows describe this as a “kind of sociology which does not seek to define its expertise in terms of its empirical research skills, but in terms of its ability to provide an overview of a kind that is not intended to be tested by empirical research”.

The manner in which these accounts capture the intellectual attention space, at least under the peculiar epistemic condition of the accelerated academy, renders them much more problematic than would otherwise be the case. These bodies of work become crucial intellectual reference points which enjoy an influence that vastly exceeds their intellectual merit: e.g. the relatively recent Liquid Modernity has received over 11,000 citations, more than four times as many as the much older and vastly superior Legislators and Interpreters. They exercise a gravitational pull over the field of empirical research, even when they’re remarkably ill-suited for this purpose.

But perhaps I’ve been too harsh. In this paper Simon Susen makes a casual remark that “One could hardly think of a more ambitious and timely challenge than the task of accounting for the distinctiveness of the contemporary age”. I realise that I agree with him, even if I continue to take issue with many of the attempts that have been made to do this. We deserve better accounts of the distinctiveness of the contemporary age, even if the conditions within which we work make it difficult to develop them.