This is the fifth of Walter Benjamin’s thirteen rules for writing. I would love to know more about what this meant in practice to him. How often did he record his ideas? Where did he record them? How did their quantity and quality wax and wane in different circumstances? My conviction that blogging constitutes a technology of scholarly attentiveness rests on its capacity to habituate this practice.

The term ‘curation’ has got a bad press in recent years. Or rather the use of the term beyond the art world has. To a certain extent I understand this but I nonetheless always feel the need to defend the term. There are a few reasons for this:

  • In a context of cultural abundance, selection from variety becomes important within a whole range of contexts. Inevitably, it is something most people within these contexts will do much of the time. But ‘curation’ is becoming a specialised activity, even if detached from a specific social role.
  • I’m prone to thinking of what I do, at least some of the time, as curation. I spend quite a lot of time each week sorting through mailing lists, newsletters, websites, blogs and social media to identify relevant content for The Sociological Review’s Twitter and Facebook feeds. This is 46 social media posts per day. I’ve also shared something on Sociological Imagination daily for almost seven years. I don’t particularly care what anyone else calls it but, as far as I’m concerned, doing it effectively is a skilled activity and ‘curation’ is the term I’ve taken to using.
  • The modern sense of the word ‘curation’ rests on a specific set of institutional arrangements which are themselves relatively recent. The word has a longer history, emerging from the Latin curator (“overseer, manager, guardian”) and what many construe as a misapplication could just as easily be taken as a further shift in its use. Language is dynamic and the anti-‘curation’ rhetoric is an attempt to police its change, albeit not a particularly significant or pernicious one.

Ultimately, I don’t care if people reject this use of the term ‘curation’. I do care if people reject what the term ‘curation’ comes to designate. I don’t dispute it is often used in a vacuous way, but it is not always used this way. It is nebulous and modish but the terms which emerge in relation to socio-cultural transformations often are.

It’s the socio-cultural changes which interest me, the abundance digitalisation is giving rise to and the epistemic fog which emerges as a result. To talk of ‘curation’ is a facet of that conversation and if people want to reject its use, I hope they’ll offer an alternative language for talking about selection from abundance as an institutionalised function within digital capitalism.

I just came across this student essay in which a blog post written by Les Back was attributed to me. This isn’t the first time it’s happened and I’m unsure how to respond to it. The backlist of posts on Sociological Imagination is sprawling by this point, numbering in the low thousands. Most of these were written by me, though they vary between simple sharing of material elsewhere and substantive writing.

Regular guest-writers all have their own accounts, so their authorship is clearly marked. Our settled practice for irregular guest writers has been to have “by x x” in bold at the top of the piece, followed by a biography in bold at the bottom.

Unfortunately, the post metadata still indicates the name of the person who published the guest post on the blog, in this case Sadia Habib. Does this attribute authorship clearly enough? I’m starting to wonder if we need a different system. But I’m reluctant to create a new account for each guest author, both because it would add to the time demands of a website I already don’t have enough time for and because I really don’t want to have to go back and manually add accounts for every previous guest post. Any alternative ideas would be much appreciated.

After Michel Foucault died in 1984 at the age of fifty-seven, Pierre Bourdieu wrote a tribute in Le Monde, reflecting on his life and what could be learned from it. Bourdieu attributed to his former colleague at the Collège de France a great consistency in his intellectual work, much more than is often assumed:

The consistency of an intellectual project, and of a way of living the intellectual life. Starting with the desire to break – which explains and excuses some of his famous apothegms on the death of man – to break with the totalizing ambition of what he called the ‘universal intellectual’, often identified with the project of philosophy; but to do so in the sense of escaping the alternative between saying nothing about everything or else everything about nothing.

Political Interventions: Social Sciences and Political Action, Pg 138

To become what Foucault described as the ‘specific intellectual’ required foregoing the temptation to speak on behalf of others. What Bourdieu admired was his ambition to “substitute for the absolutism of the universal intellectual, specific works drawing on actual sources … without abandoning the broadest ambitions of thought” (p. 138). The point was not to counterpoise a neutral expertise, content only to make claims comprehensively licensed by agreement within the community of inquirers, against the sweeping grandiosity which characterised the pronouncements of a figure like Sartre.

The specific intellectual existed in a new space, beyond this dichotomy between the epistemically timid expert working in obscurity and grandiose celebrity forever on an epistemic rampage in the name of truth and justice. As Bourdieu put it, Foucault “always stubbornly rejected the division between intellectual investment and political commitment that is so common and convenient” (p. 138). In this he represented a new kind of intellectual:

For him, the critical vision was applicable first of all to his own practice, and in this respect he was the purest representative of a new kind of intellectual who has no need to mystify himself as to the motives and themes of intellectual acts, nor to foster illusions about their effect, in order to practice them in full knowledge of their cause.

Political Interventions: Social Sciences and Political Action, Pg 139

This entailed a remarkable humility, at least relative to the general intellectuals of the previous generation. His political action, “conducted with passion and rigour, sometimes with a kind of rational fury, owed nothing to the sentiment of possessing ultimate truths and values” (p. 138). He embodied the possibility of commitment without dogmatism, action without certainty. Bourdieu described how Foucault not only “rejected the grand airs of the great moral conscience” but also found them a “favourite object of laughter” (p. 138).

This was a repudiation of the universal intellectual in terms of both politics and intellectualism. The specific intellectual rejected lofty rhetoric of truth and justice for the contingent realities of situated struggles. The specific intellectual rejected generality for specificity, forsaking an assumed right to speak for a methodologically grounded sense of what one can bring to the conversation. But crucially this was done while sustaining commitment, pushing against the boundaries of received wisdom. The specific intellectual remains orientated towards the universal, while always remaining embedded within the specific.

This essay on ‘the cult of cruelty’ has some interesting points to make about the role of what danah boyd calls persistence and searchability in facilitating incivility online. It makes it possible to trawl through someone’s activity, enabling a degree of engagement with choices and representations that would not otherwise be possible:

I’ve been thinking about this a lot lately — the ways in which people exact their hurt. It’s common for people to subtweet about their hate-follows and hate-reads. Nothing distinguishes between the hate cultivated for people we know as opposed to strangers — we’re all fair game for someone else’s vitriol. People have no problem playing armchair therapist; they analyze our lives from a computer screen and then proceed to deliver play-by-play commentary on how we should live our lives based on how they live theirs. Many have come to believe that an online representation of one aspect of our lives is the complete story, the whole of our lives. Who we are, the content of our character, is reduced to what we choose to publish. The choices we make — from what we wear to how we parent and whom we love — should be obvious based on the collective’s personal experience and we’re admonished in text or in forums for “not getting it”. We crave authenticity yet we vilify others for their public missteps, for being human. People talk smack behind our backs to then kiss-kiss, hey, how are you? to our face. People leave hateful comments tearing apart our appearance: Why is she naked in every picture on Instagram…ugh! Who does she think she is? Why does she wear such unflattering clothes? If she didn’t want to hear about how bad she looks she shouldn’t be posting pictures of herself online. Apparently, being public is an open invitation for hate, and it’s frightening that groups exist on the Internet devoted to the care and feeding of that hate.

It also makes it possible to trawl back through the incivility that has been directed at us:

We live in a country that espouses free speech, but many are forced into silence in fear of the hate avalanche. In a private Facebook group, many women talk about not reading the comments of their published articles out of self-preservation. “Don’t read the comments” is a constant refrain. Women leave social media because they’re beaten down by people in fear of losing their privilege. A whole group of people has been reduced to a patronizing “snowflake” moniker because of their inability to toughen up, and it’s as if the Internet has become Darwinian in the sense that only those who hate, and those who can withstand and endure that hate, survive. A few years ago, I was the subject of a man’s ire, someone whom I believe I knew (or at least had come into contact with during my agency career, which makes the whole situation that much more unsettling), who wrote about how much he hated me because I stood up for women who had been ridiculed online because of their appearance. Fifteen years ago, a small circle of literary bloggers posted cruel blind items about me and I remember being at work, in front of my computer, reading these posts and my whole body going numb.

There’s an excellent overview of ‘hate reading’ here:

Underlying all this is a weirdly common human tendency toward “hate-reading.” Call it that for short, at least, because it also includes “hate-listening” and “hate-watching.” In short, many people seem strangely drawn to material that they know, even before they’re exposed to it, will infuriate them. And hate-reading in its purest form involves not just seeking out the aggregated fodder of Media Matters or Newsbusters, but actually going straight to the source: a conservative mainlining Keith Olbermann; a liberal recklessly exposing herself to a Rush Limbaugh monologue.

A lot of us do this, but why? No one knows for sure, but there are a few potential explanations. One is that hate-reading simply makes us feel good by offering up an endless succession of “the emperor has no clothes” moments with regard to our political adversaries. In this view, we specifically seek out the anti-wisdom of whoever appears dumbest and most hateful as a means of bolstering our own sense of righteousness. “If the commentary is dumb enough, it may actually have a boomerang effect in that it reassures us that our opponents aren’t very smart or accurate,” said Mary McNaughton-Cassill, a media psychologist at the University of Texas San Antonio.

There’s a fascinating footnote in Radio Benjamin, loc 395-410, discussing Adorno’s description of Benjamin’s ideas as ‘radioactive’:

The full sentence reads, “Everything which fell under the scrutiny of his words was transformed, as though it had become radioactive,” … Although Adorno’s metaphor uses a different register of boundary crossing, the German radioaktiv, like the English radioactive, shares with Rundfunk, or radio, a connotation of atmospheric spreading, dispersal, and uncontrolled movement across and within borders and lines of containment; the airwaves, like the air or the atmosphere, represent a quasi-invisible scene or medium of transmission. While the German does not directly imply the coincidence of these two (roughly contemporary) modes of radiality, the notion of Benjamin’s gaze, and from there his work, effecting a radioactive transformation suggests the potentially dangerous, if also exciting and new, power of radio and its power to broadcast.

Radioactive ideas effect a transformation. Viral ideas simply pass through. The logic of social media platforms too easily inclines us towards a concern for virality. What we should aim for is to use their affordances to ensure radioactivity, even if this registers much less impressively on a numerical level.

I saw the science journalist Simon Makin give an excellent talk yesterday on how social and natural scientists can make their writing clearer. He offered some valuable tips to this end, including assuming your reader is exactly as intelligent as you are but has absolutely none of your knowledge. For this reason, clarity isn’t about being simplistic: aim to clarify without simplifying.

What struck me in the discussion of drafting and redrafting was how likely this is to fall by the wayside when rushing. If you’re working to a deadline, particularly when other deadlines immediately follow it, it’s unlikely you’ll invest the time needed to do this. His description of drafting involved careful tinkering, picking and poking at a text in a way which leads to incremental improvement, as opposed to simply trying to get it out of the door so you can move on to the next demand.

This isn’t simply a matter of time. It also reflects the moral psychology of rushing. When we rush, we close down our engagement with the objects of our attention. Things that might have been deeply meaningful to us instead become obstacles to surmount. We simply can’t care about the clarity of our writing in the same way when we’re rushing.

What we are seeing with the growth of ‘fake news’ is perhaps the weaponisation of epistemology. In other words, ‘fake news’ as a construct is becoming a discursive component of our repertoire of contention. Far from entering a post-truth era, we are seeing truth becoming a mobilising device in a new way, encouraging ‘us’ to defend ourselves from ‘them’ predicated on the absolute falsity of their worldview. It’s the playing out in an epistemic register of what Chantal Mouffe, drawing on Carl Schmitt, describes as a friend/enemy distinction. Rather than the political other being an adversary to be struggled against, nonetheless regarded as legitimate, they are cast as an enemy to be destroyed. Rush Limbaugh offered a pure expression of the epistemological logic of the friend/enemy distinction in this 2009 rant:

“What this fraud, what the uncovering of this hoax, exposes,” he said, “is the corruption that exists between government and academia and science and the media. Science has been corrupted. We know the media has been corrupted for a long time. Academia has been corrupted. None of what they do is real. It’s all lies!

We live in two universes. One universe is a lie. One universe is an entire lie. Everything run, dominated, and controlled by the left here and around the world is a lie. The other universe is where we are, and that’s where reality reigns supreme and we deal with it. And seldom do these two universes ever overlap.

http://www.vox.com/policy-and-politics/2017/3/22/14762030/donald-trump-tribal-epistemology

The origins of this can be understood agnotologically: neo-sophists, with corporate funding, seeking to manufacture doubt where none previously existed. What’s being described as post-truth emerges at the intersection between corporate agnotology, political polarisation and post-democracy. The possibility to weaponise epistemology emerges coterminously with the breakdown of social solidarity. Agnotology contributes to the erosion of shared certainties in cumulative ways. It creates the conditions for what David Roberts calls tribal epistemology:

Over time, this leads to what you might call tribal epistemology: Information is evaluated based not on conformity to common standards of evidence or correspondence to a common understanding of the world, but on whether it supports the tribe’s values and goals and is vouchsafed by tribal leaders. “Good for our side” and “true” begin to blur into one.

Now tribal epistemology has found its way to the White House.

http://www.vox.com/policy-and-politics/2017/3/22/14762030/donald-trump-tribal-epistemology

What I’m suggesting is that at this point we see epistemology move from being an elite weapon of war to part of the repertoire of contention. Once Trump begins to seriously struggle, how easy is it to imagine White House statements being dismissed as ‘fake news’ by the grassroots they used this notion to mobilise? How effectively could a nascent leader use this epistemic playbook against those who have brought it into the mainstream? As Roberts points out, this is a cultural tendency which has been present in American politics for quite some time:

That is the classic, some might say naive, view. But there has always been a powerful strain in conservatism (think the John Birch Society) that resists seeing itself as a participant in the game at all. It sees the game itself, its rules and referees, as captured by the other side, operating for the other side’s benefit. Any claim of transpartisan authority is viewed with skepticism, as a kind of ruse or tool through which one tribe seeks to dominate another.

That’s the view Limbaugh and others in right-wing media have consistently articulated. And it has found an increasingly receptive audience. Over time, the right’s base — unlike the left’s fractious and heterogeneous coalition of interest groups — has become increasingly homogeneous (mostly white, non-urban, and Christian) and like-minded (traditionalist, zero-sum values).

http://www.vox.com/policy-and-politics/2017/3/22/14762030/donald-trump-tribal-epistemology

The friend/enemy distinction is, for lack of a better term, viral, at least under current conditions. Once people begin to think in these terms, it’s hard to counter, not least because reluctantly accepting the ‘rules of the game’ inevitably comes to be coded as either giving up or buying in. The reason for this is partly epistemological, because tribal epistemology destroys the possibility of syncretism: people can no longer see A and B as elements that can be combined, even if in unstable and contested ways. Instead, A and B become an absolute disjunction. One sees the social world in terms that allow no choice other than to choose between positions. The playing out of this, in the digital capitalism of 2017, rather terrifies me.

The Sociological Review has just published a thought-provoking review of Doug Porpora’s Reconstructing Sociology: The Critical Realist Approach. It gives a lucid, though brief, overview of the book’s core arguments: seven myths which afflict American sociology and seven philosophical counter-points. But what caught my attention was the account of how theoretical work can increase the discipline’s capacity for impact:

Porpora shows how critical realism adjudicates across the plethora of sociological paradigms to create new consistency, which can strengthen the validity and usefulness of our discipline. Imagine governments redefining obesity or poor mental health from medical problems into social problems, to be tackled by wide-ranging interdisciplinary research coordinated through a coherent framework of sociology and covering, for example, the related economics and politics, industries and services, healthcare and urban planning, with studies of the complex everyday life of the groups and individuals concerned.

http://journals.sagepub.com/doi/pdf/10.1177/0038026117701357

The point is overstated but it’s nonetheless important: the internal dissensus of sociology militates against policy impact. The meta-theoretical (dis)orderliness of disciplines underpins the inarguable reality that “economists and psychologists are introduced as self-evidently respected scientists, whereas sociologists, if they are included at all, seem more likely to evoke scepticism than respect”. Rather than theoretical work being a distraction from aspiring to this status, it is in actual fact a condition for it:

One defence of our discipline’s diversity is that its adaptable rich variety can embrace numerous theories, methods and topics. However, variety does not preclude coherence, and coherence does not demand narrow uniformity – like the neoclassical mantras that now monopolise economics. Medicine is a hugely varied discipline yet, fortunately for society’s healthcare, it is unified by powerful common values and theories about causal realities. By contrast, and unfortunately for society’s wellbeing, sociology is split not only by disagreements but, more seriously, by basic contradictions: positivism accepts pristine independent social facts and aims to discover general laws, whereas interpretivism sees only local contingent variety; statistics and experiments are set against ethnography; sociology is variously taken to be value-free, relativist or a moral endeavour.

http://journals.sagepub.com/doi/pdf/10.1177/0038026117701357

Bringing meta-theoretical order to sociology doesn’t entail imposition of a unified paradigm on the discipline. It simply necessitates that we “position its many valuable insights and methods in relation to one another, showing how they connect and interact within larger relations, to be more like a coherent jigsaw puzzle in progress, rather than a heap of pieces”. Can we find unifying principles, providing standards by which we might draw out connections between otherwise isolated outputs of the discipline, which respect the intellectual diversity of the sociological enterprise? Can we begin to agree on standards about what constitutes ‘better’ and ‘worse’ sociology?

The problem is that disciplines most in need of such standards, in order to provide a centripetal mechanism, prove least able to establish them. Calling for such standards doesn’t entail a final resolution of theoretical questions, as if we all have to agree on the same answers in order to move forward as a collective project. But it does entail clarity about why we are asking the questions to which we are offering different answers.


In my copy of The Vocation Lectures, edited by David Owen and Tracy B. Strong, the editors helpfully annotate Weber’s description of the occupational realities of the German academic. From pg 2:

German students used to have a Studienbuch, a notebook in which they registered the courses they were taking in their field. They then had to pay a fixed fee for each course. For staff on a full salary – that is, professors – these tuition fees were a welcome extra. For the unsalaried Privatdozent, these fees were the sole source of income. Science as a Vocation, pg 2.

Is this where the Uberfication of the University could lead? I find it easy to imagine a Digital Studienbuch, the killer app of educational disrupters, dispersed throughout the university system. Universities would still exist to manage the ‘student experience’, control the academics and provide infrastructure. Perhaps there would still be paid professors to replenish the knowledge system and train the Privatdozent. But the university wouldn’t be the platform; instead it would be one of a whole series of arenas (with declining influence as the system became embedded), facilitating extraction, by a distant technology company, from the relationship between teacher and taught.

Weber’s description of the academic career in Germany, “generally based on plutocratic premises”, seems eerily familiar from a contemporary vantage point:

For it is extremely risky for a young scholar without private means to expose himself to the conditions of an academic career. He must be able to survive at least for a number of years without knowing whether he has any prospects of obtaining a position that will enable him to support himself. Science as a Vocation, pg 2.

In the last few years, I’ve become increasingly preoccupied with the notion of ‘the literature’ and how it is invoked by scholars. I’m now rather sceptical of the way in which many people talk about ‘the literature’ and the role it plays in scholarship. It’s not that I don’t think it’s important to identify, engage with and record the existing work that has been done on a topic you’re working on. Rather I’m concerned that the invocation of its necessity serves a disciplinary function when scholarly literature proliferates at the speed which it now does, with an estimated 28,100 journals publishing 2.5 million articles a year. The problems which those who enthusiastically invoke the importance of ‘the literature’ are concerned with, such as perpetual reinvention of the wheel and a failure to recognise relevant work taking place in adjacent fields, have such obviously structural roots that to frame the solution in terms of personal practice seems to accord almost magical powers to the intellectual discipline of individual scholars.

My concern is that invoking ‘the literature’ increasingly functions as a conversation-stopper: it’s a disciplinary action which serves to curtail, though rarely halt, a line of inquiry. If we are inclined, as Richard Rorty once put it, “to keep the conversation going”, then we need to “protest against attempts to close off conversation by proposals for universal commensuration through the hypostatisation of some privileged set of descriptions” (377). Or in other words, we need to reject the idea that there’s only one way to talk about the topic in question. This is what the invocation of ‘the literature’ does, usually implicitly though sometimes explicitly. It implies a unified body of work which must be the reference point for scholarship on a given topic, even if the intention is to break away from it. In many cases, there’s perhaps no such unity in the first place, its apparent coherence underwritten by the way the most influential figures within the field have talked about ‘the literature’, performatively bringing it into being and justifying the implication that much (potentially relevant) material exists ‘outside’. Judgements of salience aren’t written into the fabric of the knowledge system; they’re suffused with epistemic relativism: made from a particular standpoint, by a person with their own interests, reliant upon their own conceptual apparatus. So behind the apparent coherence we have a complex network of citation cartels, ‘unread and unloved’ publications and influential beneficiaries of Matthew effects.

My point is not to dispute the value of reading and engaging with literature. I only want to situate invocations of ‘the literature’, made by people struggling with the problems of scholarly abundance, in relation to others similarly struggling with those problems. The idea of one definitive point of orientation becomes fetishistic when we all suffer from the vertigo of the accelerated academy. From Sustainable Knowledge by Robert Frodeman, loc 1257:

I feel like I am drowning in knowledge, and the idea of further production is daunting. Libraries and bookstores produce a sense of anxiety: the number of books and journals to read is overwhelming, with tens of thousands more issuing from the presses each day. Moreover, there is no real criterion other than whim for selecting one book or article over another. To dive into one area rather than another becomes a willful act of blindness, when other areas are just as worthwhile and when every topic connects to others in any number of ways. The continual press of new knowledge becomes an invitation to forgetfulness, to lose the forest for the trees.

Under these circumstances, our concern shouldn’t be to ensure everyone pays allegiance to ‘the literature’. We can assume this will continue to grow while everyone feels compelled to write hyperactively, continually churning out publications in the hope that they are counted rather than read. Instead, we should be asking: how do we sustain the conversation under these circumstances? What kinds of conversations should we be having? What purposes do they serve? The well-known problems of scholarly publishing mean traditional exchange in journals is becoming progressively less amenable to productive conversations, particularly across boundaries of field and discipline. How do we have conversations which serve, as Nicos Mouzelis puts it, to build bridges?

To be specific, there is little satisfaction with the present status quo where the boundaries between economics, political science, sociology and anthropology have become solid blinkers preventing interdisciplinary studies of social phenomena. But such compartmentalization will not be transcended by the facile and mindless abolition of the existing division of labour between disciplines.

[Instead we need] a painstaking process of theoretical labour that aims at building bridges between the various specializations. Such a strategy does not abolish social science boundaries: it simply aims at transforming them from impregnable bulwarks to transmission belts facilitating interdisciplinary research … what is badly needed today are more systematic efforts towards the creation of a theoretical discourse that would be able to translate the language of one discipline into that of another. Such an interdisciplinary language would not only facilitate communication among the social science disciplines, it would also make it possible to incorporate effectively into the social sciences insights achieved in philosophy, psychoanalysis or semiotics.

Sociological Theory: What Went Wrong? Diagnosis and Remedies, by Nicos Mouzelis

A large part of my enthusiasm for social media comes from the possibilities it offers for having these kinds of conversations. But trying to resolve the problems of the accelerated academy through an invocation of the need for disciplined practice is taking us in the wrong direction.

There’s a powerful counter-argument that can be found here by Patrick Dunleavy, concerning the importance of citation. I want to think carefully about this but my instinct would be to add two additional columns: “how scholarly abundance complicates this role” and “how might this lead us to change practice”.


In the excellent Lower Ed, Tressie McMillan Cottom reflects on the market-orientation of for-profit colleges, tending to seek a continual growth in student numbers. This growth imperative can manifest itself in marketing and recruitment outstripping teaching in institutional spending. From pg 20:

If budgets are moral documents, the fact that some financialized for-profit colleges reportedly spent 22.4 percent of all revenue on marketing, advertising, recruiting, and admissions staffing, compared with 17.7 percent of all revenue on instruction, speaks to the morals of financialization.

Since the previous Labour government kicked off the radical changes in higher education in the UK, I’ve been interested in the transformation of university marketing. The reforms created a pressure to differentiate, but to what extent did that incentivise the growth of marketing and communications at the (potential) expense of investment elsewhere? I hadn’t thought about this issue for a while but it occurs to me that the extreme end of the US for-profit sector represents an exemplar of where the market logic now taking hold in the UK could lead, in so far as linking financial performance to student numbers increases the structural importance of marketing and communication functions. How this logic plays out in practice depends on many organisational and sectoral factors which I’d like to understand better than I currently do.

How do we characterise the broader change in the sector? Cynics would see it as a distraction from the core functions of the university, with increasing resources being directed to marketing exercises with a possibly uncertain payoff in terms of recruitment. What concerns me is the competitive escalation that can arise in a (relatively) undifferentiated sector where actors compete for scarce attention: how can universities be heard above the din? One way is to accelerate investment in marketing and communications, expanding into new arenas and further investing in staffing and systems. This is something which those staff are liable to applaud, a message that might have particular force when their function is in the ascendancy within the university and they can speak with authority gleaned from work outside the sector.

But others would argue that any uncertainty could be overcome by instilling a “marketing culture” in which “return on investment of each activity is carefully weighed up”. This is how Communications Management, ‘the education specialists’, report on the findings of a project they were involved in:

Key findings

  • Over two-thirds (69%) of UK marketing directors have seen an increased investment in marketing over the past three years
  • Branding is often still not understood within the higher education sector
  • Modern students are ‘demanding customers’ looking for a response 24/7, meaning that a shift in marketing techniques is crucial
  • Social media must be handled in the right way to avoid “pushy communications” and encroaching on student space
  • Increase in senior strategic marketing appointments in Higher Education Institutions

However survey respondents – a third of the UK’s HE marketing directors – also stated that though budgets still rarely approach those in the private sector, they consider short term funding to be less important than moving to such a “marketing culture,” in which return on investment of each activity is carefully weighed up.

http://www.communicationsmanagement.co.uk/blog/he-marketing-has-become-more-credible/

There are many things to explore here. What particularly interests me is the role of professionalisation (within the sector) and external agencies (from outside the sector) in shaping the new common-sense concerning marketing in the digital university.

I just stumbled across an interesting report, Trends in Higher Education Marketing, Recruitment, and Technology, focusing on the United States context and would welcome any suggestions of reading that focuses more on the UK.

In the last couple of months, I’ve found myself reflecting on irritation. What is it? It’s one of our most recognisable reactions to the world, yet it’s hard to be precise about what it is. Is it an emotion? Is it a state of mind? Is it a reaction to the world? This is the definition which Wikipedia offers:

Annoyance is an unpleasant mental state that is characterized by such effects as irritation and distraction from one’s conscious thinking. It can lead to emotions such as frustration and anger. The property of being easily annoyed is called irritability.

There’s a whole model of the person implicit within this which I’m sceptical of: the idea that mental states manifest themselves in effects with implications for cognition, generated by propensities and generating emotions. It’s an individualised account, even if a multifaceted one, concerning something that’s deeply relational.

The most straightforward definition of irritation would be ‘something which irritates’. In one sense it’s circular, telling us nothing about what irritation is, but it captures the relationality of the reaction. We are irritated by something. We find something irritating. It involves an evaluative relation to the world, but one which, as it were, goes wrong. Far from the smoothly hermeneutic world of the post-Aristotelian philosophers, we have the Goffmanian reality of living together (in a world which frustrates our purposes).

So if irritation is being irritated by something, what is it to be irritated? To be “angered, provoked, or annoyed” or “inflamed or made raw, as a part of the body”. The second definition concerns the resolutely physical but I think it captures something important. We are irritated when we are inflamed by the world, made raw by its recalcitrance. People or circumstances irritate us when they impede our routine movement through the world. Things are not as we expect. We’re forced to recalibrate ourselves in relation to the world, pushed back into ourselves when confronted with a world that resists us, rather than easily making our way through it.

We get irritated by others when they do not act as we expect them to. We get irritated by others when they do not act as we think they ought to. In this sense, I would argue that irritation tracks declining social integration: the less agreement there is about how we ought to comport ourselves, the more likely we are to experience irritation in daily life.

What interests me is how we respond to this. If we simply make internal allowances for the fact that others may have different expectations and aspirations to ourselves, it’s easy for the irritation to dissipate. A trivial example: I find it irritating when people talk loudly in the steam room at my gym. But I also recognise that some people go there to socialise, whereas for me it’s a resolutely individual activity. Reminding myself of that fact usually leads the irritation to subside.

On the other hand, if I seek external confirmation for my reaction, it’s unlikely to subside. This is where social media comes in: the imagined interlocutor (what Danny Miller calls the ‘meta best friend’) can serve as an outlet, without the possibility of censure that arises when you share with a concrete individual who’s liable to tell you to stop obsessing and let other people be. It’s even more effective when an agent of this imagined interlocutor, someone who emerges from the background to respond definitively before fading back into it and propping up an imagined consensus, confirms that they too find this behaviour irritating.

Sharing irritation through social networks can facilitate an extreme form of what critical realists call communicative reflexivity. We find confirmation of our immediate reactions in others, rather than further interrogating our reaction internally, leading to a hardening of our reaction and a disposition to act similarly in future. I don’t think digital technology straightforwardly causes a decline in social integration, but I do think social networks can amplify personal reactions which entrench the decline by, as it were, depleting the reserves of tolerance we have for others who think about and approach life in a different way to us. This is connected to the paradox of incivility and it’s something I’d like to come back to in greater depth.

In 1988 Pierre Bourdieu chaired a commission reviewing the curriculum at the behest of the minister of national education. The scope of the review was broad, encompassing a revision of subjects taught in order to strengthen the coherence and unity of the curriculum as a whole. In order to inform this work, the commission early on formulated principles to guide their endeavour, each of which was then expanded into more substantive observations concerning their implications.

One of these stood out to me as of great contemporary relevance for the social sciences in the digital university. The principle concerns those “ways of thinking or fundamental know-how that, assumed to be taught by everyone, end up not being taught by anyone”. In other words, what are the elements of educational practice which are integral to it, and how can we ensure their successful transmission in training? These include “fundamental ways of thinking” such as “deduction, experiment, and the historical approach, as well as reflective and critical thinking which should always be combined with the foregoing”, “the specific character of the experimental way of thinking”, “a resolute valuation of qualitative reasoning”, “a clear recognition of the provisional nature of explanatory models” and “ongoing training in the practical work of research”. The commission extends this discussion to the technologies used in practice:

Finally, care must be taken to give major place to a whole series of techniques that, despite being tacitly required by all teaching, are rarely the object of methodical transmission: use of dictionaries and abbreviations, rhetoric of communication, establishment of files, creation of an index, use of records and data banks, preparation of a manuscript, documentary research, use of computerised instruments, interpretation of tables and graphs, etc.

Political Interventions: Social Science and Political Action, pg 175

This concern for the “technology of intellectual work” is one from which we could learn a lot, as is the importance placed upon “rational working methods (such as how to choose between tasks imposed, or to distribute them in time)”. It maps nicely onto what C. Wright Mills described as intellectual craftsmanship. When we consider the technologies of scholarly production – things like notebooks, word processors, index cards, post-it notes, print outs, diagrams and marginalia – our interest is in their use-in-intellectual-work. The technologies become something quite specific when bound up in intellectual activity:

But how is this file – which so far must seem to you more like a curious sort of ‘literary’ journal – used in intellectual production? The maintenance of such a file *is* intellectual production. It is a continually growing store of facts and ideas, from the most vague to the most finished.

The Sociological Imagination, pg 199-200

If we recognise this, we overcome the distinction between theory and practice. The distinction between ‘rational working methods’, ‘technology of intellectual work’ and ‘fundamental ways of thinking’ is overcome in scholarly craft. The role of the technology is crucial here: if we suppress or forget the technological, transmission of these practices is abstracted from their application, leaving their practical unfolding to be something which has to be discovered individually and privately (“ways of thinking or fundamental know-how that, assumed to be taught by everyone, end up not being taught by anyone”). But places for discussion of craft in this substantive sense have been the exception rather than the rule within the academy.

Perhaps social media is changing this. It is facilitating a recovery of technology, now finding itself as one of the first things social scientists discuss when they enter into dialogues through social networks and blogs. But it also facilitates what Pat Thompson has described as a feral doctoral pedagogy:

Doctoral researchers can now access a range of websites such as LitReviewHQ, PhD2Published and The Three Month Thesis youtube channel. They can read blogs written by researchers and academic developers e.g. Thesis Whisperer, Doctoral Writing SIG, Explorations of Style, and of course this one. They can synchronously chat on social media about research via general hashtags #phdchat #phdforum and #acwri, or discipline specific hashtags such as #twitterstorians or #socphd. They can buy webinars, coaching and courses in almost all aspects of doctoral research. Doctoral researchers are also themselves increasingly blogging about their own experiences and some are also offering advice to others. Much of this socially mediated DIY activity is international, cross-disciplinary and all day/all night.

https://patthomson.net/2014/06/16/are-we-heading-for-a-diy-phd/

There can be problematic aspects to this. But when it’s valuable, it’s at the level of precisely the unity of thinking, technology and activity which the commission advocated. Social media is helping us recover the technology of intellectual work and it’s an extremely positive development for the social sciences.

It’s a commonplace to recognise that the power of corporate actors is often invoked as a justification for their lenient treatment. After all, if the government takes action against them then everyone will suffer. But I didn’t realise this had been formally expressed, in the notion of collateral consequences put forward by Eric Holder during the Clinton administration:

One of the factors in determining whether to charge a natural person or a corporation is whether the likely punishment is appropriate given the nature and seriousness of the crime. In the corporate context, prosecutors may take into account the possibly substantial consequences to a corporation’s officers, directors, employees, and shareholders, many of whom may, depending on the size and nature (e.g., publicly vs. closely held) of the corporation and their role in its operations, have played no role in the criminal conduct, have been completely unaware of it, or have been wholly unable to prevent it.

It was a superficially plausible doctrine, offering a needed justification for refusing to prosecute cases that would obviously do more harm than good. Its point was not inaction but alternatives. However, as Matt Taibbi puts it in The Divide, the doctrine had in reality become a get-out-of-jail-free card for corporations by the time Holder served under Obama.

In his On the Ontological Mystery, Gabriel Marcel describes the experience of “an irresistible appeal which overturns the habitual perspectives just as a gust of wind might tumble down the panels of a stage set”. He is talking of a chance meeting with a stranger, but the image is a powerful one which characterises many episodes of what I think of as personal morphogenesis. Fateful moments, turning points and critical junctures often involve profound changes in the scenery of our lives. Things which we thought were solid fall apart. Suddenly what was fixed is revealed to be malleable. We realise that the background to our lives is not immutable, rather it was simply what had faded into the background. It is a sudden, dramatic and painful overturning of the strangely subtle process through which we ‘die a thousand deaths’, to use Roberto Unger’s phrase, as congealing layers of habit obscure our own agency.

I’m fascinated by these fateful moments because they are central to understanding agency. Their mysterious dynamics hold the secrets of our dual nature, free but always constrained, capable of choice but driven by automaticity. To adequately address the ontology of such fateful moments entails that we are careful about their epistemology. The mere fact of a moment being deemed fateful by a subject does not make it so. The poetics of ‘turning points’ often blind us to the mundane realities that preceded them, as the dramatic moment when the sense of our life ‘tumbles down like the panels of a stage set’ only came about because of many unnoticed gusts of wind that gradually eroded the foundations of this experienced order. 

It might sound voluntaristic to be concerned with these sudden dizzying encounters with freedom, but it’s precisely in such moments that we can be face to face with the recalcitrance of reality. Best laid plans go awry, people and things resist our demands and the order we sought to impose on the world proves to be a hope, rather than a blueprint. An adequate phenomenology of ‘fateful moments’ must be orientated to the past, as well as the future. What renders these moments fateful is being torn between the two, rather than habitually chugging along as past investments propel us through present circumstances and into an expected future.

Investigating fateful moments can help elucidate this strained character of agency, forever caught between past and future, blind to the full range of opportunities and constraints confronted in the present. But fateful moments aren’t reducible to agency. They are something relational, multifaceted and dynamic. For this reason, they can also be profoundly macro-sociological in origin. Sarah Bakewell’s wonderful At the Existentialist Café, which incidentally introduced me to the Marcel text I opened this discussion with, offers a compelling account of how the grand sweep of historical events can reshape the lives of those caught within them. At risk of stating the obvious, wars are amongst many other things a terrifying social machinery for generating fateful moments. A concern for fateful moments does not represent a personalist myopia, but rather an ambition to stitch together the tapestry of social explanation from the most intimate aspects of personal experience through to the most dramatic instances of systemic change.

In an interesting chapter Frederic Vandenberghe explores the role of the individual in Bourdieu’s Sociology, as well as the critiques which Margaret Archer and Bernard Lahire make of it. His intention is to respond to a sociology he sees as hegemonic by developing a post-Bourdieusian theory of the social world that is not anti-Bourdieusian. His project, as I understand it, derives from a sense that Bourdieu’s sheer influence is distortive, polarising debate in a way that steers it away from concern with better or worse sociology to more or less accurate interpretations of the master.

How accurate is Vandenberghe’s account of Bourdieu’s influence? His 536,230 citations certainly offer quantitative evidence of this influence, but the claim that Bourdieu’s sociology is hegemonic seems more contentious to me. Nonetheless, he’s surely correct that the combination of its influence, diffusion and systematicity make it a force to be reckoned with. Or rather a force that must be reckoned with, a reference point that is difficult, if not impossible, to ignore.

Both Archer and Lahire were deeply influenced by Bourdieu. My interview with Archer here explores his influence on her thinking, as well as her time working with him as a post-doc in the early 60s. Meanwhile, as Vandenberghe puts it, Lahire’s sociology is so “thoroughly Bourdieusian that he could well be considered the heterodox successor to the master (Loïc Wacquant being the official one)”. Both have worked at the intersection of sociology and psychology in recent years, with Lahire taking inspiration from Durkheim while Archer has looked to American pragmatism for intellectual resources. Vandenberghe argues that their work represents a social psychology of a new kind: orientated to “how groups, large and small, behave within the individual mind” rather than “how individuals behave in small groups”. Their shared unit of analysis is the life, understood biographically, as a movement through the world constituted through choices. But the dissimilarity arises because Archer’s focus concerns how future projects shape present actions, whereas Lahire explains the present and the future in terms of past “dispositions and their activation in particular contexts in the present”. As he puts it, “His actors are pushed by their dispositions, while hers are pulled forward by their projects”.

From Vandenberghe’s exposition, it seems that Lahire’s critique of the concept of habitus resembles Archer’s in some ways: he “accuses Bourdieu of abusively generalising a particular model that only holds in exceptional situations (such as traditional societies and total institutions)”. But he makes the same critique of the concept of field, “accusing Bourdieu of transforming a regional model into a general theory of the social world”. Instead he offers an account of the individual as “like a crumpled sheet or a rumpled rag”, with social space in all its dimensions unevenly folded up inside of them. Not unlike Archer, he sees what Bourdieu regarded as a marginal condition (the cleavage of the habitus) to instead be a general characteristic, at least under certain social and cultural conditions.

His exposition of Archer is excellent, unsurprisingly, as he is one of the theorists most deeply conversant with her body of work as a whole. The slight exception to this is the latent teleology he reads into the concept of reflexivity, ignoring the extent to which we all practice each of these modes to varying degrees in everyday life. Oddly, he offers precisely this recognition as a suggestion of how her account of reflexivity can be improved, with his accusation of a “kind of disguised personality test” being an incisive critique of how her work on reflexivity is chronically misread, even by its advocates.

I agree with him however that Archer downplays the role of cultural structures, seeing them as something which “structures the situation from outside, not from inside in the form of subconscious schemes of perception, judgement and interpretation that prestructure the world and canalize action, excluding some options even before the actor becomes conscious of the situation”. His suggestion that we investigate empirically how the relative balance of reflexivity and disposition operates in particular action situations is one I find extremely plausible, perhaps demanding that we need methods other than the interview, as well as overcoming the relative neglect of situated embodied action within Archer’s work.

It’s an interesting chapter which I highly recommend. It’s left me wanting to return to my PhD, as well as investigating Lahire in greater depth. It strikes me that I’ve actually done something akin to what Vandenberghe advocates, synthesising Archer and Lahire, without actually having read Lahire. My curiosity demands that I establish whether or not this is the case.