From The Refusal of Work, by David Frayne, pg 199:

What do you do? After ‘What is your name?’ and possibly ‘Where are you from?’ this is one of the first questions that strangers usually pose to one another, with convention dictating that this question is almost always an enquiry into our employment situation. ‘What do you do?’ means ‘What job do you perform?’ If we are being generous, we might say that the posing of this question is innocent enough. It represents a social custom, or an attempt to elicit information that might bring relief to the interaction by providing it with some context, pushing the conversation towards some common ground. If we are being more critical, we might view this question of ‘What do you do?’ as a naked attempt to measure the status of the other. ‘What do you do?’ means ‘Summarise in a sentence what you contribute to this world, and I will judge you on the basis of your response.’ Or ‘Are you a person worth knowing?’

I’m enjoying The Refusal of Work by David Frayne at the moment. He asks some fundamental questions about the meaning of work in contemporary society. From pg 12:

What is so great about work that sees society constantly trying to create more of it? Why, at the pinnacle of society’s productive development, is there still thought to be a need for everybody to work for most of the time? What is work for, and what else could we be doing in the future, were we no longer cornered into spending most of our time working?

I’m interested in the role that this intensification of work plays in circumscribing the lived experience of future possibilities. When people are triaging, they suffer from a death of the imagination: it becomes much more difficult to address Frayne’s question: “what else could we be doing in the future, were we no longer cornered into spending most of our time working?”

A really fascinating read on Harvard Business Review:

We found that through Uber’s app design and deployment, the company produces what many reasonable observers would define as a managed labor force. Drivers have the freedom to log in or log out of work at will, but once they’re online, their activities on the platform are heavily monitored. The platform redistributes management functions to semiautomated and algorithmic systems, as well as to consumers.

Algorithmic management, however, can create a great deal of ambiguity around what is expected of workers — and who is really in charge. Uber’s neutral branding as an intermediary between supply (drivers) and demand (passengers) belies the important employment structures and hierarchies that emerge through its software platform.

I gave a lecture earlier this week about the cultural politics of automation and how this might shape the emergence of mass automation as a primarily structural reality.  I wish I’d seen this Pew poll when I was preparing the lecture:


This sense of the inexorability of mass automation is deeply worrying. It’s possible that people might begin to see the issue differently when face-to-face with the prospect of their own technologically induced redundancy. But it’s also possible that the mechanisms I outlined in the lecture – anticipatory acceleration in the face of contracting opportunities within an occupational field, coupled with an increasing fetishisation of ‘talent’ and corresponding denigration of ‘failures’ – might work to preclude any kind of collective resistance to mass automation or agitation for policy designed to mitigate the damage to people’s lives.


The rise of the robots is a recurrent theme of popular culture. Robots are often seen as a threat, heralding the prospect of human beings being replaced by their creations, perhaps to the extent of being deemed useless by them and attacked. Underlying this fear is the reality of automation: technology becoming more adept at particular tasks and so replacing human beings for those purposes. But automation isn’t new. All manner of what we now consider mundane automated tasks were once undertaken by hand, representing whole categories of employment which have now wholly or largely vanished. For instance, our phone system no longer relies on switchboard operators, and withdrawing money no longer necessitates interaction with a bank clerk. But technological change has often produced new jobs to replace those that have been lost. Human beings are adaptable. As a 1965 NASA report put it, “Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labour”. More often than not, technology has been used alongside human beings to improve their productivity, sometimes as a skilled tool and sometimes as a tool for deskilling, taking a skilled task and breaking it into component elements. In fact, some might argue that the history of scientific management, analysing and dictating workflows to improve economic efficiency, somewhat resembles an attempt to turn human beings into machines: replacing their skilled and situational responses with a pattern imposed by outside experts.

But many are arguing that we are on the cusp of a turning point in automation. This is not a matter of hyper-intelligent robots replicating human capacities but rather of quite specific technological advances facilitating entirely new kinds of automation: what Jerry Kaplan describes as synthetic intellects and forged labourers. The first relies on advances in machine learning and cloud computing to process unprecedented quantities of data at speed, facilitating the rapid development of accumulated expertise in a particular sphere without strictly speaking ‘understanding’ it: the machine can learn from a much greater amount of data than was previously the case and the computational challenge involved in doing so can be distributed through the cloud. The second relies on developments in sensor technology to facilitate much more sophisticated engagements with the environment than have ever previously been possible, moving beyond highly specified tasks under strictly defined circumstances, allowing for entirely new workplace designs built around the needs of the robot rather than the humans working alongside it. Rather than organising warehouses in a manner comprehensible to human packers, Amazon warehouses can now order their stock in a manner that seems chaotic to workers because items are located on the basis of imperceptible connections between them (e.g. sales data for this region shows that A and B are frequently shipped together), but this allows the robot packers to work ever more efficiently:
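To make the warehouse example concrete, here is a minimal sketch of how co-shipment data might drive item placement: count how often pairs of items appear in the same order, then treat the most frequent pairs as candidates for shelving together. This is an illustration of the general idea, not Amazon’s actual (proprietary) system, and every item name and order below is invented.

```python
from collections import Counter
from itertools import combinations

def co_shipment_counts(orders):
    """Count how often each pair of items appears together in one order."""
    pair_counts = Counter()
    for order in orders:
        # Sort so each pair is counted under a single canonical key.
        for a, b in combinations(sorted(set(order)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def suggest_colocations(orders, top_n=3):
    """Return the most frequently co-shipped pairs: candidates for
    storing near one another, however arbitrary that looks to a human."""
    return [pair for pair, _ in co_shipment_counts(orders).most_common(top_n)]

# Invented example data: a handful of past orders.
orders = [
    ["phone case", "charger"],
    ["phone case", "charger", "headphones"],
    ["book", "bookmark"],
    ["phone case", "headphones"],
]
print(suggest_colocations(orders, top_n=2))
```

The point of the sketch is that the resulting layout optimises for statistical regularities in demand rather than for any classification scheme a human packer would recognise.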


One of the most radical developments in the near future is likely to be self-driving cars, such as those currently under development at Google. As Kaplan notes, vehicle accidents cause 4 million injuries and cost over $870 billion annually in the United States alone. Seen in this light, the total switch to self-driving cars looks like common sense. But it will also destroy whole categories of existing jobs upon which millions of people depend, including those such as taxi driver which have traditionally been a reliable open route into the workforce for new immigrants in many countries. However, until recently this has still been seen as a matter of automating routine jobs. What has seemingly provoked much of the controversy in recent years is the newfound recognition that what are seen to be skilled jobs will themselves be under threat. The most interesting example of this is Narrative Science’s innovative tools to automatically generate stories from structured datasets. Starting with formulaic business stories, they have since moved into sports stories and make a disturbingly convincing case that with enough sophistication about underlying narrative structures, this process can work for any appropriately structured dataset.


This might not lead to all journalists losing their jobs but it certainly does suggest the possibility that much of the routine work of journalism might be automated. On the one hand, this could be seen as unproblematic given the financial challenges newspapers and magazines face at present: if it can be done cheaper, couldn’t this help secure journalism’s future? On the other hand, it’s difficult to see how the journalistic environment won’t suffer if routine entry level jobs are eliminated. Where will the stars of the future, those with sufficient individual expertise to resist automation, get their start? How will they become known? These are questions which have been raised across a range of fields even prior to automation, as competitive pressures advantage those with sufficient financial resources and willingness to work for free. But the prospect of automation is likely to intensify this, ratcheting up the already endemic sense of uncertainty under which much of the workforce already labours.

How are people responding to the uncertainty facing occupational futures? Though the basis for his claims is somewhat unclear, I find Zygmunt Bauman’s analysis of this intuitively plausible. He suggests that the spectre of exclusion, the possibility that we won’t make the cut and we will be cast out without hope or prospects, animates a profound need for recognition. We ‘recast ourselves as commodities’ in order to cope under these circumstances, desperately seeking visibility in order to better sell ourselves against a backdrop in which, as the economist Tyler Cowen puts it, average is over. Economic polarisation is becoming the defining feature of the contemporary economy. As Cowen puts it, writing about the United States, “Demand is rising for low-pay, low-skill jobs, and it is rising for high-pay, high-skill jobs, including tech and managerial jobs, but pay is not rising for the jobs in between” (pg 40).

What Bauman is offering constitutes a speculative social psychology of how people respond to this condition of profound polarisation. If we’re aware that opportunities are contracting and that our future security is uncertain then these fears find expression in a competitive scramble to ensure we are recognised and valued: as commodities, if not necessarily as persons. He suggests that much social media behaviour can be seen as an expression of this impulse (though many, including myself, would object to generalisations about how people in general behave across social media in general). But I nonetheless think it identifies something interesting about the fame-seeking cultures that can be found across many platforms, even if there’s a tendency to “publicize successful outliers to propagate the illusion” in a way that serves the self-interest of platforms. The growing tendency to be fascinated with wealthy Vloggers, by virtue of the fact that they are wealthy through vlogging, embodies something of this. Does the fact some people have seemingly secured their own future through social media visibility help propagate the sense that this is a viable strategy for many others? By definition there can only be a handful of celebrities on any platform. What we do know is how many young people see their future as determined by forces outside of their control, insusceptible to change through the avenues of work and education that older generations claim are a pathway to success.

Could fame culture thrive alongside this fatalism? Might people pray that they will ‘be discovered’ while also despairing about a future that seems beyond their control? What Furlong and Cartmel call the ‘epistemological fallacy of late modernity’ is a recipe for anxiety: the precise way in which opportunities constrain individuals has become more obscure than ever in a culture of competitive individualism which increasingly lacks the cultural resources to make sense of classed experience, while individuals are made to feel responsible for their biographical outcomes as pure expressions of their own talent and exertion.

Talent becomes fetishised under these circumstances. We can see this when Boris Johnson mocks the 16% ‘of our species’ with an IQ below 85 and praises the 2% with an IQ over 130. We can see it in the way that Donald Trump repeatedly proclaims that “I’m, like, a really smart person”, while condemning his rivals as not smart, without explaining what this really means or how it qualifies him for office. It’s why the popularisation of developmental neuroscience is so sinister: it heralds a social imaginary in which ‘talent’ can be understood as hardwired, while still acknowledging that circumstances play a role in how these characteristics are inscribed in the human, i.e. it justifies present arrangements while licensing punitive interventions against parents who fail to raise their children in a way conducive to the genesis of talent. Looking to the more ridiculous forms this fetishisation of talent takes can help us critique the more insidious and sophisticated variants that are increasingly dominant. This case can be made in particular about the most popular forms of self-help in recent years:

And this is the most remarkable feat of The Secret: its ability to defend inequality. While the 99 per cent has become a worldwide slogan questioning the concentration of wealth, the author of The Secret offers an alternative view of the situation. ‘Why do you think that 1 percent of the population earns around 97 percent of all the money that’s being earned?’, Bob Proctor asks rhetorically in the book, answering, ‘People who have drawn wealth into their lives used The Secret, whether consciously or unconsciously. They think thoughts of abundance and wealth, and they do not allow any contradictory thoughts to take root in their mind.’

The Wellness Syndrome, Carl Cederström & André Spicer, pg 80

What makes The Secret so interesting is how nakedly metaphysical it is. The affluent do it ‘unconsciously’ and that is why they are affluent. Those who are not nonetheless have the choice to do it. If they do it correctly then they too will become affluent. If they do not then they deserve their fate. This bizarre concept of “The Secret” fascinates me because it’s easy to see how it holds the whole picture together: this latent faculty, to which we all have access, allows us to succeed. Some people are disposed to access it already (inherited privilege) but this places no restriction on others. We can all access this latent ability to be a success if only we choose to do so and then use it in the proper way. Or to put it more mundanely: “there are plenty of good jobs out there for those who want them, it’s just that people don’t try”. The idea that differential outcomes can be explained away in terms of the moral failings of individuals means we take the existing state of society and the economy for granted: there aren’t questions to be asked about social structures, just more failings to be condemned in individuals.

These are trends we can already see in contemporary society. My depressing question: how might they intensify under circumstances of widespread structural redundancy? What if the low-wage, low-skill jobs into which economic polarisation is forcing much of the workforce rapidly begin to vanish? What will happen if 47% of jobs are eventually automated? It’s possible many new categories of job might open up but, as suggested earlier, there are good reasons to be sceptical about the scale and speed of this replacement. Will those who can’t find work be seen as unfortunate victims of unavoidable change or as moral failures placing a burden on the ‘wealth creators’? Will they mobilise themselves to collectively struggle for the transformation of a social order incapable of providing them access to the good life or will they be mobilised by others through potentially surreptitious means to serve the ends of those who are already wealthy and powerful? Popular culture provides us with many dystopian representations of what this might look like. The graphic novel Lazarus paints a bleak picture of a world in which nation states have been superseded by corporations and a small number of families dominate the planet. There are those who serve the families and those who are surplus to their needs, with the former group being composed of those who have been ‘elevated’ from the latter category. The possibility of freedom from insecurity and struggle represents a powerful tool to keep the population in line, coupled with private militaries to enforce this order through violence:


There are many other dystopian representations of a possible future in which there is little work or security for the majority of the population. However there are also popular representations of worlds in which scarcity has been conquered and everyone’s needs are met: ones in which some people still strive for work and adventure because of the intrinsic rewards that these provide. These are only representations but they are the resources we inevitably draw upon, deliberately or otherwise, when imagining the possibilities ahead of us for how these trends will unfold. Both of these categories however tie utopian or dystopian outcomes to the technology itself: seeing it as either liberating us or rendering us redundant. How does this suppress the role of politics – i.e. the tension and conflict between groups with different interests – in determining the outcome of these processes? Does it also preclude the possibility that our future might see a turn against technology, as something deemed to be responsible for systematic disenfranchisement? Would a neo-luddite movement be possible? Or are people too wedded to their devices? Would powerful interests allow such a movement, given the centrality of technology firms to the contemporary economy and the new possibilities for surveillance and control which the internet opens up? These are all open questions but they’re ones which sociology can help us think through in a systematic way, even if not necessarily answer.

Much deserved Guardian coverage of the weird phenomenon that is the internet cat video festival. What grips me about things like this is not the fact that people are trying to make money from their cats, but rather that many other people are trying and failing to make money from their cats. Not unlike the aspiring professional pick up artists, though you’ll have to read this brilliant paper to see what I’m getting at.

I’m increasingly convinced that a tendency to ‘publicize successful outliers to propagate the illusion’ can be seen across the web, as a few people who make a living within a novel field wilfully co-operate with platform providers to promulgate the notion that other people could do this too. The result is inevitably a rather off-putting stampede of aspirants which must be read against the background of contracting structures of opportunity which can be seen across more established sectors within an increasingly low-wage and precarious economy.

There’s an interesting BBC programme about the rise of Vloggers which has left me thinking about this. It’s very descriptive, but it’s interesting to see these people asked about what they’re doing now and how it relates to what they were doing previously.

This extremely useful little book introduced me to this consideration recently. It’s very important to my developing argument about the intensification of work: the escalation of demands placed upon workers, their mediation through the internal conversations of individual workers and the implications for how they exercise their reflexivity in the workplace. Here’s the data I’ve just been looking at about trends in the United States:




A really interesting Pew study on what seems likely to become a growing source of digital inequality. The Internet is becoming more important than ever to much job searching:

A majority of U.S. adults (54%) have gone online to look for job information, 45% have applied for a job online, and job-seeking Americans are just as likely to have turned to the internet during their most recent employment search as to their personal or professional network


Around 1/3 of Americans have looked for a new job in the past 2 years and 79% of these have consulted online resources and information. But the confidence and capacity to do this is far from evenly distributed:





This is an interesting development: there’s clearly an interest served by the announcement but the potential success of this positioning could prove influential if legal challenges to contract labour gain some traction:

Shift, an on-demand startup that helps people buy and sell cars, is looking to make employees out of its contract-based labor force. Almost 100 California-based “car enthusiasts” — what Shift calls the people who do price checks, coordinate inspections, facilitate test drives, and otherwise help with the transaction — are being given the opportunity to join the company as employees, starting December 1.

Use of contract labor by tech companies is a hot topic. Companies including Postmates, Washio, Handy, Lyft and Uber are currently being sued by workers who say they should be receiving the benefits and compensation befitting employees.

Not all on-demand companies use independent contractors — Munchery, WashUp, Alfred and Managed by Q, to name a few, use employees. Still others, such as Shyp, Instacart and Sprig, have announced intentions to transition their workforce from contract-based to employee status.

Another BuzzFeed article gives an excellent overview of a legal challenge currently being mounted by four Amazon staff over their status as contract labour:

Amazon contractors — drivers who worked for Prime Now, Amazon’s two-hour local delivery service, and were hired through a third-party contracting company — have proposed a lawsuit against the company, accusing Amazon of misclassifying them as contractors.

The drivers, their lawyer Beth Ross argues, should be classified as employees for a number of reasons, including that they work shifts rather than on a gig basis, have to wear shirts and hats with company branding, and are told by the company where to be and when. In addition, the workers are concerned that the cost of gas, tolls, and other incidental expenditures makes their total income below the legal minimum wage in California. (Amazon advertises that the drivers will make around $20 an hour; the minimum wage in California, where these workers live, is $9.)

Do you have people working for you? How do you conceive of the relationship? Are they junior colleagues for whom you provide steering in an otherwise basically collective project? Or are they subordinates for whom you provide direction and oversight as a line manager? How aware are you of their pay and conditions? How aware are you of the rights and responsibilities defined in their contract of employment? Do you see these issues as relevant to your working relationship? Recent events at Warwick have left me curious about how academic staff would respond to these questions and how, if at all, this varies between departments.

Earlier today I started reading Blacklisted, an account of the extensive blacklisting in the construction industry that was exposed by an investigation by the Information Commissioner. For those unfamiliar with the case:

In 2009, the Information Commissioner’s Office (ICO) exposed details of a large-scale surveillance operation run by a company called The Consulting Association.  This company collated files on thousands of construction workers, as well as academics and journalists, and sold the information to 44 construction companies.  The Director of The Consulting Association, Ian Kerr, was fined just £5,000 and all 44 companies escaped without penalty or punishment.

Many of these workers had their lives ruined, unable to find employment in the construction industry, blacklisted for their trade union activities or for raising health and safety concerns.

The thought I can’t shake is how archaic the technology used to implement this blacklist was. A man in an office effectively kept a ring binder with names, updated via tips from aggrieved employers supplemented by newspaper cuttings from the radical press.

I can’t be the only person who’s had the idea of algorithmic blacklisting: using social media data and natural language processing to flag up ‘problematic’ workers in order to place them on a blacklist i.e. replacing newspaper cuttings with big data.  How would we even know if this technology was implemented?
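To underline how low the technical barrier is, here is a deliberately crude sketch of the kind of keyword filter such a system might start from. A real system would presumably use trained classifiers rather than a hand-picked watch list; every term and post below is invented for illustration.

```python
# Deliberately crude: a hand-picked watch list stands in for the natural
# language processing a real system would use. All terms and example posts
# here are invented for illustration.
FLAG_TERMS = {"union", "strike", "health and safety", "tribunal"}

def flag_posts(posts, terms=FLAG_TERMS):
    """Return the posts that mention any watch-list term (case-insensitive)."""
    return [post for post in posts
            if any(term in post.lower() for term in terms)]

posts = [
    "Joined the picket line today #strike",
    "Lovely weather on site this morning",
]
print(flag_posts(posts))
```

The point of the sketch is precisely its triviality: swapping the newspaper cuttings for a social media scrape requires no exotic technology, which is what makes the question of detection so pressing.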

I see the ‘self-funded studentship’ as a sign of everything that is wrong with higher education. Take this example I just encountered. It is for a PhD student to work on a fully developed project. I’ve always understood the funding attached to such an arrangement as a quid pro quo: intellectual autonomy is sacrificed in return for the guarantee of funding. In some cases, it might be actively desirable to work as part of an established project and my impression has been the mentoring relationship can be more active and hands-on in an arrangement of this sort. In other words, it’s completely fine if you’re offering the student funding. Unfortunately such ‘self-funded studentships’ do not do this:

This project is offered on a self-funding basis. It is open to applicants with funding or those applying to funding sources.  Details of tuition fees can be found at A bench fee is also payable on top of the tuition fee to cover specialist equipment or laboratory costs required for the research.  The amount charged annually will vary considerably depending on the nature of the project and applicants should contact the primary supervisor for further information about the fee associated with the project.

Not only are tuition fees still required but there’s an additional ‘bench fee’. The student is in effect being asked to pay for the opportunity to be an unpaid research assistant for three years. It’s like auctioning off internships to the highest bidder but with the selection being made on the basis of quality & suitability (within the cohort of those able to finance this) rather than on a crudely financial basis.

I can understand why this would be attractive to the academic: you get a research assistant working for you for three years on your project without the hassle of winning funding to support them. The structural constraint is passed downwards through a hierarchy: it enables academics to pursue their projects in a difficult environment by passing these costs on to those aspiring to one day occupy the academic’s place within the occupational hierarchy.

But how on earth does this seem ok to people? Practices like this are going to proliferate over the coming years, as individually rational (though morally condemnable) responses to a structural squeeze on funding. If I’m right that they’re only going to grow with time then do we need to start pre-emptively campaigning to prohibit these arrangements? My fear is that much like ‘research internships’, not only do these reward the already privileged who are able to work for free, they’re likely to undermine the assumption that this work should be paid. It becomes much easier to justify it once the practice becomes widespread.

There are 21 self-funded studentships listed at present. It seems urgent to me that we track how these and associated phenomena are spreading as a preliminary to opposing them. I’m quite busy though – perhaps I should recruit an intern to help me with the project. After all, everyone else seems to be doing it so it must be ok.

We’ve recently seen an emerging discourse of the ‘second machine age’ considering the potential implications of advances in robots and computational technologies for employment. In a recent London Review of Books essay, John Lanchester offers an insightful overview of this issue:

What if that’s where we are, and – to use the shorthand phrase relished by economists and futurists – ‘robots are going to eat all the jobs’? A thorough, considered and disconcerting study of that possibility was undertaken by two Oxford economists, Carl Benedikt Frey and Michael Osborne, in a paper from 2013 called ‘The Future of Employment: How Susceptible Are Jobs to Computerisation?’ They came up with some new mathematical and statistical techniques to calculate the likely impact of technological change on a sweeping range of 702 occupations, from podiatrists to tour guides, animal trainers to personal finance advisers and floor sanders. It ranks them, from 1 (you’ll be fine) to 702 (best start shining up the CV). In case you’re wondering, here are the top five occupations:

1. Recreational Therapists
2. First-Line Supervisors of Mechanics, Installers and Repairers
3. Emergency Management Directors
4. Mental Health and Substance Abuse Social Workers
5. Audiologists

And here are the bottom five:

698. Insurance Underwriters
699. Mathematical Technicians
700. Sewers, Hand
701. Title Examiners, Abstractors and Searchers
702. Telemarketers

The theme is clear: human-to-human interaction and judgment is in demand, routine tasks are not. Some of the judgments seem odd: is it really the case that choreographers come in at 13, ahead of physicians and surgeons at 15, and a long way ahead of, say, anthropologists and archaeologists at 39, not to mention writers at 123 and editors at 140? Nonetheless, the paper’s methodology is sober and it makes clear just how far-ranging the impact of technological change is in white-collar as well as blue-collar work.
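Mechanically, the ranking Lanchester describes is just a sort over estimated probabilities of computerisation, one per occupation. The sketch below illustrates the idea with a handful of occupations; the numbers are illustrative approximations, not figures quoted from Frey and Osborne’s paper (which estimates them with a trained classifier):

```python
# Illustrative probabilities of computerisation for a few of the 702
# occupations; approximate values for demonstration, not quoted estimates.
p_computerisable = {
    "Recreational Therapists": 0.003,
    "Audiologists": 0.004,
    "Choreographers": 0.005,
    "Editors": 0.06,
    "Insurance Underwriters": 0.98,
    "Telemarketers": 0.99,
}

# Rank from safest (lowest probability) to most exposed (highest).
ranking = sorted(p_computerisable, key=p_computerisable.get)
print(ranking[0], "->", ranking[-1])
```

The substantive work in the paper lies in estimating those probabilities, not in the sort; the ranking is only as good as the judgments feeding it, which is where the oddities Lanchester notes come from.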

Leaving aside any specific problems we can identify with the methodology here, it nonetheless raises important questions about the future of capitalism. What makes Lanchester’s article so commendable is his insistence that this tendency is not inexorable and he calls for what might be described as a repoliticisation of an issue that has been (self-interestedly?) rendered as narrowly technical. The hyper-capitalist dystopia of vast unemployment predicated upon robotics is something which has haunted popular culture, coming to be represented in everything from 2000AD to the more recent films of Neill Blomkamp:

What Lanchester suggests is that we need to move beyond dystopian imagery in order to flesh out our heretofore entirely speculative understanding of what might happen if 47% of jobs are lost in two decades. We also need to recover the latent promise that robotics and computation might prove emancipatory, creating new possibilities for human flourishing in a world liberated from mental and physical drudgery:

A great deal of modern economic discourse takes it as axiomatic that economic forces are the only ones that matter. This idea has bled into politics too, at least in the Western world: economic forces have been awarded the status of inexorable truths. The idea that a wave of economic change is so disruptive to the social order that a society might rebel against it – that has, it seems, disappeared from the realms of the possible. But the disappearance of 47 per cent of jobs in two decades (as per Frey and Osborne) must be right on the edge of what a society can bear, not so much because of that 47 per cent, as because of the timeframe. Jobs do go away; it’s happened many times. For jobs to go away with that speed, however, is a new thing, and the search for historical precedents, for examples from which we can learn, won’t take us far. How would this speed of job disappearance, combined with extensive deflation, play out? The truth is nobody knows. In the absence of any template or precedent, the idea that the economic process will just roll ahead like a juggernaut, unopposed by any social or political counter-forces, is a stretch. The robots will only eat all the jobs if we decide to let them.

It’s also worth noting what isn’t being said about this robotified future. The scenario we’re given – the one being made to feel inevitable – is of a hyper-capitalist dystopia. There’s capital, doing better than ever; the robots, doing all the work; and the great mass of humanity, doing not much, but having fun playing with its gadgets. (Though if there’s no work, there are going to be questions about who can afford to buy the gadgets.) There is a possible alternative, however, in which ownership and control of robots is disconnected from capital in its current form. The robots liberate most of humanity from work, and everybody benefits from the proceeds: we don’t have to work in factories or go down mines or clean toilets or drive long-distance lorries, but we can choreograph and weave and garden and tell stories and invent things and set about creating a new universe of wants. This would be the world of unlimited wants described by economics, but with a distinction between the wants satisfied by humans and the work done by our machines. It seems to me that the only way that world would work is with alternative forms of ownership. The reason, the only reason, for thinking this better world is possible is that the dystopian future of capitalism-plus-robots may prove just too grim to be politically viable. This alternative future would be the kind of world dreamed of by William Morris, full of humans engaged in meaningful and sanely remunerated labour. Except with added robots. It says a lot about the current moment that as we stand facing a future which might resemble either a hyper-capitalist dystopia or a socialist paradise, the second option doesn’t get a mention.

This all raises the question of the place of sociology in a second machine age. It seems to me that we are strongly positioned to make a unique contribution to our understanding of possible futures (e.g. what might happen if 47 per cent of jobs are lost in two decades) as well as, alongside the other social sciences, fleshing out our knowledge of the conjunction of factors which might lead to each such possible future. This will involve going beyond the traditional repertoires of scholarship and communication. It might benefit from the embrace of design fiction:

Design fiction is a term first coined by Julian Bleecker and popularized by SF author Bruce Sterling, who describes it as “the deliberate use of diegetic prototypes to suspend disbelief about change” and says that it “attacks the status quo and suggests clear ways in which life might become different.”

Design fiction isn’t science fiction: it’s not simply a telling of stories set in the future, or an attempt to predict the future; instead it is a way of envisioning and interrogating possible futures based on research data, current trends, and/or technologies. Originally used primarily by product designers as a cheap alternative to prototyping new products, it has found traction as a critical tool, allowing us to see through the fog of hype and digital evangelism.

There’s an example of the form such a future-orientated sociology might take in a recent event with Deborah Lupton and John Urry at the Hawke Research Institute in Australia. In Catastrophic Futures they addressed the question of what kind of future we can expect by 2050, as well as some of the methodological and political questions such an investigation poses for sociology. There’s a podcast available here and it’s really worth a listen. It also suggests a need for sociological thinkers to help ‘join the dots’: linking together what we know across a range of fields into broader synthetic accounts that accurately convey conceptually opaque aspects of our present situation and highlight potential trajectories. John Urry’s recent book Offshoring is a good example of what this might look like.

It seems obvious to me that sociology could make an important contribution to the repoliticisation that John Lanchester calls for, but it’s not obvious to me that it will, not least because the audit-driven logic of the university militates against forms of sociological inquiry which by their nature would both transcend specialisation and include a speculative component that resists codification in ‘internationally excellent’ journal articles.

But if the worst does happen, if we see a catastrophic slide into hyper-capitalist dystopia driven by these technological advances, what place would there be for sociology then? It occurs to me that much of sociology could probably thrive quite well in a world where, as Lanchester puts it, “human-to-human interaction and judgment is in demand”: the obvious risk is that it would be an instrumentalised sociology, robbed of any critical impulse, with sociologists reduced to technicians of human capital, attendant upon the social relations of those still in employment and directed towards the problems caused by those condemned to perpetual unemployment. There might be a place for corporate ethnography but not for critique, for bounded theorising but not for expansive theory. Contrary to John Urry and Deborah Lupton in the aforementioned podcast, I think we should begin to talk about dystopias while we still can.

This powerful essay by Marina Warner in the LRB echoes what I was trying to say yesterday about the perils of passion:

A university is a place where ideas are meant to be freely explored, where independence of thought and the Western ideals of democratic liberty are enshrined. Yet at the same time as we congratulate ourselves on our freedom of expression, we have a situation in which a lecturer cannot speak her mind, universities bring in the police to deal with campus protests, and graduate students cannot write publicly about what is happening (one of my students was told by management to take down the questions she raised on Facebook). Gagging orders may not even be necessary. Silence issues from different causes: from fear, insecurity, precarious social conditions and shame. It is the shame of the battered wife that allows her husband to count on her silence. I recognise, for example, the compunction in the words of Rosalind Gill in her fine article ‘Breaking the Silence: The Hidden Injuries of the Neo-Liberal University’. She nearly didn’t write the piece, she says, because she felt that ‘pointing to some of the “injuries” of British academic life had a somewhat obscene quality to it given our enormous privileges relative to most people in most of the world’. She felt ashamed to be complaining about conditions at work because she was in it ‘for the satisfaction, not the money’. The managers count on that feeling – in others, not themselves. Gill recognises that the very sense of specialness that still attaches to the idea of being a teacher or a professor – especially for women, after our late acceptance into the profession and our erratic and precarious progress within it – has stood in our way; or rather, it predisposes us to be agreeable. ‘We therefore need,’ she writes, ‘urgently to think about how some of the pleasures of academic work (or at least a deep love for the “myth” of what we thought being an intellectual would be like …) bind us more tightly into a neoliberal regime.’

Gill is describing an instance of what the American scholar Lauren Berlant calls ‘cruel optimism’. People open themselves to exploitation when the sense of self-worth that derives from doing something they believe in comes up against a hierarchical authority that is secretive, arbitrary and ruthless. Cruel optimism afflicts the colleague who agrees to yet another change of policy in the hope that it will be the last one. The cruel optimism that motivates the colleagues who undertake examining for the REF has grown out of a long, deeply held belief in the value of knowledge and the wish to pass it on – from one person to another, from one generation to the next. Yet university life has depended on this willingness of colleagues to undertake all manner of tasks above and beyond the ordinary job, reading one another’s work, writing recommendations, making nominations, translating, assessing and examining and sitting on councils and external bodies, developing analyses and plans, arranging for this and that conference or lecture or seminar series, without every moment and every act being quantified and calculated. Not everything that is valuable can be measured. But I am talking as if the chief sufferers from cruel optimism are teachers. This is of course not the case; students are above all the victims. The new managers want to pack ’em in and pile ’em high – and then neglect their interests by maltreating their teachers.

Since I first encountered the notion of a calling, I’ve found it a difficult category to expunge from my thought. It appeals to me greatly on a personal level: it points to the higher dimension to human experience which I believe tends to be ‘flattened out’ in the culture of liberal democracies. It helps us attend to the possibility of work that is meaningful and non-alienated so as to give shape to a life and provide the qualitative distinctions of worth in relation to which we can orientate ourselves existentially.

However I find myself increasingly troubled by the appeal this has held for me, as well as how notions of this sort might buttress exploitation under contemporary conditions. For instance consider the ‘perils of passion’ in the video game industry, as detailed in this excellent Jacobin article:

Again and again, when you read interviews or watch industry trade shows like E3, “passion” is used as a word to describe the ideal employee. Translated, “passion” means someone willing to buy into the dream of becoming a video game developer so much that sane hours and adequate compensation are willingly turned away. Constant harping on video game workers’ passion becomes the means by which management implicitly justifies extreme worker abuse.

And it works because that sense of passion is very real. The first time that you walk through the door at an industry job, you’re taken with it. You enter knowing that every single person in the building shares a common interest with you and an appreciation for the art of crafting a game. Friendships can be built immediately – to this day, many of my best friends arose from that immediate commonality we all had on the job.

This is an incredibly enticing proposition; no one who goes in is completely immune to it, no matter how far down the totem pole of life’s interests gaming is. And there are few other jobs quite like it.

Geek culture takes such strongly held commonalities of interest and consumption far more seriously than most other subcultures. I recently wrote a piece for this publication which was, in part, about the replacement of traditional class, gender, and racial solidarity with a culture of consumption. Here, in the video game creation business, is the way capital harnesses geek culture to actively harm workers. The exchange is simple: you will work 60-hour weeks for a quarter less than other software fields; in exchange, you have a seat at the table of your primary identifying culture’s ruling class.

This isn’t a new phenomenon. Another example can be found in the comics industry, as far back as the early days of the contemporary corporations. With the original creators leaving, having scarcely been rewarded for much of the creative labour underlying the emergence of Marvel Comics, the corporation turned to “a new generation of creators, wide-eyed twenty-somethings who flashed their old Merry Marvel Marching Society badges as though they were licenses for breaking rules”. The grievances of those original creators faded from view as their creations inspired a new generation willing to work under precisely the conditions which had forced their predecessors to leave.

What about higher education? Does a sense of social science as a calling leave people continuing to chase a career which is in reality only available to a fraction of those pursuing it? Does it lead to an acceptance of precarity as a way of life, with the harsh realities of labour relations within the academy being softened by the rewarding ideal of a calling? Part of my political and theoretical problem here is that I don’t want to fall into the trap of denying the reality of passion by reducing it to an instrument of exploitation. Doing so makes it difficult to explain precisely why people persist in these fields in the way that they do. But conversely we must refuse a naively moral reading of ‘calling’, which I see as one of a cluster of concepts of which ‘passion’ is another, if that reading leads us to neglect this pernicious systemic trend.

Another way to frame this question: how seriously should we take latte art? I’ve more than once had a conversation about this practice with a barista who clearly takes great satisfaction from it. However, it’s hard not to wonder if this is a cynical attempt to introduce craft and creativity into a job which some would consider the archetype of zero-hours employment. I’d love to visit latte art competitions in an ethnographic capacity, to explore how seriously the participants take these endeavours and how pervasively such events are permeated by corporate imperatives. Till that day, I’m left to speculate that this is a case of craft being encouraged by owners for reasons that are largely self-serving, even if they understand their motivations in terms of a benign concern for the well-being of their employees.