In From Counterculture to Cyberculture, Fred Turner analyses how digital technology came to be seen as capable of liberating the individual, freeing them from the shackles of petty attachments to organisations and places. This is a complex story but it’s one in which cultural entrepreneurs figure prominently, carving out modes of living which later percolated through the emerging cyberculture as ideals to be imitated. One such early figure was Nicholas Negroponte, founder of the MIT Media Lab, described on loc 2677:

As LSD and a beat-up school bus had once freed Kesey to roam the American landscape with a tribe of friends, so digital technologies now allowed Negroponte to turn work into play. “Some of us enjoy a privileged existence where our work life and our leisure life are almost synonymous,” he told Brand. “More and more people I think can move into that position with the coming of truly intimate technology.”

The personal charisma of a figure like Negroponte plays an important part in their coming to serve as an exemplar, embodying a desirable form of life which invites explanation in terms of emerging notions of digitally-driven social change and in turn contributes to these changes through cultural elaboration. From loc 2685:

If the Lab demonstrated the way a “wired” world might look, then Negroponte was the image of the social possibilities such a world might offer. Mobile, wealthy, handsome, completely networked in both the technological and the political sense, Negroponte was a new kind of man. As an echo of Marshall McLuhan, though, he was also the reincarnation of an earlier generation of hero. Like the Media Lab he headed, Negroponte was the living bridge between the legacy of cybernetics and the legacy of countercultural experimentation.

George Gilder was another figure who was glamorised in this way. As Turner observes on loc 3353, his hectic schedule was held up as embodying a liberated life. His peripatetic working patterns were exciting and profitable:

Much as other Wired writers had celebrated the members of the Electronic Frontier Foundation or the Global Business Network for their social connections, Bronson dwelled at length on Gilder’s hectic schedule of appearances, his migrations from tech company to tech company, and his twenty-thousand-dollar speaking fees. Gilder appeared to be a pattern of information, shuttling from node to node along a web of elite institutions. In case the reader missed the point, Bronson depicted Gilder literally speaking in the machine language of zeros and ones.

As Turner puts it on loc 3366, “Wired had offered the freelance lifestyle of a high-profile consultant as a model of the independent lifestyle ostensibly becoming available to the digital generation as a whole”. This equivocation is an important one, seemingly at least a little bit dishonest when we consider how aware Wired were of the particular demographic they were pursuing. From loc 3233-3241:

In a 1992 business plan, Rossetto and Metcalfe had described their target audience to potential investors as “Digital Visionaries.” With annual incomes averaging $75,000 a year, this group represented “The top ten percent of creators, managers, and professionals in the computer industries, business, design, entertainment, the media and education.” In the coming years, Wired reached this group with extraordinary success. Less than three years after the first issue appeared, for instance, when Wired was selling 300,000 copies a month, its readers were 87.9 percent male, 37 years old on average, with an average household income of more than $122,000 per year. In a reader survey, more than 90 percent of subscribers identified themselves as either “Professional/Managerial” or “Top Management.”

The idiots so wonderfully satirised in Nathan Barley are the children of these visionaries, sufficiently immersed in the emergent culture that any sense of transition has been lost. But the ideal of the ‘digital visionary’, something to which the ranks of digital nomads might find themselves aspiring, has a currency all the more powerful for it having lost touch with the conditions which gave rise to it.

This bullshit came from somewhere and it felt a certain way to the people who first encountered it. We can’t explain its subsequent iterations, as well as the cultural power it has exercised, without appreciating these origins. But it’s still with us, identifiable in the propensity to find certain people shiny and certain lifestyles alluring.

It intersects with other cultural trends, such as the ‘road warriors’ explored in Up In The Air, lending them an epochal lure by association, as if living life in this way leaves one at the bleeding edge of social change, bringing the new world into being through the very act of living one’s life.

I’m interested in these lifestyles, valorising acceleration and the pleasures associated with it, as forms of life which emerged under conditions of socio-technical change. They became logistically possible, financially possible for some (though not others) and represented in popular culture. What effect did this have on how people saw the options available to them in life? How has it shaped our unspoken understandings of what it is to live life ‘fully’? What political work has this inadvertently achieved?

As Turner describes on loc 2582, what now seem to many like regressive views (valorising the freelance economy as inherently liberating to workers) were at the time radical cultural sentiments, at odds with the prevailing socio-economic order:

But Barlow’s account of cyberspace also mingled the countercultural critique of technocracy with a celebration of the mobility and independence required of information workers in a rapidly networking economy: I’m a member of that half of the human race which is inclined to divide the human race into two kinds of people. My dividing line runs between the people who crave certainty and the people who trust chance…. Large organizations and their drones huddle on one end of my scale, busily trying to impose predictable homogeneity on messy circumstance. On the other end, free-lancers and ne’er-do-wells cavort about, getting by on luck if they get by at all.

In its most extreme versions, this liberation could be from embodiment itself: as Barlow once wrote, “In this silent world, all conversation is typed. To enter it, one forsakes both body and place and becomes a thing of words alone”.

This was a radical and profound freedom, particularly in the context of a post-60s counterculture that had raised itself on a hostility towards the stifling bureaucracy of post-war American life. But these lofty, even metaphysical, ideas emerged alongside networked employment, providing a powerful framing which obscured the specificity of economic relations that would soon be generalised throughout the social order. However, the challenge is to recognise this ideological function while nonetheless acknowledging the novelty of this form of life. From loc 867:

Only the freestanding individual “could find the time to think in a cosmically adequate manner,” he explained. Fuller himself lived accordingly: for most of his career, he migrated among a series of universities and colleges, designing projects, collaborating with students and faculty – and always claiming the rights to whatever the collaborations produced.

This image of “an entrepreneurial, individualistic mode of being that was far from the world of the organization man” (loc 775) is still with us. Living freely, living passionately, living everywhere. It’s a powerful ideal, floating free within our contemporary culture, with specific roots in a peculiarly American tradition.

This powerful essay by Marina Warner in the LRB echoes what I was trying to say yesterday about the perils of passion:

A university is a place where ideas are meant to be freely explored, where independence of thought and the Western ideals of democratic liberty are enshrined. Yet at the same time as we congratulate ourselves on our freedom of expression, we have a situation in which a lecturer cannot speak her mind, universities bring in the police to deal with campus protests, and graduate students cannot write publicly about what is happening (one of my students was told by management to take down the questions she raised on Facebook). Gagging orders may not even be necessary. Silence issues from different causes: from fear, insecurity, precarious social conditions and shame. It is the shame of the battered wife that allows her husband to count on her silence. I recognise, for example, the compunction in the words of Rosalind Gill in her fine article ‘Breaking the Silence: The Hidden Injuries of the Neo-Liberal University’. She nearly didn’t write the piece, she says, because she felt that ‘pointing to some of the “injuries” of British academic life had a somewhat obscene quality to it given our enormous privileges relative to most people in most of the world’. She felt ashamed to be complaining about conditions at work because she was in it ‘for the satisfaction, not the money’. The managers count on that feeling – in others, not themselves. Gill recognises that the very sense of specialness that still attaches to the idea of being a teacher or a professor – especially for women, after our late acceptance into the profession and our erratic and precarious progress within it – has stood in our way; or rather, it predisposes us to be agreeable. ‘We therefore need,’ she writes, ‘urgently to think about how some of the pleasures of academic work (or at least a deep love for the “myth” of what we thought being an intellectual would be like …) bind us more tightly into a neoliberal regime.’

Gill is describing an instance of what the American scholar Lauren Berlant calls ‘cruel optimism’. People open themselves to exploitation when the sense of self-worth that derives from doing something they believe in comes up against a hierarchical authority that is secretive, arbitrary and ruthless. Cruel optimism afflicts the colleague who agrees to yet another change of policy in the hope that it will be the last one. The cruel optimism that motivates the colleagues who undertake examining for the REF has grown out of a long, deeply held belief in the value of knowledge and the wish to pass it on – from one person to another, from one generation to the next. Yet university life has depended on this willingness of colleagues to undertake all manner of tasks above and beyond the ordinary job, reading one another’s work, writing recommendations, making nominations, translating, assessing and examining and sitting on councils and external bodies, developing analyses and plans, arranging for this and that conference or lecture or seminar series, without every moment and every act being quantified and calculated. Not everything that is valuable can be measured. But I am talking as if the chief sufferers from cruel optimism are teachers. This is of course not the case; students are above all the victims. The new managers want to pack ’em in and pile ’em high – and then neglect their interests by maltreating their teachers.

Since I first encountered the notion of a calling, I’ve found it a difficult category to expunge from my thought. It appeals to me greatly on a personal level: it points to the higher dimension of human experience which I believe tends to be ‘flattened out’ in the culture of liberal democracies. It helps us attend to the possibility of meaningful and non-alienated work, capable of giving shape to a life and providing the qualitative distinctions of worth in relation to which we can orientate ourselves existentially.

However, I find myself increasingly troubled by the appeal this has held for me, as well as by how notions of this sort might buttress exploitation under contemporary conditions. For instance, consider the ‘perils of passion’ in the video game industry, as detailed in this excellent Jacobin article:

Again and again, when you read interviews or watch industry trade shows like E3, “passion” is used as a word to describe the ideal employee. Translated, “passion” means someone willing to buy into the dream of becoming a video game developer so much that sane hours and adequate compensation are willingly turned away. Constant harping on video game workers’ passion becomes the means by which management implicitly justifies extreme worker abuse.

And it works because that sense of passion is very real. The first time that you walk through the door at an industry job, you’re taken with it. You enter knowing that every single person in the building shares a common interest with you and an appreciation for the art of crafting a game. Friendships can be built immediately – to this day, many of my best friends arose from that immediate commonality we all had on the job.

This is an incredibly enticing proposition; no one who goes in is completely immune to it, no matter how far down the totem pole of life’s interests gaming is. And there are few other jobs quite like it.

Geek culture takes such strongly held commonalities of interest and consumption far more seriously than most other subcultures. I recently wrote a piece for this publication which was, in part, about the replacement of traditional class, gender, and racial solidarity with a culture of consumption. Here, in the video game creation business, is the way capital harnesses geek culture to actively harm workers. The exchange is simple: you will work 60-hour weeks for a quarter less than other software fields; in exchange, you have a seat at the table of your primary identifying culture’s ruling class.

This isn’t a new phenomenon. Another example can be found in the comics industry, as far back as the early days of the contemporary corporations. With the original creators leaving, having scarcely been rewarded for much of the creative labour underlying the emergence of Marvel Comics, the corporation turned to “a new generation of creators, wide-eyed twenty-somethings who flashed their old Merry Marvel Marching Society badges as though they were licenses for breaking rules”. The grievances of those original creators faded from view as their creations inspired a new generation willing to work under precisely the conditions which had forced their predecessors to leave.

What about higher education? Does a sense of social science as a calling leave people continuing to chase a career which is in reality only available to a fraction of those pursuing it? Does it lead to an acceptance of precarity as a way of life, with the harsh realities of labour relations within the academy being softened by the rewarding ideal of a calling? Part of my political and theoretical problem here is that I don’t want to fall into the trap of denying the reality of passion by reducing it to an instrument of exploitation. Doing so makes it difficult to explain precisely why people persist in these fields in the way that they do. But we must conversely refuse a naively moral reading of ‘calling’ (which I see as one of a cluster of concepts, of which ‘passion’ is just another) that neglects this pernicious systemic trend.

Another way to frame this question: how seriously should we take latte art? I’ve more than once had a conversation about this practice with a barista who clearly takes great satisfaction from it. However, it’s hard not to wonder if this is a cynical attempt to introduce craft and creativity into a job which some would consider the archetype of zero-hours employment. I’d love to visit latte art competitions in an ethnographic capacity to explore how seriously the participants take these endeavours and how pervasively such events are permeated by corporate imperatives. Till that day, I’m left to speculate that this is a case of craft being encouraged by owners for reasons that are largely self-serving, even if they understand their motivations in terms of a benign concern for the well-being of their employees.