In 1988 Pierre Bourdieu chaired a commission reviewing the curriculum at the behest of the minister of national education. The scope of the review was broad, encompassing a revision of the subjects taught in order to strengthen the coherence and unity of the curriculum as a whole. To inform this work, the commission early on formulated principles to guide its endeavour, each of which was then expanded into more substantive observations concerning its implications.

One of these stood out to me as of great contemporary relevance for the social sciences in the digital university. The principle considers those “ways of thinking or fundamental know-how that, assumed to be taught by everyone, end up not being taught by anyone”. In other words: which elements of educational practice are integral to it, and how can we assure their successful transmission in training? These include “fundamental ways of thinking” such as “deduction, experiment, and the historical approach, as well as reflective and critical thinking which should always be combined with the foregoing” and “the specific character of the experimental way of thinking”, “a resolute valuation of qualitative reasoning”, “a clear recognition of the provisional nature of explanatory models” and “ongoing training in the practical work of research”. The commission extends this discussion to the technologies used in practice:

Finally, care must be taken to give major place to a whole series of techniques that, despite being tacitly required by all teaching, are rarely the object of methodical transmission: use of dictionaries and abbreviations, rhetoric of communication, establishment of files, creation of an index, use of records and data banks, preparation of a manuscript, documentary research, use of computerised instruments, interpretation of tables and graphs, etc.

Political Interventions: Social Science and Political Action, pg 175

This concern for the “technology of intellectual work” is one from which we could learn a lot, as is the importance placed upon “rational working methods (such as how to choose between tasks imposed, or to distribute them in time)”. It maps nicely onto what C. Wright Mills described as intellectual craftsmanship. When we consider the technologies of scholarly production – things like notebooks, word processors, index cards, post-it notes, printouts, diagrams and marginalia – our interest is in their use-in-intellectual-work. The technologies become something quite specific when bound up in intellectual activity:

But how is this file – which so far must seem to you more like a curious sort of ‘literary’ journal – used in intellectual production? The maintenance of such a file *is* intellectual production. It is a continually growing store of facts and ideas, from the most vague to the most finished.

The Sociological Imagination, pg 199-200

If we recognise this, we overcome the distinction between theory and practice. The distinction between ‘rational working methods’, ‘technology of intellectual work’ and ‘fundamental ways of thinking’ is overcome in scholarly craft. The role of the technology is crucial here: if we suppress or forget the technological, transmission of these practices is abstracted from their application, leaving their practical unfolding to be something which has to be discovered individually and privately (“ways of thinking or fundamental know-how that, assumed to be taught by everyone, end up not being taught by anyone”). But places for discussion of craft in this substantive sense have been the exception rather than the rule within the academy.

Perhaps social media is changing this. It is facilitating a recovery of technology, which now finds itself among the first things social scientists discuss when they enter into dialogue through social networks and blogs. But it also facilitates what Pat Thompson has described as a feral doctoral pedagogy:

Doctoral researchers can now access a range of websites such as LitReviewHQ, PhD2Published and The Three Month Thesis youtube channel. They can read blogs written by researchers and academic developers e.g. Thesis Whisperer, Doctoral Writing SIG, Explorations of Style, and of course this one. They can synchronously chat on social media about research via general hashtags #phdchat #phdforum and #acwri, or discipline specific hashtags such as #twitterstorians or #socphd. They can buy webinars, coaching and courses in almost all aspects of doctoral research. Doctoral researchers are also themselves increasingly blogging about their own experiences and some are also offering advice to others. Much of this socially mediated DIY activity is international, cross-disciplinary and all day/all night.

There can be problematic aspects to this. But when it’s valuable, it’s at the level of precisely the unity of thinking, technology and activity which the commission advocated. Social media is helping us recover the technology of intellectual work and it’s an extremely positive development for the social sciences.

One recurring theme in Brad Stone’s excellent The Upstarts is how technological assumptions encoded into legislation become focal points for conflicts with ‘disruptive’ companies. For instance, as loc 2348 illustrates, the novel dispatch system used by Uber complicated the distinction between taxis and livery cars:

Stressing that Uber cars were not hailed or even electronically hailed like taxis, the pair emphasized that Uber cars fit the legal definition of livery cars and were prearranged; it just so happened that the prearrangement occurred five minutes ahead of time instead of sixty.

But these distinctions also become ideologically loaded, with the antiquated assumptions effectively inviting us to sweep them away as part of our great disruptive project. This is something even Uber’s lawyers were moved by. From loc 2996:

She didn’t, pointing out that taxi regulations had been crafted decades before smartphones and internet ratings systems were invented. “I was personally always of the philosophy that the great companies, the PayPals of the world, don’t get scared by regulation,” she told me. “I never wanted to be the kind of lawyer that just said no.”

I’m currently reading Thomas Frank’s One Market Under God, a remarkably prescient book published in 2000 which has a lot of insight into contemporary cultures of technological evangelism. The book is concerned with what Frank sees as a transition in American life from a form of populism predicated on cultural reaction to one grounded in the worship of the market. It’s possible I’m primed to see this analysis as prescient because I’m working my way backwards through his books and One Market Under God contains the seeds of an analysis that he developed over the next sixteen years.

Nonetheless, I think we can learn much about our present circumstances by looking back to this transitional point in the roaring 90s, which saw the origin of the rightward turn of social democratic parties, mass digitalisation and the first Silicon Valley gold-rush. What I’m increasingly preoccupied by is how these events were intimately connected. In other words: how do we place the ascendancy of the technology sector in social and economic history? To my surprise, Thomas Frank’s book actually addresses this question more straightforwardly than any other I can think of apart from Platform Capitalism, though of course many accounts address these issues without systematically investigating them.

Despite the 1990s being hailed as an era of democratisation driven by a booming economy, Frank insists that we recognise that “The booming stock market of the nineties did not democratize wealth; it concentrated wealth” (loc 1973). But this chimera of continually ascending stock prices, grounded in the rampant speculation of the dot com boom, helped license an ideological transition that Frank describes on loc 2027:

both parties came around to this curious notion, imagining that we had somehow wandered into a sort of free-market magic kingdom, where ever-ascending stock prices could be relied upon to solve just about any social problem. Now we could have it all: We could slash away at the welfare state, hobble the unions, downsize the workforce, send the factories to Mexico—and no one would get hurt!

The ideological work involved in maintaining we had entered a new era of perpetual growth, beyond boom and bust, relied upon the mystique of the internet. It heralded the dawn of a new world, the end of old certainties and a constant horizon of possibility to be invoked in the face of those exhibiting an anachronistic scepticism. From loc 1659:

And yet, since the moment the Internet was noticed by the mainstream media in 1995, it has filled a single and exclusive position in political economy: a sort of cosmic affirmation of the principles of market populism. “Think of the Internet as an economic-freedom metaphor for our time,” wrote bull-market economist Lawrence Kudlow in August 1999.45 “The Internet empowers ordinary people and disempowers government.” And we were only too glad to do as Kudlow instructed us, to think of it in precisely this way. In fact, so closely did the Internet and market populism become linked in the public mind that whenever a pundit or journalist mentioned the Web, one braced oneself for some windy pontification about flexibility, or the infinite mobility of capital, or the total and unappealable obsolescence of labor, government, and any other enemy of the free-market enterprise.

Somewhat more prosaically, the companies of Silicon Valley became emblems of a new anti-elitism, with the old formalities of corporate life being replaced by an anti-hierarchical ethos that lionised the entrepreneur for their authentic living, often expressed in ‘working hard and living hard’. The practice of paying stock options in lieu of wages became a cypher for shareholder democracy, an idea which was seized upon as legitimating what were in reality vicious attacks upon the security of labour. However, as Frank points out on loc 2063, this reality was misleadingly presented in Silicon Valley as a sign of a brave new workplace culture rather than a familiar self-interest:

It may have been fun to imagine what these enchanted options could do in the service of economic democracy, but in point of fact their powers were almost always directed the other way. Options did not bring about some sort of “New Economy” egalitarianism; they were one of the greatest causes of the ever widening income gap. It was options that inflated the take-home pay of CEOs to a staggering 475 times what their average line-worker made; it was options that made downsizing, outsourcing, and union-busting so profitable. When options were given out to employees—a common enough practice in Silicon Valley by decade’s end—they often came in lieu of wages, thus permitting firms to conceal their payroll expenses and artificially inflate the price of their shares, pumping the bubble still further.17 Options were a tool of wealth concentration, a bridge straight to the nineteenth century.

What seems hugely important to me here is the recognition that the vast concentration of wealth that took place in the 1990s was deeply tied up, structurally and culturally, with the first wave of mass digitalisation brought about by the dot com bubble. The nature of that entanglement still isn’t as clear to me as I would like, but I’m increasingly confident in my claim that the analysis of digitalisation needs to be an integral part of the analysis of capitalism from the 1970s onwards.

As important as economic history is though, it’s crucial that we also understand the cultural dimensions to this process. What I really like about Thomas Frank is his commitment to taking business bullshit seriously. From loc 1787:

It is worth examining the way business talk about itself, the fantasies it spins, the role it writes for itself in our lives. It is important to pay attention when CEOs tell the world they would rather surf than pray, show up at work in Speedos rather than suits, hang out in Goa rather than Newport, listen to Stone Temple Pilots rather than Sibelius. It is not important, however, in the way they imagine it is, and for many Americans it is understandably difficult to care very much whether the guy who owns their company is a defender of family values or a rave kid. But culture isn’t set off from life in a realm all its own, and the culture of business in particular has massive consequences for the way the rest of us live.

Our contemporary discourse of ‘disruption’ and ‘innovation’ was nurtured in the business commentary of the late 1990s. By examining its origins, we can see the political context of this way of thinking and speaking about technology much more transparently than if we examine contemporary instances of it. To close with Peter Schwartz, quoted on loc 1321:

Open, good. Closed, bad. Tattoo it on your forehead. Apply it to technology standards, to business strategies, to philosophies of life. It’s the winning concept for individuals, for nations, for the global community in the years ahead.

From The Monsters of Educational Technology, by Audrey Watters, loc 563:

Why are we building learning management systems? Why are we building computer-assisted instructional tech? Current computing technologies demand neither. Open practices don’t either. Rather, it’s a certain institutional culture and a certain set of business interests that do. What alternatives can we build? What can we imagine? Can we envision a future of learner agency, of human capacity, of equity, of civic responsibility, of openness for example?

This isn’t a new idea but I’ve rarely encountered it expressed so concisely:

The idea that individuals create wealth and that all governments do is come along and tax them is what Varoufakis calls “a preposterous reversal of the truth”.

“There is an amazing myth in our enterprise culture that wealth is created individually and then appropriated by the state to be distributed.

“We are conceptualising what is happening in society as if we are an archipelago of Robinson Crusoes, everybody on an island, creating our own thing individually and then a boat comes along and collects it and redistributes it. It’s not true. We are not individual producers, we produce things collectively.”

He points to an iPhone.

“This machine, inside of it, contains technologies that were created collectively. Not only through collaboration but a lot of public funding. Every single technology in there was created by government grant.”

As you may know, executive coaching is an increasingly common phenomenon, particularly in some sectors like tech. This is how Eric Schmidt and his co-author describe the necessity of it in How Google Works loc 2440:

Whenever you watch a world-class athlete perform, you can be sure that there is a great coach behind her success. It’s not that the coach is better at playing the sport than the player, in fact that is almost never the case. But the coaches have a different skill: They can observe players in action and tell them how to be better. So why is it that in the business world coaches are so unusual? Are we all like Eric when he started at Google, so confident of ourselves that we can’t imagine someone helping us to be better? If so, this is a fallacy. 

As a business leader, you need a coach. The first ingredient of a successful coaching relationship is a student who is willing to listen and learn. Just like there are hard-to-coach athletes, there are hard-to-coach executives. But once they get past that initial reticence, they find there are always things to learn. Business coaches, like all coaches, are at heart teachers, and Bill Campbell, the best coach around, tells us he believes that management is a skill that is completely learnable.

This suggests an obvious comparison with sport, and not just in terms of the language used to describe the relationship. James Surowiecki, author of Wisdom of Crowds, draws out the connection in an interesting essay about the competitive advantage that accrues from marginal gains when performance is already at the top level:

The key part of the “performance revolution” in sports, then, is the story of how organizations, in a systematic way, set about making employees more effective and productive. This, as it happens, is something that other organizations started doing around the same timeline.

But can managerial performance really be measured in these terms? I don’t think it can, and the belief to the contrary strikes me as a revealing conceit, reflecting the culture of managerialism: a kind of moral athleticism amongst prominent CEOs in which they aspire to be all that they can be.

If we look at the same phenomenon further down the organisational ladder, we get to enforced performance reviews and sanctions ensuing from a failure to meet imposed expectations. We get to sleepless nights and diffuse anxiety saturating everyday life, all generated by concerns over ‘performance’. Coaching still exists but it becomes a very different phenomenon, as this interview I did about the sociology of work-life coaching suggests:

Coaching usually consists of individual or group meetings that continue for a few months. In the beginning of these meetings, a goal is set for the whole coaching process, and then the process continues with for example personality tests or exercises that the clients do in order to achieve the set goal. The coaches that I interviewed were often a bit vague in their answers when I asked about the specific practices of coaching. They would rather talk about ‘realising the inner potential of the individual’, though what this means specifically is rather unclear.

In general, it seems that coaching is for most part about discussing one’s hopes and realities with the coach and getting feedback for both the exercises and tests and for the plans that one has and the actions that one takes. The focus on ‘potential’ is telling of how coaching is quite oriented towards the future but at the same time relies on something that is thought to already exist within the self. As it happens, coaching concentrates on the individual. This means that all the work that is done in coaching centers on changing oneself in order to achieve the goals that one wants to achieve. 

This is reflected in the practices of coaching in the sense that they demand self-reflexivity and focus on getting to know oneself and reflecting for instance on one’s personality with the help of tests and exercises. In terms of employment, this means that questions that concern wider social structures or even organisational structures are left outside the scope of the things one needs to change. It thus begins to seem that change always starts within the individual self – and also that if there is a need for change it is the self that is at fault. In the case of unemployment then, for example, the structural reasons for unemployment are not accounted for but rather it is thought that if the individual just works hard enough to change themselves then they will also find employment – and if one is unemployed it just means that one has not yet found the ‘true self’ and the right goals that would solve the problem. In other words, if one does not find work, it is implied that this just means that one has not worked hard enough on improving oneself.

As a relational technology of the self, work coaching has to be read against the background of metricisation. It naturalises metrics and their attendant apparatus of control, scrutiny and intervention. The issue becomes a narrow one of ‘performance’ rather than one’s place over time within an organisation.

I’ve nonetheless become a bit obsessed with Bill Campbell. He turns up time and time again in business books about Silicon Valley. It turns out he was originally a football coach:

Son of a local school official, Campbell was born and raised in Homestead, Pennsylvania, near Pittsburgh. He attended Columbia University where he played football under coach Buff Donelli from 1959 to 1961. In his senior year, he was named to the All-Ivy Team. He graduated in 1962 with a bachelor’s degree in economics. In 1964, he obtained a master’s degree in education from Teachers College, Columbia University. He was head coach of Columbia’s football team, the Columbia Lions from 1974 to 1979. Prior to this he was an assistant at Boston College for six years. He met his first wife, the former Roberta Spagnola, while she was the assistant dean in charge of Columbia’s undergraduate dormitories.

He joined J. Walter Thompson, the advertising agency, then Kodak where he rose to run Kodak’s European film business. Hired by John Sculley he became Apple’s VP of Marketing, then ran Apple’s Claris software division. When Sculley refused to spin Claris off into an independent company, Campbell and much of the Claris leadership left. Since 1997, when Steve Jobs returned to Apple, Campbell has served as a corporate director on Apple’s board of directors.

Campbell became CEO of GO Corporation, a startup pioneering a tablet computer operating system. After GO was sold to AT&T Corporation in 1993, Campbell was CEO of Intuit from 1994 to 1998. Campbell announced that he would be retiring as the Chairman of the Board of Directors at Intuit starting January 2016.

Campbell is an adviser to a number of technology companies, and was elected Chairman of the Board of Trustees at Columbia in 2005.

According to CNN Money, he is worth $200 million.

To what extent is it a marker of prestige to be coached by Campbell? Is it still a status symbol for lesser executives to be coached by lesser coaches? Do these celebrity coaches and celebrity clients underwrite the demand elsewhere? Do all these coaches have top level business experience?

From Spam, by Finn Brunton, pg 89:

“Alan Solomon . . . a veteran antivirus researcher with a PhD in economics, critiqued the virus metaphor, suggesting that this medical/ biological metaphor of ‘virus’ is ‘too emotive’ . . . Instead, he proposed ‘weeds’ as a more appropriate concept for describing the threat of computer code.” 66 With “weeds” comes a very different culture of metaphors, of strong and weak ecosystems, each person cultivating their own garden every day to keep invasive species at bay. It is a much better metaphor for expressing one of the global computer network’s key points of weakness to “viral infection”: the monoculture of computers running the Microsoft Windows operating system, often poorly patched and unmaintained by users, making the network as vulnerable as the cloned Cavendish banana trees are to fungus attacks. Without overstating the influence of metaphor, it’s striking to consider how much that nomenclature might have changed the practices of security and programming around self-replicating computer code: computers as gardens rather than bodies, with diverse software populations to be tended and pruned by attentive and self-reliant users, potentially capable of weed resistance in their interdependence, with the professionals as agronomists, breeders, and exterminators rather than doctors at the cordon sanitaire. 67

An interesting case discussed on pg 85 of Unforbidden Pleasures, by Adam Phillips:

We may live in the aftermath of the myth of the Fall, and the even longer aftermath of the myth of Oedipus, but the first traffic lights were invented in the United States after the First World War. The traditional mutual accommodation travellers had been making to each other on their bikes and cars and carts was replaced by a set of lights. ‘Its purpose,’ the anthropologist James C. Scott writes, ‘was to prevent accidents by imposing an engineered scheme of coordination.’ Partly through pressure of numbers and the proliferation of vehicles, and partly through the new scientific and bureaucratic fantasies of efficiency and productivity, familiar forms of cooperation gave way to a new, technologically implemented, set of rules. People’s practical judgement was delegated to a red light. They had known when to stop, but now they were being told when to stop.

The institutionalisation of algorithmic evaluation should be seen as continuous with this much longer-standing process of using technology to impose patterns upon, and often entirely substitute for, situated judgement. It is a new means towards a long-standing end, rather than something radically discontinuous with what has come before.

In fact, recognising this poses an apparent challenge for the argument I’ve been developing about the increasing ‘cognitive load’ of digital capitalism. Surely systems which support or substitute for reflexivity actually decrease the burden upon subjects? My argument will be that the felt necessity for these systems develops in relation to their unintended consequence of ratcheting up the burden in other ways. But it’s currently rather light on detail.

One of my major irritants is technological metaphors for subjectivity, not least of all because I slip into invoking them myself when I use terms like ‘cognitive load’. The underlying idea that ‘the brain is like a computer’, as well as the complex network of associated metaphors leading from it, frustrates me because it seems so obviously to be a case of fetishising our own productions. We see ourselves in what we have made and seek to understand ourselves through the characteristics of our artefacts. But as this extract from The Innovators, by Walter Isaacson, loc 1049 illustrates, our understanding of our subjectivity furnished metaphors for the architecture of the machines which we subsequently use to understand ourselves:

Atanasoff initially considered building an analog device; his love of slide rules led him to try to devise a supersize version using long strips of film. But he realized that the film would have to be hundreds of yards long in order to solve linear algebraic equations accurately enough to suit his needs. He also built a contraption that could shape a mound of paraffin so that it could calculate a partial differential equation. The limitations of these analog devices caused him to focus instead on creating a digital version. The first problem he tackled was how to store numbers in a machine. He used the term memory to describe this feature: “At the time, I had only a cursory knowledge of the work of Babbage and so did not know he called the same concept ‘store.’ . . . I like his word, and perhaps if I had known, I would have adopted it; I like ‘memory,’ too, with its analogy to the brain.” 30

From Addiction By Design, by Natasha Dow Schüll, pg 19:

In a strategic response to growing suggestions that gambling machines are to some extent implicated in gambling addiction, the American Gaming Association released a 2010 white paper called “Demystifying Slot Machines.” Echoing the National Rifle Association’s (NRA) famous slogan— “Guns Don’t Kill People, People Kill People”— the paper asserts that “the problem is not in the products [players] abuse, but within the individuals.”  In this one- sided account, the machine is merely “the mechanism through which pre- existing psychological disturbances are expressed,” as a researcher puts it.  “What gaming critics fail to understand,” a reporter for Global Gaming Business sums up, is that “machines are simply inanimate objects.” 

From The Boy Kings, by Katherine Losse, pg 134:

That Sunday, after I’d slept off our long night, I logged in to Facebook to see an endless stream of videos that the boys had filmed at the club. In them, the boys were not chatting up or kissing girls they had met, as I had expected. Instead, they were performing an elaborate ritual only they would have the strange, cold vanity to invent, in which they would methodically chat up and reject girls that the bouncers had brought to their table. “Leave! You’re not pretty enough!” one of them seemed to say over the din of the club as he shooed the girls away in succession like so many servants. Even though I had been living in this boys’ world for almost two years, I was still a bit shocked. Their products ultimately reflected their real- life behavior. Instead of making a technology of understanding, we seemed sometimes to be making a technology of the opposite: pure, dehumanizing objectification. We were optimizing ways to judge and use and dispose of people, without having to consider their feelings, or that they had feelings at all.

The intriguing suggestion made by Losse is that these tech bros represent an epochal transformation in American alpha masculinity. She doesn’t really follow it up, but I’m completely persuaded that tech bros, as well as bro culture in general, represent something of profound sociological significance.

From The Boy Kings, by Katherine Losse, pg 13:

I liked to listen to Mark’s discussion of the product philosophy and goals at these meetings, which were to me the most fascinating part of the job: what were we trying to do, with this fledgling Internet identity registration system? “I just want to create information flow,” he said in his still nearly adolescent voice, lips pursed forward as if jumping to the next word, and everyone would nod, all cogitating in their own way about what this meant. Mark’s idea of information flow, though vague, was also too vague to be disagreed with, and even if we came up with counter- instances to a model of pure information efficiency (for example, I wondered, do I want my Social Security number to flow freely?), we knew that we weren’t supposed to disagree. Mark was our leader, for better or worse. When the meetings ended he would say either “domination” or “revolution,” with a joking flourish of a fist, and everyone would laugh, nervously, but with a warm and almost chilling excitement. It was like we were being given a charter, by a boy younger than most of us, to take over the world and get paid to do it.

From Gates, by Stephen Manes and Paul Andrews, loc 9378:

Developers trying to cut deals with Microsoft often divulged their technology and/or their business plans. According to the complainants, Microsoft then used the knowledge for its own gain. The case of Go was the most widely publicized. The Silicon Valley startup, headed by ex-Lotusian Jerry Kaplan and Framework author Robert Carr, had shown Microsoft its technology for a new pen-based operating system with the understanding that Microsoft might want to write applications for it. Instead Microsoft later announced that it would adapt pen-based technology to Windows—a project headed by the chief engineer from the group that got an early look at Go’s effort. “Microsoft stretched the truth a lot,” said Dan Bricklin, co-inventor of VisiCalc and vice president of Slate, an independent developer working with both Go and Microsoft. “They’d have you believe they’ve been working as long as Go. We knew when it became more earnest. They clearly were reacting to Go and OEMs Go was dealing with.” “Stretching the truth,” on the other hand, was the way Slate’s chairman and CEO Vern Raburn described Go’s complaints. “The only thing Go did was they got Microsoft starting to think about it. Did Microsoft plagiarize? Did they lift? Did they take things? No. Absolutely not. Other than, well, what would it be like to use a pen?” Microsoft agreed: No code was stolen. And upon looking at other systems, the company tended to believe it could do better. As David Weise said of a similar scenario regarding a Micrografx product called Mirrors, “let’s just say [they] thought much better of their code than we ever did, and we didn’t steal a thing. It was just bad code.”

From The New Prophets of Capital by Nicole Aschoff, loc 730-744:

At the same time, society’s greatest inventions and innovations of the past two hundred years— rockets to the moon, penicillin, computers, the internet— were not bestowed upon us by lone entrepreneurs and firms operating in free markets under conditions of healthy competition. They were the work of institutions: CERN and the Department of Defense created the internet, while Bell Labs— a subdivision of AT&T, freed from market competition by federally granted monopoly rights— generated transistors, radar, information theory, “quality control,” and dozens of other innovations central to our epoch. 25 Nearly every advance in science, technology, and mathematics emerged from people working together at universities supported by government funding. Creativity and innovation come from many places. Companies produce influential innovations, but so do other institutions that operate outside the confines of the profit motive, competitive markets, and the bottom line.

From To Save Everything, Click Here by Evgeny Morozov. For a talk about dystopias I’m doing next month, I’m trying to consider the implications of this technology at the level of social ontology. What does it mean to see sinister possibilities inherent in ‘innovations’ like this? Is there anything we can say in the abstract about how likely these possibilities are to be realised? It strikes me that this is necessary, at least if we are to avoid an empiricist attitude of ‘wait and see’ on the one hand or the systematic suppression of technological change on the other.

Or consider a prototype teapot built by British designer-cum-activist Chris Adams. The teapot comes with a small orb that can either glow green (making tea is okay) or red (perhaps you should wait). What determines the coloring? Well, the orb, with the help of some easily available open-source hardware and software, is connected to a site called Can I Turn It On?, which, every minute or so, queries Britain’s national grid for aggregate power-usage statistics. If the frequency figure returned by the site is higher than the baseline of 50 hertz, the orb glows green; if lower, red. The goal here is to provide additional information for responsible teapot use. But it’s easy to imagine how such logic can be extended much, much further, BinCam style. Why, for example, not reward people with virtual, Facebook-compatible points for not using the teapot in times of high electricity usage? Or why not punish those who disregard the teapot’s warnings about high usage by publicizing their irresponsibility among their Facebook friends? Social engineers have never had so many options at their disposal.
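The decision rule Morozov describes is strikingly simple, which is part of what makes the extrapolation unsettling. A minimal sketch of that rule might look like the following: the function name and the polling interval are illustrative assumptions, not the actual project’s code, but the logic (compare a reported grid frequency against the 50 Hz baseline) is exactly as described above.

```python
# A hypothetical sketch of the teapot orb's decision logic, as described
# by Morozov: the orb periodically receives an aggregate grid-frequency
# figure and compares it against a 50 Hz baseline. Names are illustrative.

BASELINE_HZ = 50.0

def decide_orb(frequency_hz: float, baseline_hz: float = BASELINE_HZ) -> str:
    """Return the orb colour for a reported grid frequency.

    Above the baseline suggests spare capacity, so "green" (making tea
    is okay); at or below the baseline suggests strained supply, so
    "red" (perhaps you should wait).
    """
    return "green" if frequency_hz > baseline_hz else "red"

if __name__ == "__main__":
    # Example readings: a slightly over-frequency grid and a strained one.
    print(decide_orb(50.04))  # green
    print(decide_orb(49.92))  # red
```

The sociologically interesting point is how little stands between this innocuous comparison and the punitive extensions Morozov imagines: swapping the return value for a social-media post is a one-line change.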

Much like my previous post, I’m cutting this from my chapter because it’s not good enough and doesn’t really progress my overall argument. I’m still keen to develop the point though so any feedback is much appreciated.

It helps us move beyond the increasingly influential notion of techno-genesis, in which human beings and technological artefacts are understood to be co-evolving (Hayles, p. 10). This idea is not inaccurate so much as it is unhelpful: invocations of the “co-evolutionary spiral in which humans and tools are continuously modifying each other” (ibid p. 30) correctly recognise mutual causation but lack any specificity about the sequencing of the process or the operative mechanisms underlying it. Insofar as we frame empirical questions in this way, the conceptual instruments used will tend to render specifically social referents opaque, leaving us mired in generalities about co-evolution and impeding our capacity to investigate why specific technologies produce specific effects in specific people under specific circumstances. The concept of techno-genesis or co-evolution lends itself to tracking the co-occurrence of change rather than isolating the mechanisms responsible for particular changes under particular circumstances. We lose sight of the properties and powers of the interacting entities as our attention becomes ever more embroiled in the ceaseless dance of co-constitution, leaving us caught between general claims about tendencies and particular examples of cases, while the domain of the real, positioned between the two, remains frustratingly free from interrogation.

The difficulty with such central conflationary approaches to the relationship between human beings and digital technology is that so much of what is at stake sociologically arises from the variability with which they can be seen to obtain. As will be discussed below, even if the socio-technical infrastructure can be (cautiously) claimed to be exercising structural effects at the global level, the diffusion of the consumer technology so integral to claims about the transformation of agency varies greatly both within nations and between them. For instance, to take the UK as an example, between 2006 and 2014 the proportion of adults using a computer daily rose from 45% to 73%. This was heavily segmented by age, with 25-34 and 35-44 year olds (86% in each case) most likely to use a computer daily, compared to only 42% of those aged 65 and over. With regards to ubiquitous computing, 68% of adults had used mobile computing devices to access the internet when away from home or work in the last 3 months. Amongst 16 to 24 year olds this was true of almost 96%, contrasting with only 23% amongst those aged 65 or over (ONS 2014). The Media Consumer 2014 survey, conducted by IPSOS MORI for Deloitte but restricted to an “online methodology with 2,000 consumers” and unhelpfully lacking further methodological detail, found that 49% of households surveyed “owned at least one smartphone, tablet or personal computer and that these households are 12 times more likely to own six or more computing devices” than those households outside this ‘mass-geek’ category delineated by the survey.

Leaving aside the methodological limitations of the findings, the survey nonetheless highlights an important polarisation in ownership of consumer technology: acquisition of consumer technologies tends towards intensification, and these ‘mass-geek’ households are, perhaps unsurprisingly, relatively privileged: “three-quarters (72 per cent) of the mass-geek category were ABC1, only 28 per cent were C2D” (Deloitte 2014). So claims about the effects of mobile computing need to be understood in terms of the structured diffusion of these technologies amongst different groups within the population and, through doing so, we begin to move beyond the empiricism which too frequently characterises research in this area. The mechanisms conditioning this diffusion could easily constitute a paper in their own right: sufficient disposable income to purchase expensive consumer technologies is obviously necessary, but there is clearly more at work here than income alone. Such an explanatory project must also address constraints upon the diffusion as well as the enablements[1]. The categories in which this diffusion is measured quantitatively should be treated cautiously as statistical artefacts (particularly as they appear in market research) but the actual patterns they fallibly track invite explanation in terms of real mechanisms that a realist sociology is well resourced to provide. Some of these mechanisms may be resolutely infrastructural, but these in turn invite sociological explanation because technical infrastructure can only be sustained as a socio-technical system[2]. Furthermore, the mechanisms responsible for diffusion within a particular national context should not be assumed to hold internationally (though nor should this be rejected out of hand).

[1] Ecological constraints upon the continued diffusion of mobile computing devices, particularly given the sharp cycles of obsolescence built into each new generation of products (Featherstone 2012), represent one important factor which should not be overlooked.

[2] The 4G network launched in the UK in 2012 offered vast increases in speed over existing 2G and 3G networks. The accessibility of public wireless hotspots increased at the same time, increasingly becoming an expectation in many popular consumer locations, with companies such as The Cloud (owned by BSkyB, which already dominates satellite television and has expanded aggressively into home internet access) continually expanding their coverage by courting businesses with the promise of reliably outsourcing the provision of their wireless access (ONS 2014). However, explaining the implications of these changes for the diffusion of mobile computing rests at least in part upon case studies of the advantages they afford users of the technology, e.g. the ability to connect to the internet while commuting, or the possibility of streaming video on mobile devices.

Who could object to a project that seeks to stop killer robots? The UK government apparently:

The Campaign to Stop Killer Robots, an alliance of human rights groups and concerned scientists, is calling for an international prohibition on fully autonomous weapons.

Last week Human Rights Watch released a report urging the creation of a new protocol specifically aimed at outlawing lethal autonomous weapons systems (Laws). Blinding laser weapons were pre-emptively outlawed in 1995, and since 2008 combatant nations have been required to remove unexploded cluster bombs.

Some states already deploy defence systems – such as Israel’s Iron Dome and the US Phalanx and C-Ram – that are programmed to respond automatically to threats from incoming munitions. Work is also progressing on what is known as “automatic target recognition”.

The Foreign Office told the Guardian: “At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area.

“The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems.”

While the idea of autonomous weapons systems immediately summons up the prospect of something akin to a flash crash that does much more than destroy fictitious capital, it seems far from obvious to me that the prohibition of as yet unrealised technologies is necessarily the best way to ameliorate a putative future problem.