In a fascinating account of the private space programs of Jeff Bezos and Elon Musk, Christian Davenport explains how the Advanced Research Projects Agency (ARPA) has its origins in the geopolitics of the Cold War. From pg 59:

Eisenhower entered the room at 10:31 a.m., and decided to get right to it, asking, “Do you have any questions for me?” The very first question he faced, from United Press International, was blunter than he was used to: “Mr. President, Russia has launched an Earth satellite. They also claim to have had a successful firing of an intercontinental ballistic missile, none of which this country has done. I ask you, sir, what are we going to do about it?” In the midst of the Cold War, the Soviets’ launches were seen as acts of aggression, expressions of military superiority. In a memo to the White House, C. D. Jackson, a former special assistant to the president who had served in the Office of Strategic Services, wrote that it was “an overwhelmingly important event—against our side.… This will be the first time they have achieved a big scientific jump on us, ostensibly for peaceful scientific purposes, yet with tremendous military overtones. Up to now, it has generally been the other way around.” If the Soviet Union could put a satellite into orbit, it held the ultimate high ground and could, many feared, rain down missiles on American cities from space. Life magazine compared Sputnik to the shots fired at Lexington and Concord and urged the country to “respond as the Minutemen had done then.” Then-Texas senator Lyndon Johnson fretted that “soon they will be dropping bombs on us from space like kids dropping rocks onto cars from freeway overpasses.”

ARPA emerged from what the new agency called the “traumatic experience of technological surprise” and constituted “a sort of elite special force within the Pentagon made of its best and brightest scientists and engineers” which cut across the entrenched barriers of the established services within the military. I would like to better understand the significance of DARPA in this context, as well as what it might tell us about how techno-nationalism might in future lead to the condensation of funding priorities into new agencies. As Davenport describes it on pg 128:

DARPA was tasked with looking into the future to envision what sorts of technologies the United States would need for the future of war: “To cast a javelin into the infinite spaces of the future” was its motto, a quote from Hungarian composer Franz Liszt. Walled off from the rest of the giant Pentagon bureaucracy so that it could innovate freely, the agency strove for nothing short of revolutionary advancement and “engineering alchemy” that would pierce the realm of science fiction. It had been given the authority to hire as it needed, as it sought “extraordinary individuals who are at the top of their fields and are hungry for the opportunity to push the limits of their disciplines.”

It has contributed to the development of a remarkable range of technologies, as detailed by Davenport on pg 128:

During Gise’s time, DARPA, then known as ARPA, was focused on preventing nuclear war and winning the space race. It even helped develop NASA’s Saturn V rocket, which took the Apollo astronauts to the moon. Since then, its reach and influence had broadened. In the late 1960s it started work on what would become ARPANET (Advanced Research Projects Agency Network), a network of computers in different geographic locations that became a precursor to the Internet.

Over the years, it helped develop all sorts of technological advancements that have transformed war, and, in some cases, everyday life. DARPA helped give birth to the Global Positioning System (GPS), stealth technology, cloud computing, early versions of artificial intelligence, and autonomous aerial vehicles. As early as the late 1970s, it was working on a “surrogate travel system” that created something like a Google Street View map of Aspen, Colorado. More recently, its work was focused on underwater drones, geckolike gloves designed to enable soldiers to climb walls, humanoid robots, bullets that can change direction, and a blood-cleansing “artificial spleen” to help treat sepsis.

What does this tell us about the future? Probably not very much in itself, though it is interesting to note that the DARPA budget is growing, from $2.97 billion in 2015 to a budget request of $3.44 billion for 2019. If anyone has suggestions of good places to read about developing trends in government funding of technology research, particularly in relation to national security, I’d like to read them. My point in writing this post is not to lionise ARPA or call for the ‘disruption’ of the military but simply to observe the relationship between geopolitical concerns and technological innovation. If developments such as artificial intelligence, crypto-currency and platformization have increasingly vast geopolitical ramifications, then what will this mean for the climate of state investment in emerging technologies? In many ways the point is an obvious one, but making it leaves us squarely within a terrain so mired in ideology (concerning free markets and technology on the one hand, national security interests on the other) that the full significance of the observation will often be lost.

Notes for week 4 of the CPGJ Platform Capitalism Reading Group

I thought this short talk by danah boyd was really powerful in linking the utopian dreams of internet radicals to the anxieties and outcomes of work. Framing the future of work in terms of automation, as if that says everything which needs to be said, obscures “the broader anxiety about identities that’s shaping both technology and work”. It’s important we reclaim this as a focus of our analysis, because people who can no longer “find their identity through their working environment” and realise they are in a situation “where institutions and information intermediaries no longer have their back” will not stand inertly as the rug is pulled out from beneath their feet. Their responses may be self-destructive (the opioid crisis), socially destructive (religious extremism) or socially transformational (activism). However, it’s important to recognise how the activism through which people find this meaning might come to be destructive (and disruptive) in turn:

People often find themselves by engaging with others through collective action, but collective action isn’t always productive. Consider this in light of the broader conversation about media manipulation: for those who have grown up gaming, running a raid on America’s political establishment is thrilling. It’s exhilarating to game the media to say ridiculous things. Hacking the attention economy produces a rush. It doesn’t matter whether or not you memed the president into being if you believe you did. It doesn’t even matter if your comrades were foreign agents with a much darker agenda.

These people are responding to an environment which looks the way it does because of a past activism, one intended to “create a public that was more broadly accessible, but ended up enabling a new wave of corrosive populism to take hold”. These people wanted to “disrupt the status quo, but weren’t at all prepared for what it would mean when they controlled the infrastructure underlying democracy, the economy, the media, and communication”. Platform capitalism was “birthed out of idealism” yet became something profoundly different, now “emblematic of corrosive neoliberalism and libertarianism run amok”. Early adopters saw themselves as marginal (“geeks, freaks, and queers”) and “turned to technology to build solidarity and feel less alone”. As boyd observes, it wasn’t so long ago that this utopianism seemed tenable to many:

A decade ago, academics that I adore were celebrating participatory culture as emancipatory, noting that technology allowed people to engage with culture in unprecedented ways. Radical leftists were celebrating the possibilities of decentralized technologies as a form of resisting corporate power. Smart mobs were being touted as the mechanism by which authoritarian regimes could come crashing down.

Now, even the most hardened tech geek is quietly asking:

What hath we wrought?

I thought this talk set up questions rather than answered them. How do the cultural frames promulgated by technologists lock in the outcomes their innovations have made possible? How do we politicise technology in a way that recognises the ever-present possibility of corruption and abuse? How can we ensure technologists take responsibility for what they produce? Can the instinct to disrupt the status quo through technology take a positive form, or should the lesson of the last couple of decades be that this will inevitably lead us to dark places? The talk also does something foundational to how I approach platform capitalism: it brings the agents back in without losing the focus on the technology.

A few weeks ago, I saw a collaborator of mine give a talk in which he outlined a position on social media which was roundly cast as anti-technological by those in the room, i.e. as reflecting an unsustainable blanket judgment of social media as a category of technology. I could see where they were coming from, and my point in writing this isn’t to criticise them, only to observe the performativity of these judgments. His argument overlapped hugely with one I’ve made myself in public situations, unsurprising as it has emerged from a collaboration between the two of us. No one has ever accused me of being anti-technological when making it. It is rather as if being pro-technological or anti-technological is a matter of how an argument is performed, as well as how that performance is received, rather than part of the conceptual logic of the argument itself.

In her wonderful weekly newsletter, Audrey Watters writes about how these categories play out in media coverage of educational technology and how people respond to her work.

In the last few days, I’ve been reading Hillary Clinton’s What Happened and reflecting on it as an expression of a political centrism which I suspect is coming to an end. These self-defined ‘modernisers’ sought to adapt their respective political parties to what they saw as a new reality, necessitating that they be ‘change-makers’ while responding to change. The claims of the modernisers usually play out in two registers: the psephological and the epochal. The former is straight-forward: a case for adapting to shifts in the electorate itself and its distribution across constituencies. These changes might be driven by other parties, necessitating adaptation to a changing political landscape. From loc 3544:

I came of age in an era when Republicans won election after election by peeling off formerly Democratic white working-class voters. Bill ran for President in 1992 determined to prove that Democrats could compete in blue-collar suburbs and rural small towns without giving up our values. By focusing on the economy, delivering results, and crafting compromises that defused hot-button issues such as crime and welfare, he became the first Democrat since World War II to win two full terms.

However, the epochal claims modernisers make are more ambiguous. As an empirical exercise, it is obvious that there are connections between social change and electoral change, e.g. how post-industrialisation leads to a recomposition of the working class. There nonetheless tends to be a discursive separation between the two, in terms of how modernisers account for their strategy and tactics, which invites explanation. For instance, Tony Blair was prone to speaking in terms of epochal change, framing the New Labour project in terms of globalisation and technology changing the landscape within which politics takes place. The influence of Anthony Giddens was undoubtedly key here, but this is nonetheless something which was drawn upon after the psephological case for New Labour was already formulated.

This raises the question of the relationship between them: is the epochal language of modernisation merely a flowery idiom in which a basically psephological case is being made? I wonder if it serves a more subtle role, as switching between the two displaces the moment when political axioms confront empirical reality. If the psephological case is challenged, it’s possible to fall back on talk of modernity and globalisation. If the talk of modernity and globalisation is challenged, it’s possible to switch to a case framed in terms of electoral strategy. This ideology of moderation and empiricism postpones an encounter with its own empirical limitations, ensuring its adherents remain able to sustain their identity as pragmatists surrounded by fanatics.

In other words: the world ‘out there’ becomes oddly charged for modernisers, invoked continuously but in ways that distance themselves from it. It is a traumatic real which they avoid at all costs. It blinds them to their own role in creating the conditions to which they claim to be responding. Declining trust in politicians, disengagement from the political process and the subordination of politics to the media are presented as epochal shifts to which parties must respond strategically, as if this relationality plays no part in driving these political transformations. At one point in the book Clinton reminds me of Adorno, opining that “Solutions are going to matter again in politics” as she places her pragmatism in a bottle floating forward into an uncertain future (loc 3264). What Happened? The end of modernisation.

Over the next few years, I’ll be working on a collaborative project on trans- and post-humanism, building on the Centre for Social Ontology’s previous Social Morphogenesis series. My main contribution to this will be co-editing a volume, Strangers in a Familiar Land, with Doug Porpora and Colin Wight as well as exploring digital technology and what it means for human agency. 

This project is giving me a reason to read more widely than I have in a while, with a particular focus likely to be Andy Clark’s work in the philosophy of mind, speculative realism and continental philosophy of technology. There’s a lot of value to be found in the latter, but one persistent point which frustrates me is what appears, to me at least, to be a fundamental confusion about the category of the human. This issue became clear to me when reading a thought-provoking blog post on Social Ecologies:

Why must everything revolve back to a human relation – for-us? This human exceptionalism resides throughout the gamut of philosophical reflection from Plato to Derrida. One will ask as Bradley does: Why, in other words, can something that believes itself to be a critique of anthropologism still be seen as essentially anthropocentric? Can we step outside this temple of man and create a non-anthropocentric discourse that doesn’t find itself reduced to this human relation by some backdoor slippage of conceptuality? Are we condemned to remain human? What or who is this creature that for so long has created a utopian world against its inhuman core? If we were to be released from this prison of the human who or what would emerge? How alien and alienated am I to what I am? How monstrous am I?

https://socialecologies.wordpress.com/2017/07/17/we-were-never-human/

Unless I’ve entirely misunderstood a literature I’m still relatively new to, ‘technicity’ is an abstraction from material culture. It’s an abstraction which serves a purpose, allowing us to isolate the technical so as to inquire into its character, but the empirical referents of the term are technological artefacts, i.e. a domain of material culture. In which case, it should not surprise us that the human constantly resurfaces, nor should we impute this tendency to a mysterious stickiness which ‘humanism’ as a doctrine possesses.

Material culture will always imply questions of the human because we are talking about artefacts built by, for, with and against human beings in social contexts which are similarly human saturated. The value in considering ‘technicity’ lies in opening out a space in which we can inquire into the emergent characteristics of the technical as a domain of material culture, considering the logic that guides it and how it can act back upon creators and the social contexts in which they create. But explaining material culture necessarily entails human-centred accounts, even if these have tended to problematically exclude or marginalise non-human elements. 

To suggest otherwise strikes me as straight-forward mystification, circumscribing large domains of social life as outside analysis, rather than offering a meaningful competing ‘inhuman’ explanation. It seems like a clear example of what Andrew Sayer calls a ‘PoMo flip’: responding to a problematic dichotomy by inverting it, rather than seeking to transcend the conceptual structure that creates the problem. In this case responding to an exclusion of non-human elements by seeking to exclude the human elements instead.

In 1988 Pierre Bourdieu chaired a commission reviewing the curriculum at the behest of the minister of national education. The scope of the review was broad, encompassing a revision of subjects taught in order to strengthen the coherence and unity of the curriculum as a whole. In order to inform this work, the commission early on formulated principles to guide its endeavour, each of which was then expanded into more substantive observations concerning its implications.

One of these stood out to me as of great contemporary relevance for the social sciences in the digital university. The principle concerns those “ways of thinking or fundamental know-how that, assumed to be taught by everyone, end up not being taught by anyone”. In other words, what are the elements of educational practice which are integral to it, and how can we ensure their successful transmission in training? These include “fundamental ways of thinking” such as “deduction, experiment, and the historical approach, as well as reflective and critical thinking which should always be combined with the foregoing” and “the specific character of the experimental way of thinking”, “a resolute valuation of qualitative reasoning”, “a clear recognition of the provisional nature of explanatory models” and “ongoing training in the practical work of research”. It extends this discussion to the technologies used in practice:

Finally, care must be taken to give major place to a whole series of techniques that, despite being tacitly required by all teaching, are rarely the object of methodical transmission: use of dictionaries and abbreviations, rhetoric of communication, establishment of files, creation of an index, use of records and data banks, preparation of a manuscript, documentary research, use of computerised instruments, interpretation of tables and graphs, etc.

Political Interventions: Social Science and Political Action, pg 175

This concern for the “technology of intellectual work” is one from which we could learn a lot, as is the importance placed upon “rational working methods (such as how to choose between tasks imposed, or to distribute them in time)”. It maps nicely onto what C. Wright Mills described as intellectual craftsmanship. When we consider the technologies of scholarly production – things like notebooks, word processors, index cards, post-it notes, printouts, diagrams and marginalia – our interest is in their use-in-intellectual-work. The technologies become something quite specific when bound up in intellectual activity:

But how is this file – which so far must seem to you more like a curious sort of ‘literary’ journal – used in intellectual production? The maintenance of such a file *is* intellectual production. It is a continually growing store of facts and ideas, from the most vague to the most finished.

The Sociological Imagination, pg 199-200

If we recognise this, we overcome the distinction between theory and practice. The distinction between ‘rational working methods’, ‘technology of intellectual work’ and ‘fundamental ways of thinking’ is overcome in scholarly craft. The role of the technology is crucial here: if we suppress or forget the technological, transmission of these practices is abstracted from their application, leaving their practical unfolding to be something which has to be discovered individually and privately (“ways of thinking or fundamental know-how that, assumed to be taught by everyone, end up not being taught by anyone”). But places for discussion of craft in this substantive sense have been the exception rather than the rule within the academy.

Perhaps social media is changing this. It is facilitating a recovery of technology, which is now one of the first things social scientists discuss when they enter into dialogue through social networks and blogs. But it also facilitates what Pat Thomson has described as a feral doctoral pedagogy:

Doctoral researchers can now access a range of websites such as LitReviewHQ, PhD2Published and The Three Month Thesis youtube channel. They can read blogs written by researchers and academic developers e.g. Thesis Whisperer, Doctoral Writing SIG, Explorations of Style, and of course this one. They can synchronously chat on social media about research via general hashtags #phdchat #phdforum and #acwri, or discipline specific hashtags such as #twitterstorians or #socphd. They can buy webinars, coaching and courses in almost all aspects of doctoral research. Doctoral researchers are also themselves increasingly blogging about their own experiences and some are also offering advice to others. Much of this socially mediated DIY activity is international, cross-disciplinary and all day/all night.

https://patthomson.net/2014/06/16/are-we-heading-for-a-diy-phd/

There can be problematic aspects to this. But when it’s valuable, it’s at the level of precisely the unity of thinking, technology and activity which the commission advocated. Social media is helping us recover the technology of intellectual work and it’s an extremely positive development for the social sciences.

One recurring theme in Brad Stone’s excellent The Upstarts is how technological assumptions encoded into legislation become focal points for conflicts with ‘disruptive’ companies. For instance, as loc 2348 illustrates, the novel dispatch system used by Uber complicated the distinction between taxis and livery cars:

Stressing that Uber cars were not hailed or even electronically hailed like taxis, the pair emphasized that Uber cars fit the legal definition of livery cars and were prearranged; it just so happened that the prearrangement occurred five minutes ahead of time instead of sixty.

But these distinctions also become ideologically loaded, with the antiquated assumptions effectively inviting us to sweep them away as part of our great disruptive project. This is something even Uber’s lawyers were moved by. From loc 2996:

She didn’t, pointing out that taxi regulations had been crafted decades before smartphones and internet ratings systems were invented. “I was personally always of the philosophy that the great companies, the PayPals of the world, don’t get scared by regulation,” she told me. “I never wanted to be the kind of lawyer that just said no.”

I’m currently reading Thomas Frank’s One Market Under God, a remarkably prescient book published in 2000 which has a lot of insight into contemporary cultures of technological evangelism. The book is concerned with what Frank sees as a transition in American life from a form of populism predicated on cultural reaction to one grounded in the worship of the market. It’s possible I’m primed to see this analysis as prescient because I’m working my way backwards through his books and One Market Under God contains the seeds of an analysis that he developed over the next sixteen years.

Nonetheless, I think we can learn much about our present circumstances by looking back to this transitional point in the roaring 90s which saw the origin of the rightward turn of social democratic parties, mass digitalisation and the first Silicon Valley gold-rush. What I’m increasingly preoccupied by is how these events were intimately connected. In other words: how do we place the ascendancy of the technology sector in social and economic history? To my surprise, Thomas Frank’s book actually addresses this question more straight-forwardly than any other I can think of apart from Platform Capitalism, though of course many accounts address these issues without systematically investigating them.

Despite the 1990s being hailed as an era of democratisation driven by a booming economy, Frank insists that we recognise that “The booming stock market of the nineties did not democratize wealth; it concentrated wealth” (loc 1973). But this chimera of continually ascending stock prices, grounded in the rampant speculation of the dot com boom, helped license an ideological transition that Frank describes on loc 2027:

both parties came around to this curious notion, imagining that we had somehow wandered into a sort of free-market magic kingdom, where ever-ascending stock prices could be relied upon to solve just about any social problem. Now we could have it all: We could slash away at the welfare state, hobble the unions, downsize the workforce, send the factories to Mexico—and no one would get hurt!

The ideological work involved in maintaining we had entered a new era of perpetual growth, beyond boom and bust, relied upon the mystique of the internet. It heralded the dawn of a new world, the end of old certainties and a constant horizon of possibility to be invoked in the face of those exhibiting an anachronistic scepticism. From loc 1659:

And yet, since the moment the Internet was noticed by the mainstream media in 1995, it has filled a single and exclusive position in political economy: a sort of cosmic affirmation of the principles of market populism. “Think of the Internet as an economic-freedom metaphor for our time,” wrote bull-market economist Lawrence Kudlow in August 1999.45 “The Internet empowers ordinary people and disempowers government.” And we were only too glad to do as Kudlow instructed us, to think of it in precisely this way. In fact, so closely did the Internet and market populism become linked in the public mind that whenever a pundit or journalist mentioned the Web, one braced oneself for some windy pontification about flexibility, or the infinite mobility of capital, or the total and unappealable obsolescence of labor, government, and any other enemy of the free-market enterprise.

Somewhat more prosaically, the companies of Silicon Valley became emblems of a new anti-elitism, with the old formalities of corporate life replaced by an anti-hierarchical ethos that lionised the entrepreneur for their authentic living, often expressed in ‘working hard and living hard’. The practice of paying stock options in lieu of wages became a cypher for shareholder democracy, an idea which was seized upon as legitimating what were in reality vicious attacks upon the security of labour. However, as Frank points out on loc 2063, the reality of this in Silicon Valley was misleadingly presented as a sign of a brave new workplace culture rather than a familiar self-interest:

It may have been fun to imagine what these enchanted options could do in the service of economic democracy, but in point of fact their powers were almost always directed the other way. Options did not bring about some sort of “New Economy” egalitarianism; they were one of the greatest causes of the ever widening income gap. It was options that inflated the take-home pay of CEOs to a staggering 475 times what their average line-worker made; it was options that made downsizing, outsourcing, and union-busting so profitable. When options were given out to employees—a common enough practice in Silicon Valley by decade’s end—they often came in lieu of wages, thus permitting firms to conceal their payroll expenses and artificially inflate the price of their shares, pumping the bubble still further.17 Options were a tool of wealth concentration, a bridge straight to the nineteenth century.

What seems hugely important to me here is the recognition that the vast concentration of wealth that took place in the 1990s was deeply tied up, structurally and culturally, with the first wave of mass digitalisation brought about by the dot com bubble. The nature of that entanglement still isn’t as clear to me as I would like, but I’m increasingly confident in my claim that the analysis of digitalisation needs to be an integral part of the analysis of capitalism from the 1970s onwards.

As important as economic history is though, it’s crucial that we also understand the cultural dimensions to this process. What I really like about Thomas Frank is his commitment to taking business bullshit seriously. From loc 1787:

It is worth examining the way business talk about itself, the fantasies it spins, the role it writes for itself in our lives. It is important to pay attention when CEOs tell the world they would rather surf than pray, show up at work in Speedos rather than suits, hang out in Goa rather than Newport, listen to Stone Temple Pilots rather than Sibelius. It is not important, however, in the way they imagine it is, and for many Americans it is understandably difficult to care very much whether the guy who owns their company is a defender of family values or a rave kid. But culture isn’t set off from life in a realm all its own, and the culture of business in particular has massive consequences for the way the rest of us live.

Our contemporary discourse of ‘disruption’ and ‘innovation’ was nurtured in the business commentary of the late 1990s. By examining its origins, we can see the political context of this way of thinking and speaking about technology much more transparently than is the case if we examine contemporary instances of it. To close with a quote from Peter Schwartz, quoted on loc 1321:

Open, good. Closed, bad. Tattoo it on your forehead. Apply it to technology standards, to business strategies, to philosophies of life. It’s the winning concept for individuals, for nations, for the global community in the years ahead.

From The Monsters of Educational Technology, by Audrey Watters, loc 563:

Why are we building learning management systems? Why are we building computer-assisted instructional tech? Current computing technologies demand neither. Open practices don’t either. Rather, it’s a certain institutional culture and a certain set of business interests that do. What alternatives can we build? What can we imagine? Can we envision a future of learner agency, of human capacity, of equity, of civic responsibility, of openness for example?

This isn’t a new idea but I’ve rarely encountered it expressed so concisely:

The idea that individuals create wealth and that all governments do is come along and tax them is what Varoufakis calls “a preposterous reversal of the truth”.

“There is an amazing myth in our enterprise culture that wealth is created individually and then appropriated by the state to be distributed.

“We are conceptualising what is happening in society as if we are an archipelago of Robinson Crusoes, everybody on an island, creating our own thing individually and then a boat comes along and collects it and redistributes it. It’s not true. We are not individual producers, we produce things collectively.”

He points to an iPhone.

“This machine, inside of it, contains technologies that were created collectively. Not only through collaboration but a lot of public funding. Every single technology in there was created by government grant.”

http://www.theguardian.com/world/2016/may/25/yanis-varoufakis-australias-negative-gearing-is-scandalous

As you may know, executive coaching is an increasingly common phenomenon, particularly in some sectors like tech. This is how Eric Schmidt and his co-author describe the necessity of it in How Google Works loc 2440:

Whenever you watch a world-class athlete perform, you can be sure that there is a great coach behind her success. It’s not that the coach is better at playing the sport than the player, in fact that is almost never the case. But the coaches have a different skill: They can observe players in action and tell them how to be better. So why is it that in the business world coaches are so unusual? Are we all like Eric when he started at Google, so confident of ourselves that we can’t imagine someone helping us to be better? If so, this is a fallacy. 

As a business leader, you need a coach. The first ingredient of a successful coaching relationship is a student who is willing to listen and learn. Just like there are hard-to-coach athletes, there are hard-to-coach executives. But once they get past that initial reticence, they find there are always things to learn. Business coaches, like all coaches, are at heart teachers, and Bill Campbell, the best coach around, tells us he believes that management is a skill that is completely learnable.

This suggests an obvious comparison to sports, one that goes beyond the language used to describe the relationship. James Surowiecki, author of The Wisdom of Crowds, draws out the connection in an interesting essay about the increasing competitive advantage accrued when performance is already at a top level:

The key part of the “performance revolution” in sports, then, is the story of how organizations, in a systematic way, set about making employees more effective and productive. This, as it happens, is something that other organizations started doing around the same time.

http://www.newyorker.com/magazine/2014/11/10/better-time

But can managerial performance really be measured in these terms? I don’t think it can, and the belief to the contrary strikes me as a revealing conceit, one that says much about the culture of managerialism: a kind of moral athleticism amongst prominent CEOs in which they aspire to be all that they can be.

If we look at the same phenomenon further down the organisational ladder, we get to enforced performance reviews and sanctions ensuing from a failure to meet imposed expectations. We get to sleepless nights and a diffuse anxiety saturating everyday life, all generated by concerns over ‘performance’. Coaching still exists but it becomes a very different phenomenon, as this interview I did about the sociology of work-life coaching suggests:

Coaching usually consists of individual or group meetings that continue for a few months. In the beginning of these meetings, a goal is set for the whole coaching process, and then the process continues with for example personality tests or exercises that the clients do in order to achieve the set goal. The coaches that I interviewed were often a bit vague in their answers when I asked about the specific practices of coaching. They would rather talk about ‘realising the inner potential of the individual’, though what this means specifically is rather unclear.

In general, it seems that coaching is for most part about discussing one’s hopes and realities with the coach and getting feedback for both the exercises and tests and for the plans that one has and the actions that one takes. The focus on ‘potential’ is telling of how coaching is quite oriented towards the future but at the same time relies on something that is thought to already exist within the self. As it happens, coaching concentrates on the individual. This means that all the work that is done in coaching centers on changing oneself in order to achieve the goals that one wants to achieve. 

This is reflected in the practices of coaching in the sense that they demand self-reflexivity and focus on getting to know oneself and reflecting for instance on one’s personality with the help of tests and exercises. In terms of employment, this means that questions that concern wider social structures or even organisational structures are left outside the scope of the things one needs to change. It thus begins to seem that change always starts within the individual self – and also that if there is a need for change it is the self that is at fault. In the case of unemployment then, for example, the structural reasons for unemployment are not accounted for but rather it is thought that if the individual just works hard enough to change themselves then they will also find employment – and if one is unemployed it just means that one has not yet found the ‘true self’ and the right goals that would solve the problem. In other words, if one does not find work, it is implied that this just means that one has not worked hard enough on improving oneself.

http://www.thesociologicalreview.com/information/blog/working-life-coaching-and-the-individualization-of-class.html

As a relational technology of the self, work coaching has to be read against the background of metricisation. It naturalises metrics and their attendant apparatus of control, scrutiny and intervention. The issue becomes a narrow one of ‘performance’ rather than one’s place over time within an organisation.

I’ve nonetheless become a bit obsessed with Bill Campbell. He turns up time and time again in business books about Silicon Valley. It also turns out he was originally a football coach:

Son of a local school official, Campbell was born and raised in Homestead, Pennsylvania, near Pittsburgh. He attended Columbia University where he played football under coach Buff Donelli from 1959 to 1961. In his senior year, he was named to the All-Ivy Team. He graduated in 1962 with a bachelor’s degree in economics. In 1964, he obtained a master’s degree in education from Teachers College, Columbia University. He was head coach of Columbia’s football team, the Columbia Lions, from 1974 to 1979. Prior to this he was an assistant at Boston College for six years. He met his first wife, the former Roberta Spagnola, while she was the assistant dean in charge of Columbia’s undergraduate dormitories.

He joined J. Walter Thompson, the advertising agency, then Kodak where he rose to run Kodak’s European film business. Hired by John Sculley he became Apple’s VP of Marketing, then ran Apple’s Claris software division. When Sculley refused to spin Claris off into an independent company, Campbell and much of the Claris leadership left. Since 1997, when Steve Jobs returned to Apple, Campbell has served as a corporate director on Apple’s board of directors.

Campbell became CEO of GO Corporation, a startup pioneering a tablet computer operating system. After successfully selling GO to AT&T Corporation in 1993, Campbell was CEO of Intuit from 1994 to 1998. Campbell announced that he would be retiring as the Chairman of the Board of Directors at Intuit starting January 2016.

Campbell is an adviser to a number of technology companies, and was elected Chairman of the Board of Trustees at Columbia in 2005.

According to CNN Money, he is worth $200 million.

https://en.m.wikipedia.org/wiki/William_Campbell_(business_executive)

To what extent is it a marker of prestige to be coached by Campbell? Is it still a status symbol for lesser executives to be coached by lesser coaches? Do these celebrity coaches and celebrity clients underwrite the demand elsewhere? Do all these coaches have top level business experience?

From Spam, by Finn Brunton, pg 89:

“Alan Solomon . . . a veteran antivirus researcher with a PhD in economics, critiqued the virus metaphor, suggesting that this medical/biological metaphor of ‘virus’ is ‘too emotive’ . . . Instead, he proposed ‘weeds’ as a more appropriate concept for describing the threat of computer code.” With “weeds” comes a very different culture of metaphors, of strong and weak ecosystems, each person cultivating their own garden every day to keep invasive species at bay. It is a much better metaphor for expressing one of the global computer network’s key points of weakness to “viral infection”: the monoculture of computers running the Microsoft Windows operating system, often poorly patched and unmaintained by users, making the network as vulnerable as the cloned Cavendish banana trees are to fungus attacks. Without overstating the influence of metaphor, it’s striking to consider how much that nomenclature might have changed the practices of security and programming around self-replicating computer code: computers as gardens rather than bodies, with diverse software populations to be tended and pruned by attentive and self-reliant users, potentially capable of weed resistance in their interdependence, with the professionals as agronomists, breeders, and exterminators rather than doctors at the cordon sanitaire.

An interesting case discussed on pg 85 of Unforbidden Pleasures, by Adam Phillips:

We may live in the aftermath of the myth of the Fall, and the even longer aftermath of the myth of Oedipus, but the first traffic lights were invented in the United States after the First World War. The traditional mutual accommodation travellers had been making to each other on their bikes and cars and carts was replaced by a set of lights. ‘Its purpose,’ the anthropologist James C. Scott writes, ‘was to prevent accidents by imposing an engineered scheme of coordination.’ Partly through pressure of numbers and the proliferation of vehicles, and partly through the new scientific and bureaucratic fantasies of efficiency and productivity, familiar forms of cooperation gave way to a new, technologically implemented, set of rules. People’s practical judgement was delegated to a red light. They had known when to stop, but now they were being told when to stop.

The institutionalisation of algorithmic evaluation should be seen as continuous with this much longer-standing process of using technology to impose patterns upon, and often entirely substitute for, situated judgement. It’s a new means towards a long-standing end, rather than something radically discontinuous with what has come before.

In fact recognising this poses an apparent challenge for the argument I’ve been developing about the increasing ‘cognitive load’ of digital capitalism. Surely systems which support or substitute for reflexivity actually decrease the burden upon subjects? My argument will be that the experienced necessity for these systems develops in relation to their unintended consequences towards ratcheting up the burden in other ways. But it’s currently rather light on detail.

One of my major irritants is technological metaphors for subjectivity, not least of all because I slip into invoking them myself when I use terms like ‘cognitive load’. The underlying idea that ‘the brain is like a computer’, as well as the complex network of associated metaphors leading from it, frustrates me because it seems so obviously to be a case of fetishising our own productions. We see ourselves in what we have made and seek to understand ourselves through the characteristics of our artefacts. But as this extract from The Innovators, by Walter Isaacson, Loc 1049 illustrates, our understanding of our subjectivity furnished metaphors for the architecture of the machines which we subsequently use to understand ourselves:

Atanasoff initially considered building an analog device; his love of slide rules led him to try to devise a supersize version using long strips of film. But he realized that the film would have to be hundreds of yards long in order to solve linear algebraic equations accurately enough to suit his needs. He also built a contraption that could shape a mound of paraffin so that it could calculate a partial differential equation. The limitations of these analog devices caused him to focus instead on creating a digital version. The first problem he tackled was how to store numbers in a machine. He used the term memory to describe this feature: “At the time, I had only a cursory knowledge of the work of Babbage and so did not know he called the same concept ‘store.’ . . . I like his word, and perhaps if I had known, I would have adopted it; I like ‘memory,’ too, with its analogy to the brain.”

From Addiction By Design, by Natasha Dow Schüll, pg 19:

In a strategic response to growing suggestions that gambling machines are to some extent implicated in gambling addiction, the American Gaming Association released a 2010 white paper called “Demystifying Slot Machines.” Echoing the National Rifle Association’s (NRA) famous slogan, “Guns Don’t Kill People, People Kill People”, the paper asserts that “the problem is not in the products [players] abuse, but within the individuals.” In this one-sided account, the machine is merely “the mechanism through which pre-existing psychological disturbances are expressed,” as a researcher puts it. “What gaming critics fail to understand,” a reporter for Global Gaming Business sums up, is that “machines are simply inanimate objects.”

From The Boy Kings, by Katherine Losse, pg 134:

That Sunday, after I’d slept off our long night, I logged in to Facebook to see an endless stream of videos that the boys had filmed at the club. In them, the boys were not chatting up or kissing girls they had met, as I had expected. Instead, they were performing an elaborate ritual only they would have the strange, cold vanity to invent, in which they would methodically chat up and reject girls that the bouncers had brought to their table. “Leave! You’re not pretty enough!” one of them seemed to say over the din of the club as he shooed the girls away in succession like so many servants. Even though I had been living in this boys’ world for almost two years, I was still a bit shocked. Their products ultimately reflected their real-life behavior. Instead of making a technology of understanding, we seemed sometimes to be making a technology of the opposite: pure, dehumanizing objectification. We were optimizing ways to judge and use and dispose of people, without having to consider their feelings, or that they had feelings at all.

The intriguing suggestion made by Losse is that these tech bros represent an epochal transformation in American alpha masculinity. She doesn’t really follow it up, but I’m completely persuaded that tech bros, as well as bro culture in general, represent something of profound sociological significance.

From The Boy Kings, by Katherine Losse, pg 13:

I liked to listen to Mark’s discussion of the product philosophy and goals at these meetings, which were to me the most fascinating part of the job: what were we trying to do, with this fledgling Internet identity registration system? “I just want to create information flow,” he said in his still nearly adolescent voice, lips pursed forward as if jumping to the next word, and everyone would nod, all cogitating in their own way about what this meant. Mark’s idea of information flow, though vague, was also too vague to be disagreed with, and even if we came up with counter-instances to a model of pure information efficiency (for example, I wondered, do I want my Social Security number to flow freely?), we knew that we weren’t supposed to disagree. Mark was our leader, for better or worse. When the meetings ended he would say either “domination” or “revolution,” with a joking flourish of a fist, and everyone would laugh, nervously, but with a warm and almost chilling excitement. It was like we were being given a charter, by a boy younger than most of us, to take over the world and get paid to do it.