For a book of only 126 pages, Kill All Normies covers a remarkable amount of ground. Inevitably, the argument is underdeveloped at points and it perhaps offers less empirical detail about the alt-right than it promises, largely restricting its analysis to the study of (relatively) high-profile cases and the inferences that can be made from them. But the underlying thesis is a provocative one, moving beyond the hyper-specificity of online culture and placing these politicised developments in an historical context.

Nagle’s argument is that the alt-right should be understood as an online politics of transgression, a cultural movement which has generated a political upheaval through a particular confluence of circumstances: internecine war with the ‘Tumblr left’, interaction with a more traditionally politicised far-right culture within online spaces and platform dynamics which have accelerated the development of this strange cultural mix. But at the root of it is an uncoupling of transgression from progressive politics. From pg 28:

Transgression has been embraced as a virtue within Western social liberalism ever since the 60s, typically applied today as it is in bell hooks’ Teaching to Transgress. So elevated has the virtue of transgression become in the criticism of art, argued Kieran Cashell, that contemporary art critics have been faced with a challenge: ‘either support transgression unconditionally or condemn the tendency and risk obsolescence amid suspicions of critical conservatism’ as the great art critic Robert Hughes often was. But, Cashell wrote, on the value placed upon transgression in contemporary art: ‘In the pursuit of the irrational, art has become negative, nasty and nihilistic.’ Literary critic Anthony Julius has also noted the resulting ‘unreflective contemporary endorsement of the transgressive’.

Those who claim that the new right-wing sensibility online today is just more of the same old right, undeserving of attention or differentiation, are wrong. Although it is constantly changing, in this important early stage of its appeal, its ability to assume the aesthetics of counterculture, transgression and nonconformity tells us many things about the nature of its appeal and about the liberal establishment it defines itself against. It has more in common with the 1968 left’s slogan ‘It is forbidden to forbid!’ than it does with anything most recognize as part of any traditionalist right.

Her claim is that the association of transgression with the left has been predominantly contingent, reflecting a past context in which new social movements organised against a broader culture which participants found stifling. We can see this in the “ease with which the broader alt-right and alt-light milieu can use transgressive styles” (pg 28) and the power incipient within the “new transgressive rightist sensibility” which has now begun to make itself felt politically (pg 33). While the transgressive sensibility strikes me as an inarguable feature of some of the cultural forms being subsumed under the category of ‘alt-right’, it is by no means true of all, though perhaps this points to the limitations of the category. There’s a straightforward empirical question here but one tied in fascinating ways to a much broader array of emerging issues in political theory, political philosophy and progressive politics.

What is a game? A standard definition is “a form of competitive activity or sport played according to rules” and this has been the working conception when I’ve encountered theoretical engagements with the notion of a game. But a recent symposium on eSports left me reflecting on how much more complex the ontology of games is when we consider contemporary video games, raising the question of whether digital games, particularly those played online, are something entirely different from their analogue predecessors.

Consider how a game like poker has developed over time. This family of card games has a contested history, with many potential predecessors being claimed. It also has many variants, with rules that are stabilised through a range of artefacts, from ‘how to’ guides through to cultural representations and rule books in tournaments. As much as these artefacts exercise a normative influence over how poker is played, its predominant mode of transmission is interpersonal, with changes in the game liable to be piecemeal and taking place over long periods of time. In contrast, the rules of online digital games can be changed at a moment’s notice, with these changes being an important vector through which the relationship between the developer and the users unfolds. Every game has an infrastructure that supports it, even if it is as minimal as the conversations that have previously taken place between different groups that play the game. But the infrastructure of digital games played online allows for granular analysis of game events and immediate modification of the game rules. Such changes might impede the reproduction of the game, for instance if too many rule changes alienate players, but the capacity to make them is something new and interesting.

There are also differences at the level of the virtual structure of the game: the latent order through which events unfold, driven by the rules of the game, but producing patterns which inevitably exceed what could be predicted from those rules alone. The complexity of digital games vastly exceeds that of analogue games, perhaps in a way which makes it impossible to represent them formalistically in terms of branching probabilities. This isn’t always the case, particularly with older games which aren’t multiplayer. For instance I find it difficult to understand how something like this speed run of Super Mario 3 is possible unless there is, in principle, a ‘correct’ move to make at every point in the process, even if it doesn’t involve adherence to the formal rules of the game:

But more complex games, particularly those in which many players compete online, would seem to be a different phenomenon altogether. However, is the challenge this poses ontological or epistemological? Is there no underlying (virtual) structure or is it simply too complex to be mapped? I find the former claim untenable because in principle it seems obvious to me that any particular instance of the game could be analysed, with sufficient data, in order to explain why it unfolded in the way it did. This presupposes a structure in relation to which those outcomes become explicable. In which case, the problem is epistemic and perhaps suggests that other methods, perhaps data scientific ones, might be necessary. With enough data, could the contours of such a virtual game structure be fallibly traced out, even if it resists analysis through other means?
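To make the data-scientific suggestion slightly more concrete, here is a minimal sketch in Python of how such a tracing might proceed: aggregating logged game events into empirical estimates of how outcomes vary with state and action, rather than deducing anything from the formal rules. The event log and the state/action labels are entirely hypothetical, a toy stand-in for the telemetry an online platform would actually collect.

```python
from collections import defaultdict

# Hypothetical event log: each record is (game_state, action_taken, won).
# In a real online game these would come from the platform's own telemetry.
event_log = [
    ("early_lead", "aggressive", True),
    ("early_lead", "aggressive", False),
    ("early_lead", "defensive", True),
    ("behind", "aggressive", True),
    ("behind", "defensive", False),
    ("behind", "defensive", False),
]

# Aggregate outcomes for each (state, action) pair.
totals = defaultdict(int)
wins = defaultdict(int)
for state, action, won in event_log:
    totals[(state, action)] += 1
    wins[(state, action)] += int(won)

# A fallible, empirical tracing of the 'virtual structure': how outcomes vary
# with state and action, inferred from observed play rather than derived from
# the rules of the game.
for (state, action), n in sorted(totals.items()):
    rate = wins[(state, action)] / n
    print(f"{state:>10} | {action:>10} | win rate {rate:.2f} (n={n})")
```

Nothing in such an exercise settles the ontological question, but it illustrates the epistemic one: with enough data the patterning of play becomes tractable, even where a formal branching model is out of reach.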

I wrote recently about a short article by Michael Burawoy in which he bemoaned the ascendancy of the spiralists within universities. These relentlessly ambitious new entrants to the university system see it as a theatre within which they can make themselves known, spiralling into the university before once more spiralling out of it to bigger and better things. As Burawoy describes them:

Spiralists enter the university from the outside with little knowledge of its inner workings. They don’t trust the local administration and instead cultivate, promote and protect each other through mutual recruitment, at the same time boosting their corporate-level incomes and contributing to administrative bloat. At UC Berkeley, senior managers have increased five-fold over the last 20 years, rising to 1,256 in 2014, almost equal to the number of faculty, which has barely increased over the same period (from 1,257 to 1,300). While the number of faculty has remained stagnant, student enrollment has increased by 20 percent.

Coming from the outside and concerned more about their future, spiralists are in the business of promoting their image — Dirks employed a firm to do just that at a cost of $200,000 to campus. Branding takes priority over ethics. This last year we have witnessed the cover up of sexual harassment by prominent faculty and administrators and the exoneration of punitive football coaching that led to the death of a football player and a $4.75 million civil suit — all designed to protect the Berkeley brand.

While he appeared to be using ‘spiralist’ in a way that was as much rhetorical as anything else, I’ve had the concept stuck in my mind since then and firmly believe it’s a potentially powerful way of conceptualising a particular form of biographical trajectory within organisations. I just encountered another example of spiralists at work in The Gospel of Self: How Jesus Joined the GOP, a reflective confessional written by one of the leading figures in the creation of modern televangelism in the United States. On loc 2196-2214 he bemoans the ascent of the spiralists in American television:

Of all the things that the press obscures in the gathering and reporting of news, this career self-interest bothers me most. Many, if not most, of the reporting staff at any local news operation don’t really want to be there. Each TV station is viewed as a stepping-stone to a bigger market, and so many enter through the front door with one foot already out the back. Their work in the smaller market includes the strong motivation to do highly flamboyant pieces for their résumé tape that will quickly grab the attention of a “more important” news director elsewhere. It is why the farm system for local TV news is corrupt. The business is almost entirely self-centered and self-driven.

Where else can we see the spiralists at work? If we take a ‘spiralist’ to be a new entrant to an organisation who has immediate and practical designs on moving upwards and/or outwards – as opposed to merely harbouring future ambitions, without formulating plans about how to achieve them through immediate action – it looks as if the spiralists are everywhere under present circumstances.

In his memoir Hinterland, the former Labour Minister and acclaimed diarist Chris Mullin reflects on the cult of youth in British politics. This was manifested in the bright young things, lacking experience outside of politics and with little non-instrumental participation within it, coming to dominate the parties. But it was most striking in the leadership itself, with all three main party leaders in the 2010 general election having been elected to that post within five years of entering Parliament.

Fast forward to 2017. The position of 68-year-old Jeremy Corbyn as Labour leader is unassailable. 74-year-old Vince Cable was elected unopposed, with competitors having been vanquished within a weakened party. Many see 68-year-old David Davis as the most likely successor to Theresa May once she is inevitably deposed as Conservative leader. We can see a similar trend in the US, where the 71-year-old Donald Trump will possibly face the 75-year-old Bernie Sanders in the next election.

What is going on here? It’s possible to read too much into the contingent outgrowths of messy intra-party processes. But there is prima facie evidence that a cult of youth is rapidly being replaced by a cult of age, at least as far as Anglo-American political leadership is concerned.

One of the key points of disagreement between Object-Orientated Philosophy (OOP) and Critical Realism (CR) rests on the epistemic status of the object. While OOP and CR are in agreement that, as Harman puts it on pg 2-3 of his Immaterialism, objects should be treated as a “surplus exceeding its relations, qualities, and actions”, CR takes a more optimistic view of the epistemological challenge posed by this surplus.

The key issue concerns the potentiality of objects. From Harman’s perspective, CR’s concern for causal power still constitutes a form of reduction. It’s an improvement on reducing objects to their effects. But, as he writes on pg 52, it’s still reducing objects to their potential effects:

Yet this purported advance still assumes that at the end of the day, nothing matters aside from what sort of impact a thing has or might eventually have on its surrounding. This risks obscuring our view of objects in a number of ways, which not only poses an ontological problem, but has methodological consequences as well.

I maintain that some of these methodological consequences can be avoided through a sophisticated account of how those causal powers are activated. In this way, the category of ‘effects an object might have in future’ always involves reference to a variable context, raising issues of how the features of an object and the features of a context combine to produce effects.

I’m nonetheless taking his challenge seriously. I’d earlier seen his account of objects as unduly pessimistic on an epistemic level: underestimating our capacity for knowledge of the parts, their relational organisation, their ensuing qualities, their ensuing powers and how these might be expressed in different contexts. But I increasingly realise that the CR formulation I’m so used to using, ‘properties and powers’, reflects a much clearer understanding of the properties than the powers. I think the former is often subordinated to the latter, such that properties are those features of objects we invoke in order to explain their causal powers. There’s a depth to the ‘surplus’ of objects which I realise I hadn’t previously grasped, even if I’m still not entirely certain about Harman’s account of it.

Reading Immaterialism by Graham Harman, I’m struck by the overlap between his account of ‘duomining’ and Margaret Archer’s critique of conflation. As he writes on pg 27-28,

“If we reduce an object downward to its pieces, we cannot explain emergence; if we reduce it upwards to its effects, we cannot explain change.”

While Archer’s argument is made in terms of the structure/agency problem, it can easily be recast in terms of structure alone. If we reduce social structure to the individuals who comprise it (alongside other material elements, which Archer is less sensitive to), we cannot explain how certain arrangements of people and things assume characteristics which the same ‘pieces’ lack in other arrangements (upwards conflation). If we focus solely on the effects of social structure, identifying how it constrains and enables individuals, we cannot explain how that structure might itself undergo change because it is the only causal power we admit (downwards conflation).

However this is only an overlap, as Archer and Harman’s arguments about modes of reduction are made for different reasons and they later diverge. Archer is concerned with the analytical temptations which inhere in the structure/agency problem that social science invariably confronts, even when it attempts to suppress it through various means. In contrast, Harman is concerned with ‘undermining’ and ‘overmining’ as two fundamental forms of knowledge which cannot be avoided: “what a thing is made of” (undermining) and “what a thing does” (overmining) (pg 28). Archer is concerned with a denial of relationality, as well as its temporal unfolding, with downwards and upwards conflation charged with suppressing the interplay over time between the different kinds of entities which make up the social world. Harman is concerned with the denial of objects as such, reducing their reality to the parts and their effects, losing a grip on the entity which is composed of these parts and capable of these effects without being reducible to either.

Both approaches explore a tension between the analytical and the ontological. Harman’s notion of overmining, which I found much less straightforward to grasp than his notion of undermining, identifies its roots in the tendency to treat objects as mysterious and unknowable in themselves. An ontological claim licenses an analytical one, as the analyst focuses upon the effects of objects as something epistemically tractable in contrast to the objects themselves. Even if they continue to recognise the reality of the object, it is a notional recognition which doesn’t enter into their analysis. This is something Harman addresses explicitly on pg 28:

After all, any claim that a thing is convertible into knowledge cannot account for the obvious and permanent difference between a thing and knowledge of it: if we had perfect mathematised knowledge of a dog, this knowledge would still not be a dog. It will be said that this is a “straw man” argument, since philosophers are obviously aware that knowledge is different from its object. Yet it is not a question of whether philosophers are personally “aware” of this, but of whether their philosophies sufficiently account for it.

To which we might add: ‘and whether they incline social scientists drawing on their ideas to factor this awareness into their explanations’. This interface between the ontological and the analytical is one that has long fascinated me: how does theory constrain and enable the explanations which enter into social inquiry? What other forms of ‘conceptual slippage’ can we identify as ontological claims contribute to social analysis?

There’s a disturbing snippet in Naomi Klein’s latest book, No Is Not Enough, discussing the growing market for disaster-preparation amongst well-heeled elites. While it’s possible there’s a large element of conspicuous consumption at work here, amongst people who have more disposable income than things they can buy with it, it nonetheless makes for disturbing reading. From loc 177-178:

These days, luxury real estate developments in New York have begun marketing exclusive private disaster amenities to would-be residents—everything from emergency lighting to private water pumps and generators to thirteen-foot floodgates. One Manhattan condominium boasts of its watertight utility rooms sealed “submarine-style,” in case another Superstorm Sandy hits the coast. Trump’s golf courses are trying to prepare too. In Ireland, Trump International Golf Links and Hotel applied to build a two-mile-long, thirteen-foot wall to protect the coastal property from rising seas and increasingly dangerous storms. Evan Osnos recently reported in the New Yorker that, in Silicon Valley and on Wall Street, the more serious high-end survivalists are hedging against climate disruption and social collapse by buying space in custom-built underground bunkers in Kansas (protected by heavily armed mercenaries) and building escape homes on high ground in New Zealand. It goes without saying that you need your own private jet to get there—the ultimate Green Zone. At the ultra-extreme end of this trend is PayPal billionaire Peter Thiel, a major Trump donor and member of his transition team. Thiel underwrote an initiative called the Seasteading Institute, cofounded by Patri Friedman (grandson of Milton) in 2008. The goal of Seasteading is for wealthy people to eventually secede into fully independent nation-states, floating in the open ocean—protected from sea-level rise and fully self-sufficient. Anybody who doesn’t like being taxed or regulated will simply be able to, as the movement’s manifesto states, “vote with your boat.” Thiel recently has appeared to lose interest in the project, saying that the logistics of building floating nation-states were “not quite feasible,” but it continues.

Towards the end of Kill All Normies, Angela Nagle discusses the chilling effect liable to ensue from the online harassment which journalists critical of the alt-right often now find themselves subject to. From pg 118:

Multiple journalists and citizens have described in horrifying detail the attacks and threats against those who criticize Trump or figures of the online Trumpian right, especially if the critic is female, black or Jewish, but also if they’re a ‘cuckservative’. They now have the ability to send thousands of the most obsessed, unhinged and angry people on the Internet after someone if they dare to speak against the president or his prominent alt-light and alt-right fans. Although the mainstream media is still quite anti-Trump, it would be naïve to think this isn’t going to result in a chilling of critical thought and speech in the coming years, as fewer and fewer may have the stomach for it.

Perhaps I’m being a pedant but I found myself frustrated by the phrase “ability to send” here. I’m not denying this possibility; in fact, I’m fascinated by what I’ve come to think of as ‘shadow mobilisation’, but it’s not obvious to me that this is what happens here. There clearly isn’t anything approaching a command-and-control dynamic, something which I think Nagle wouldn’t dispute, hence we need to be careful about how we characterise the co-ordination and coercion which operates in different cases. I’ve rarely encountered an issue for which empirical research seemed so politically urgent.

In an important essay earlier this year, Jan-Werner Müller identifies a dangerous tendency for leftist critics to take the claims of right-populist demagogues at face value. Suddenly vindicated in their struggle with the ‘third way’ that has dominated the centre-left, many leftists have drawn a newfound self-confidence from the claims of nascent populists to speak for a ‘left behind’ majority created by the neoliberalism which has consumed mainstream social democratic parties.

This risks simplifying events with a complex array of causes, like the vote for Brexit and Trump’s election, imputing them to the quasi-magical capacity of populists to speak directly to the people. In doing so, it hinders the detailed analysis of these events which we so urgently need: see for instance this important essay by Mike Davis which discusses the American conservative movement’s massive investment in political infrastructure across every state in the country.

However it also lends credence to the populist right, supporting claims of speaking for those left behind which belie the naked class hatred which some of these figures exhibited in the recent past. This is what Angela Nagle argues in her important book Kill All Normies. From pg 101:

Ann Coulter had long drawn upon the elite fear of the hysterical and easily led crowd. In her book Demonic: How the Liberal Mob is Endangering America explaining how ‘the liberal mob is destroying America’ she drew upon Gustave LeBon, the misanthropists’ favorite theorist of the masses. Her writing on overbreeding, overcrowding swarms of immigrants is a direct continuation of this theme, which has been consistent in elite circles since the beginning of industrialized urbanized mass society, first applied to their multiplying native proletariat and later to new waves of immigrants. Before the ‘ordinary people’ narrative became suddenly ubiquitous on the new online right after the election results, Milo could be seen in photo shoots wearing a ‘Stop Being Poor’ T-shirt, a quote from the heiress Paris Hilton, one of his idols. After the election results he was giving talks about the white working class. The hard alt-right had also rejected the idea that the masses were their naturally traditionalist allies any longer, as the conservative establishment had typically believed. Instead, they had argued that the great mass of society had been tainted and indoctrinated by liberal feminist multiculturalism, and were close to beyond redemption. It was no longer ‘five minutes to midnight’ as the anti-immigration right had long claimed but well past midnight. While the Trumpians are busy quickly rewriting history, it is important to remember that behind the ‘populist’ president, the rhetoric of his young online far-right vanguard had long been characterized by an extreme subcultural snobbishness toward the masses and mass culture.

I wonder if Graeme Turner’s distinction between the demotic and the democratic, made in the context of reality television, might be useful here. One could be said to involve foregrounding ‘the people’ as an imagined construct, while the other involves empowering people as a social reality. The populist right is demotic, not democratic. This is what the leftist critique of mainstream social democracy, which I’m otherwise entirely in agreement with, risks obscuring.

In Immaterialism, Graham Harman offers a provocative critique of Latour’s social theory, praising Actor-Network Theory as “the most important philosophical method to emerge since phenomenology in 1900” (pg. 1) while also regarding its account of objects as philosophically deficient. While he accepts the ANT thesis that objects mediate human relations, something which chips away at the pervasive anthropocentrism of social theory, it nonetheless reinforces a human-centric world view in a subtle and interesting way. From pg 6:

To say that objects mediate relations is to make the crucial point that unlike herds of animals, human society is massively stabilized by such nonhuman objects as brick walls, barbed wire, wedding rings, ranks, titles, coins, clothing, tattoos, medallions, and diplomas (Latour 1996). What this still misses is that the vast majority of relations in the universe do not involve human beings, those obscure inhabitants of an average-sized planet near a middling sun, one of 100 billion stars near the fringe of an undistinguished galaxy among at least 100 billion others.

The commitment of ANT to defining actors through actions, itself understood in terms of effects on other actors, “allows objects no surplus of reality beyond whatever they modify, transform, perturb, or create” (pg. 10). Without this surplus, Harman questions how it can be possible for them to change. It is only when we recognise “an object is more than its components” and “less than its current actions” that its capacity to do otherwise becomes conceivable (pg. 11). Exactly what the surplus is, as well as how it underwrites this potentiality, might vary. As Harman notes of himself on pg 11:

The author Harman who currently types these words in the University of Florida Library while wearing a black sweater is far too specific to be the Harman who will leave Florida next Sunday and can remove the sweater whenever he pleases.

These features of the object which aren’t exhausted in its present actions are what account for its future capacities. If my specificity is exhausted in my writing of this blog post, it becomes mysterious how I cooked dinner or planned a trip earlier. There are the facts of these other actions, but I myself, as a unifying nexus in which these properties and powers converge, become emptied out into a frantic existence of constant process.

I couldn’t agree more with Harman’s claim that every object should be considered “as a surplus exceeding its relations, qualities, and actions” (pg. 3-4). Where I part company is with his epistemic pessimism. From pg 17-18:

And whereas naive realism thinks that reality exists outside the mind and we can know it, object-orientated realism holds that reality exists outside the mind and we cannot know it. Therefore, we gain access to it only by indirect, allusive, or vicarious means. Nor does reality exist only “outside the mind,” as if humans were the only entities with an outside. Instead, reality exists as a surplus even beyond the causal interactions of dust and raindrops, never fully expressed in the world of inanimate relations any more than in the human sphere.

This leaves me preoccupied by variance. My issue is not with the claim itself, as much as with it being framed in a way which makes it hard to unpack how this might vary between objects and contexts. How much surplus remains when we consider a given action? It depends on the action, the actor and the context. I don’t for a second believe this can be reduced to calculus but I nonetheless maintain there are differences of degree. I’m not convinced that the surplus of objects is quite as epistemically intractable as Harman makes it sound.

In the last few years, I’ve become interested in what I think of as shadow mobilisation: assembling people under false pretences and/or in a way intended to create a misleading impression of the mobilisation. This is often framed in terms of astroturfing – fake grass roots – but it appears to me to extend beyond this. It would be a mistake to see it as a new thing, but it may be that our present conditions are making it easier and more likely.

It implies a relationship between the instigators and those mobilised, either through manipulation or reimbursement, which is fundamentally asymmetrical. One group has the capacity to plan, enact and reflect on these mobilisations while the other is a mere aggregate, induced to action on an individual-by-individual basis, furthering an agenda which might cohere with their own individual concerns but has no basis in collective concerns. In this sense, shadow mobilisations are a facsimile of collectivity.

If we accept the adequacy of this concept, it raises many questions. Foremost amongst them though is how widespread such shadow mobilisations are, as well as the conditions which facilitate this. I’ve come across examples in many sectors and I wish I’d been recording these systematically. The most recent comes in Anna Minton’s Big Capital, an illuminating study of how global capital is transforming London. From loc 1281-1297:

In a House of Commons debate in 2013, Labour MP Thomas Docherty, a former lobbyist, shared with Parliament some of the techniques of his former colleagues, recounting stories of lobbyists being planted in public meetings to heckle people who opposed their clients’ schemes. His stories chime with a wealth of anecdotal evidence of dirty tricks, including fake letter-writing campaigns and even actors attending planning meetings. Martyn, a film maker from Brighton, described to me how he had been offered ‘cash in brown envelopes’ to attend a planning meeting and pose as a supporter of Frank Gehry’s controversial plans for an iconic new development of 750 luxury apartments on the seafront. He remembers how ‘at least five of us’ from the drama school where he was studying were approached by an events company and asked if they’d like to participate. ‘We were told to go there and shout down the local opposition to the development. A couple of people were pointed out to us – residents, leaders of the local opposition – and we were told to be louder than them and be positive about the development. We were paid on exit, cash in hand, I think it was £50 or £100. I was there and I’m not proud of it. It is something that horrifies me,’ he said. In Parliament, Docherty described dirty tricks as ‘utterly unacceptable’, although ‘not a crime’.

While each particular case of this manipulation of the planning process occurs on a small scale, it reflects an asymmetry we can see in other cases of shadow mobilisation. Residents who coordinate their action, potentially constituting an organised collective in the process, confront organisations which deploy their resources towards drowning this nascent collectivity through a shadow mobilisation. As Minton points out, such activity sometimes occurs alongside organised harassment, suggesting the ethical climate in which shadow mobilisation is seen as a viable strategy by those pursuing private profit.

On pg 102 of Jonathan Taplin’s Move Fast and Break Things, he highlights email exchanges between YouTube’s founders, released in a court case, which suggest the invocation of ‘user generated content’ might be a matter of branding rather than a meaningful growth strategy for social media platforms:

In another email exchange from 2005, when full-length movies were being posted on YouTube, Steve Chen, a cofounder of the company, wrote to his colleagues Hurley and Jawed Karim, “Steal it!,” and Chad Hurley responded: “Hmm, steal the movies?” Steve Chen replied: “We have to keep in mind that we need to attract traffic. How much traffic will we get from personal videos? Remember, the only reason why our traffic surged was due to a video of this type…. viral videos will tend to be THOSE type of videos.”

Much critical literature has focused on how social media platforms ossify existing hierarchies and establish new ones. It is too easy to see this as an unexpected consequence of a new social infrastructure, as opposed to an outcome which was knowingly designed in from the start.

An interesting snippet on pg 164 of Jonathan Taplin’s Move Fast and Break Things suggests a metric of content density which could be extremely interesting to explore:

Digiday looked at the race for what some are calling peak content. What it found was that in 2010 the New York Times, with 1,100 people employed in the newsroom, created 350 pieces of original content per day and attracted 17.4 million page views per day. By contrast, the Huffington Post, with 532 people employed, posted 1,200 pieces of content per day (most of it created by third-party sites) and 400 blog entries (mostly unpaid), receiving 43.4 million page views per day. One can understand why the future of original journalism is threatened.

This quantitative metric raises questions which invite qualitative analysis, e.g. to what extent does an increase in content density (fewer staff producing more content) correlate with content becoming shorter, more derivative and shallower? Are there cultural producers for whom this isn’t the case? What are the conditions which counteract this seemingly inevitable consequence of asking people to produce more with less?
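For what it’s worth, the arithmetic implicit in the comparison can be made explicit. A minimal sketch, using only the figures reported in the passage above (and leaving the Huffington Post’s 400 blog entries aside):

```python
# Figures as reported in the quoted Digiday comparison (2010).
newsrooms = {
    "New York Times": {"staff": 1100, "pieces_per_day": 350, "page_views_per_day": 17_400_000},
    "Huffington Post": {"staff": 532, "pieces_per_day": 1200, "page_views_per_day": 43_400_000},
}

for name, n in newsrooms.items():
    density = n["pieces_per_day"] / n["staff"]                      # pieces per staff member per day
    views_per_piece = n["page_views_per_day"] / n["pieces_per_day"]
    print(f"{name}: {density:.2f} pieces per person per day, "
          f"{views_per_piece:,.0f} page views per piece")
```

On these figures the Huffington Post produced roughly seven times more content per newsroom member than the New York Times, while attracting fewer page views per individual piece, which is one way of framing the qualitative questions above.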

One of the most pressing issues we confront when analysing the digital economy is a pronounced tendency towards oligopoly which gives the lie to an earlier generation’s utopian embrace of the Internet as a sphere of free competition and a driver of disintermediation. There are important lessons we can learn from platform studies about the reasons for this, concerning the architecture of platforms and the logic of their growth. But it’s important we don’t lose sight of how these dynamics are reliant upon existing legal and economic processes which predate the ‘digital revolution’. As Jonathan Taplin points out in Move Fast and Break Things, the competitive advantage of firms like Amazon relied upon a specific regulatory environment that was far from inevitable. From pg 79:

The economist Dean Baker has estimated that Amazon’s tax-free status amounted to a $20 billion tax savings to Bezos’s business. Baker notes, “In a state like New York, where combined state and local sales taxes average over 8.0 percent, Amazon could charge a price that was 1.0 percent below its brick and mortar competition, and still have an additional profit of 7 percent on everything it sold. That is a huge deal in an industry where profits are often just 2–3 percent of revenue.” Bezos, eager to preserve this subsidy, went to work in Washington, DC, and got Republican congressman Christopher Cox and Democratic senator Ron Wyden to author the Internet Tax Freedom Act. The bill passed and was signed by President Bill Clinton on October 21, 1998. Although not barring states from imposing sales taxes on ecommerce, it does prevent any government body from imposing Internet-specific taxes.
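As I read Baker’s arithmetic, the point can be rendered as a back-of-the-envelope calculation. The 8 per cent, 1 per cent and 2–3 per cent figures come from the passage above; the shelf price and the exact interpretation are my own illustrative assumptions:

```python
# Baker's figures from the quote: ~8% combined sales tax, Amazon pricing ~1%
# below the brick-and-mortar total, in a sector where margins run at 2-3%.
base_price = 100.00                         # illustrative brick-and-mortar shelf price
sales_tax = 0.08                            # combined state and local sales tax
bm_total = base_price * (1 + sales_tax)     # what the in-store customer actually pays

amazon_price = bm_total * 0.99              # 1% cheaper for the customer, tax-free
extra_margin = (amazon_price - base_price) / base_price

print(f"In-store total: {bm_total:.2f}, Amazon price: {amazon_price:.2f}")
print(f"Amazon's additional margin over the shelf price: {extra_margin:.1%}")
# Roughly 6.9%, i.e. Baker's 'additional profit of 7 percent', in an industry
# where profits are often just 2-3% of revenue.
```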

This is only one example. An adequate understanding of the digital economy requires that we identify the regulatory environments within which each category of tech firm operates and how this has contributed to their thriving or struggling. When we combine this institutional analysis with platform dynamics, we can begin to account for the level of market concentration which Taplin summarises on pg 119-120:

In antitrust law, an HHI score — according to the Herfindahl-Hirschman Index, a commonly accepted measure of market concentration — is calculated by squaring the market share of each firm competing in a given market and then adding the resulting numbers. The antitrust agencies generally consider markets in which the HHI is between 1,500 and 2,500 to be moderately concentrated; markets in which the HHI is in excess of 2,500 are highly concentrated. The HHI in the Internet search market is 7,402. Off the charts.
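The calculation Taplin describes is simple enough to sketch. The market shares below are hypothetical, chosen only to show how a single dominant firm drives the index towards the 7,402 he reports for Internet search:

```python
def hhi(market_shares_percent):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in %)."""
    return sum(share ** 2 for share in market_shares_percent)

# Hypothetical shares for a search-like market dominated by one firm (not real data).
search_like_market = [86, 7, 4, 2, 1]

print(hhi(search_like_market))   # 7466: far beyond the 2,500 'highly concentrated' threshold
print(hhi([25, 25, 25, 25]))     # 2500: four equal firms sit right at that threshold
```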

He goes on to argue on pg 121-122 that this situation helps generate a cash glut with serious systemic consequences:

The problem is that the enormous productivity of these companies, coupled with their oligopolistic pricing, generates a huge and growing surplus of cash that goes beyond the capacity of the economy to absorb through the normal channels of consumption and investment. This is why Apple has $150 billion in cash on its balance sheet and Google has $75 billion. These enterprises cannot find sufficient opportunities to reinvest their cash because there is already overcapacity in many areas and because they are so productive that they are not creating new jobs and finding new consumers who might buy their products. As former treasury secretary Lawrence Summers has put it, “Lack of demand creates lack of supply.” Instead of making investments that could create new jobs, firms are now using their cash to buy back stock, which only increases economic inequality.

In other words: the inequality which digital capitalism generates is only contingently a function of technology.

This is a question which Zeynep Tufekci recalls in her Twitter and Tear Gas: The Power and Fragility of Networked Protest, posed to a group of young Turkish activists about 140journos, a crowdsourced citizen journalism project which they started. As she writes on pg 37:

In Turkey, like much of the Mediterranean, there is a tradition of slow, conversational drinking that is the opposite of a loud, hurried bar scene. Such conversational drinking often leads to discussions of politics. The stereotype of these all-night drinking locales in Turkey is that everyone has a plan to “save the nation” after the first glass of raki, a strong aniseed-based drink that is considered the national liquor (it is nearly identical to ouzo, the Greek national drink). In a previous era, an all-night drinking and talking session on the sorry state of news and the extent of censorship might have ended merely in a hangover the next day. Even if it might have gone further—for example, the people might have decided to try to start a journal or a newspaper—a lot of work, resources, and luck would have been required. However, unlike citizens in a previous era for whom frustration with mass-media bias had engendered little more than sour feelings the next day or an uncertain, lengthy, journey, these young men—only four of them—immediately conceived 140journos, a crowdsourced, citizen journalism network on Twitter.

The low costs involved facilitate a particular culture of project work, comfortable with sometimes vague aspirations and working out the details on the fly. But while Tufekci’s interest in this concerns activism, I wonder about the effects in other spheres. What about higher education for instance? What Dave Beer describes as ‘punk sociology’ shares much of the mentality which Tufekci describes. 

Over the next few years, I’ll be working on a collaborative project on trans- and post-humanism, building on the Centre for Social Ontology’s previous Social Morphogenesis series. My main contribution to this will be co-editing a volume, Strangers in a Familiar Land, with Doug Porpora and Colin Wight as well as exploring digital technology and what it means for human agency. 

This project is giving me a reason to read more widely than I have in a while, with a particular focus likely to be Andy Clark’s work in the philosophy of mind, speculative realism and continental philosophy of technology. There’s a lot of value to be found in the latter but one persistent point which frustrates me is what appears, to me at least, to be a fundamental confusion about the category of the human. This issue became clear to me when reading a thought-provoking post on the Social Ecologies blog:

Why must everything revolve back to a human relation – for-us? This human exceptionalism resides throughout the gamut of philosophical reflection from Plato to Derrida. One will ask as Bradley does: Why, in other words, can something that believes itself to be a critique of anthropologism still be seen as essentially anthropocentric? Can we step outside this temple of man and create a non-anthropocentric discourse that doesn’t find itself reduced to this human relation by some backdoor slippage of conceptuality? Are we condemned to remain human? What or who is this creature that for so long has created a utopian world against its inhuman core? If we were to be released from this prison of the human who or what would emerge? How alien and alienated am I to what I am? How monstrous am I?

https://socialecologies.wordpress.com/2017/07/17/we-were-never-human/

Unless I’ve entirely misunderstood a literature I’m still relatively new to, ‘technicity’ is an abstraction from material culture. It’s an abstraction which serves a purpose, allowing us to isolate the technical so as to inquire into its character, but the empirical referents of the term are technological artefacts, i.e. a domain of material culture. In which case, it should not surprise us that the human constantly resurfaces, nor should we impute this tendency to a mysterious stickiness which ‘humanism’ as a doctrine possesses.

Material culture will always imply questions of the human because we are talking about artefacts built by, for, with and against human beings in social contexts which are similarly human saturated. The value in considering ‘technicity’ lies in opening out a space in which we can inquire into the emergent characteristics of the technical as a domain of material culture, considering the logic that guides it and how it can act back upon creators and the social contexts in which they create. But explaining material culture necessarily entails human-centred accounts, even if these have tended to problematically exclude or marginalise non-human elements. 

To suggest otherwise strikes me as straightforward mystification, circumscribing large domains of social life as outside analysis, rather than offering a meaningful competing ‘inhuman’ explanation. It seems like a clear example of what Andrew Sayer calls a ‘PoMo flip’: responding to a problematic dichotomy by inverting it, rather than seeking to transcend the conceptual structure that creates the problem. In this case it responds to the exclusion of non-human elements by seeking to exclude the human elements instead.

In the last few weeks, I’ve found myself using the term ‘playbook’ in a number of contexts. It’s typically defined as “a book containing a sports team’s strategies and plays, especially in American football” but I’m not quite sure where I picked up the phrase from as someone who hasn’t had much interest in sport for a long time. 

It’s been on my mind since reading Merchants of Doubt, an incisive historical exploration of a dangerous corporate tendency towards the deliberate cultivation of doubt in relation to pressing issues such as nuclear winter, acid rain, DDT and climate change. As I suggested in a post a couple of weeks ago, we can talk meaningfully of a ‘playbook for merchandising doubt’. In fact something akin to this was once explicitly published, as the authors of Merchants of Doubt summarise on pg 144-145:

Bad Science: A Resource Book was a how-to handbook for fact fighters. It contained over two hundred pages of snappy quotes and reprinted editorials, articles, and op-ed pieces that challenged the authority and integrity of science, building to a crescendo in the attack on the EPA’s work on secondhand smoke. It also included a list of experts with scientific credentials available to comment on any issue about which a think tank or corporation needed a negative sound bite. Bad Science was a virtual self-help book for regulated industries, and it began with a set of emphatic sound-bite-sized “MESSAGES”:

1. Too often science is manipulated to fulfill a political agenda.

 2. Government agencies … betray the public trust by violating principles of good science in a desire to achieve a political goal. 

3. No agency is more guilty of adjusting science to support preconceived public policy prescriptions than the Environmental Protection Agency. 

4. Public policy decisions that are based on bad science impose enormous economic costs on all aspects of society. 

5. Like many studies before it, EPA’s recent report concerning environmental tobacco smoke allows political objectives to guide scientific research. 

6. Proposals that seek to improve indoor air quality by singling out tobacco smoke only enable bad science to become a poor excuse for enacting new laws and jeopardizing individual liberties.

Has anyone encountered comparable documents to this? The scale and organisation of doubt merchandising surely means they have been produced. But perhaps there’s a broader category to be explored here: the explicit articulation of surreptitious tactics.

It highlights how coordination presupposes communication, suggesting that even the most duplicitous strategies of the powerful will tend to leave a paper trail. Where we see what appears to be organisation, even if the actors involved deny this, do we have reason to believe there may somewhere exist a ‘playbook’ or something akin to it? I would tentatively define this as the formal articulation of a tactical repertoire that can be drawn upon in informal contests, even if the definition of these elements may be obscured behind a thick veneer of technocratic distance. By ‘informal contests’ I mean those where rules are not defined nor a contest actually declared. The existence of a playbook reveals how advantages in organisational capacity might translate into a practical advantage in competition.

I’d be intrigued to know if these ruminations resonate with anyone, particularly those who might be able to furnish further examples.