I found this review of Trump and the Media by Nicholas Carr in the LA Review of Books immensely thought-provoking. His focus is on the book’s historical contribution, contextualising the enthusiasm with which social media was greeted in terms of long-standing concerns about the centralisation of mass media. We can’t understand the ideal of a radically decentralised media without understanding the anxieties provoked by its initial centralisation:

Trump’s twitter stream may be without precedent, but the controversy surrounding social media’s political impact has a history stretching back nearly a century. During the 1930s, the spread of mass media was accompanied by the rise of fascism. To many observers at the time, the former helped explain the latter. By consolidating control over news and other information, radio networks, movie studios, and publishing houses enabled a single voice to address and even command the multitudes. The very structure of mass media seemed to reflect and reinforce the political structure of the authoritarian state.

It is against this backdrop that social scientists began to “imagine a decentralized, multimedia communication network that would encourage the development of a ‘democratic personality,’ providing a bulwark against fascist movements and their charismatic leaders”. Fred Turner traces these initial speculations from their originators, through the 1960s counterculture and the incipient computer industry, before they became an article of faith within present-day Silicon Valley:

In the early years of this century, as the internet subsumed traditional media, the ideal became a pillar of Silicon Valley ideology. The founders of companies like Google and Facebook, Twitter and Reddit, promoted their networks as tools for overthrowing mass-media “gatekeepers” and giving individuals control over the exchange of information. They promised, as Turner writes, that social media would “allow us to present our authentic selves to one another” and connect those diverse selves into a more harmonious, pluralistic, and democratic society.

Carr frames Trump and the Media as “orbiting” around “the wreckage of techno-progressive orthodoxy”. These are the terms in which I’ve recently tried to analyse ‘fake news’ and ‘post-truth’, as solutionist framings by technological, media and political elites which circumscribe a much broader set of transformations and shape likely responses to them. It’s often struck me that these represent a peculiarly populist form of reasoning in their own right: isolating an incoming element which is seen to undermine a previously stable system, whether this is ‘populism’ or ‘social media’ itself. In the process, the claims of populists and social media firms are taken at face value, vastly inflating the power they have:

One contentious question is whether social media in general and Twitter in particular actually changed the outcome of the vote. Keith N. Hampton, of Michigan State University, finds “no evidence” that any of the widely acknowledged malignancies of social media, from fake news to filter bubbles, “worked in favor of a particular presidential candidate.” Drawing on exit polls, he shows that most demographic groups voted pretty much the same in 2016 as they had in the Obama-Romney race of 2012. The one group that exhibited a large and possibly decisive shift from the Democratic to the Republican candidate were white voters without college degrees. Yet these voters, surveys reveal, are also the least likely to spend a lot of time online or to be active on social media. It’s unfair to blame Twitter or Facebook for Trump’s victory, Hampton suggests, if the swing voters weren’t on Twitter or Facebook.

This is not to say that social media doesn’t exercise influence, only to dispute the assumption that it works through one-to-many communication. The media elites bemoaning the rise of fake news and filter bubbles in the dawning post-truth age are themselves complicit in the dynamic they see as being ‘out there’:

What Hampton overlooks are the indirect effects of social media, particularly its influence on press coverage and public attention. As the University of Oxford’s Josh Cowls and Ralph Schroeder write, Trump’s Twitter account may have been monitored by only a small portion of the public, but it was followed, religiously, by journalists, pundits, and policymakers. The novelty and frequent abrasiveness of the tweets — they broke all the rules of decorum for presidential campaigns — mesmerized the chattering class throughout the primaries and the general election campaign, fueling a frenzy of retweets, replies, and hashtags. Social media’s biggest echo chamber turned out to be the traditional media elite.

What this short review suggested to me is the necessity of revisiting basic concepts (such as centralisation, gatekeepers, publics and influence) in response to the wreckage of techno-progressive orthodoxy. We need a bleak social theory for bleak times, and if it doesn’t begin by examining the assumptions embedded in the core concepts we have inherited, as well as their implications for making sense of the present conjuncture, it is unlikely to get very far.

I saw a wonderful exhibition this weekend, collecting work by Alex Prager which combines photography and film in intricately staged hyper-real scenes. The collection that has been playing on my mind since seeing it is Face In The Crowd. If you click on the screenshot below, it will take you to the website where you can see the work:

[Screenshot: Alex Prager, Face In The Crowd]

The accompanying notes described how these are “dynamic tableaus where individual characters are presented in equally sharp focus, seemingly lost in their own internal conversations”. It reminds me of Hannah Starkey’s work in its fascination with how interiority plays out in social scenes, showing how private experience nonetheless has a public existence.

However, I found the staging of the scenes troubling, as much as I recognise the intention behind them. It feels like the relationality is washed out, as if collectivity is exhausted by the artefact of the social situation. There’s a strange emptiness between inner and outer, with interaction reduced to staging, such that the bonds of social life appear as little more than fragile constraints.

Each of these scenes is a collage of individuals rather than a collective, creating images which are sociological in their intention but not in their enactment. Individuals are either lost in the reality of their own lives or looking forlornly through the artifice of shared reality, as is the case with the red-haired woman in the image above. It foregrounds that artifice but also inflates it, losing track of how it functions as a collective tissue which knits together individual lives in the mundane interactions throughout the day.

That artifice is scaffolding which often fades into the background, facilitating the relationality which is lost in these scenes. It is a deliberately stilted vision of the social, hugely successful in its staging and producing an aesthetic which I find immensely unsettling.

Why do people do what they do? It is a question at the heart of the human sciences but it is also one we ask in everyday life. However, the way we ask it often tracks our prior feelings towards the people we ask it of. For instance, as Jana Bacevic has argued, many fail to grasp the agency of managers and consultants within the ‘neoliberal university’ and in doing so misdiagnose both the intentions of these managers and the system their actions are contributing to.

When people seem to embody systemic tendencies we are critical of, our criticism inevitably slides into a disapproval of the people themselves. They are reduced to vectors of these organisational processes, revealing a quotidian Althusserianism which is important to understand if we want to grasp how organisations and systems are imagined by participants within them.

There was an example I came across earlier today which made me think about the role of distance in this process. This is an observation which Richard Brooks makes on pg 14 of his Bean Counters: The Triumph of the Accountants and How They Broke Capitalism:

Few arrive with much sense of vocation or a passion for rooting out financial irregularity and making capitalism safe. They are motivated by good income prospects even for moderate performers, plus maybe a vague interest in the world of business. Many want to keep their options open, noticing the prevalence of qualified accountants at the top of the corporate world; one quarter of chief executives of the FTSE100 largest UK companies are chartered accountants.

The exercise of agency here is provisional and tentative. Rather than rapacious instrumentalists concerned only with maximising their income, we find people keen to keep their options open and seeking agreeable outcomes without hemming themselves in. It is similar to the claim made by Kevin Roose in his superb account of the everyday lives of young financiers on Wall Street:

As strange as it sounds, a big paycheck may not in fact be central to Wall Street’s allure for a certain cohort of young people. This possibility was explained to me several weeks before my Penn trip by a second-year Goldman Sachs analyst, who stopped me short when I posited that college students flock to Wall Street in order to cash in. “Money is part of it,” he said. “But mostly, they do it because it’s easy.” He proceeded to explain that by coming onto campus to recruit, by blitzing students with information and making the application process as simple as dropping a résumé into a box, by following up relentlessly and promising to inform applicants about job offers in the fall of their senior year—months before firms in most other industries—Wall Street banks had made themselves the obvious destinations for students at top-tier colleges who are confused about their careers, don’t want to lock themselves in to a narrow preprofessional track by going to law or medical school, and are looking to put off the big decisions for two years while they figure things out. Banks, in other words, have become extremely skilled at appealing to the anxieties of overachieving young people and inserting themselves as the solution to those worries. And the irony is that although we think of Wall Street as a risk-loving business, the recruiting process often appeals most to the terrified and insecure.

I’m not suggesting we should take people’s accounts of why they do what they do at face value. If we did, the space to be critical of power and hierarchy would soon collapse in the face of an endless succession of people with apparently good intentions of varying degrees of systematicity. But it does seem important that we also avoid taking our attributions of agency at face value, interrogating what we are imputing to people and the reasons why we might be imputing them.

I spent much of the recent Accelerated Academy talking about the limitations of the fast/slow dichotomy and my concern that the framing of our series entrenches it. To talk of the ‘accelerated academy’ implies there was once a slow(er) academy and hints that the pathologies we currently face could be overcome by reclaiming what has been lost. It is an account which invites us towards nostalgia, imagining a past which we seek to recover rather than analysing the potential for change latent within our present circumstances. In fact, between myself and Filip, the fast/slow dichotomy was trashed so thoroughly that a few people seemed apologetic when they mentioned it with anything other than condemnation.

So should we dispense with dichotomies entirely? Barbara Adam offered a qualified defence of them, recognising their limitations but insisting on their value as tools to think with. This resonated with me a lot, as someone prone to finding dichotomies in my own thinking yet continually struggling against them. Dichotomies anchor a terrain, laying out a space in a way which helps us locate ourselves within it. But they only provide a rough sketch of that space, leaving us disorientated if we retain them as our sole reference points rather than elucidating the territory and exploring its topography.

The problem with dichotomies is not so much their appearance as their persistence, their tendency to prove sticky and our ensuing difficulty in dispensing with them once they have served their original purpose. We shouldn’t banish dichotomies, as much as refuse to take them seriously past a certain point. They can be useful conversation starters and sharpening blocks for our conceptual tools. But if we mistakenly take them as a primary focus then they can fatally undermine our capacity to make sense of a world inevitably more complex than a simplistic opposition can possibly capture.

In the conclusion to their Envisioning Sociology, John Scott and Ray Bromley reflect on how the project of Patrick Geddes and the sociologists around him came to be forgotten, in spite of the influence they exercised in their own time. This lost tradition of classical British social theory was an energetic and multifaceted engagement with the changing world around them, drawn together in a powerful vision of a sociological movement which sought to reconstruct this world.

How this project failed and how they came to be forgotten within the discipline is a complex story. But one particularly interesting aspect is how the intellectual charisma of Geddes himself might have contributed to this, imbuing the emergent movement with characteristics which lent it dynamism in its own time but failed to equip it to reproduce itself in subsequent generations. From loc 4554-4569:

The circle was organized around Patrick Geddes as its inspirational and charismatic leader. This was clearly one of its strengths, as it provided the core set of ideas that went largely unchallenged among his followers. This structure was also, however, a source of weakness. Geddes’s charisma as a teacher attracted those who were seeking an answer to fundamental questions. His synoptic vision and the apparent completion of his theoretical system tended to ensure that his followers were immediately and absolutely committed to furthering his work. They believed they had discovered “the truth” and so felt an almost religious obligation to bring this truth to those who had not yet encountered it. They became disciples with a commitment to proselytize on behalf of the master and to take his words to the ignorant masses. As convinced believers, they felt that it was necessary only to bring these ideas to the attention of others for them to recognize and accept their truth. Argument and persuasion were felt to be unnecessary, given the “obviousness” of the ideas once stated. Hence, they emphasized didactic education rather than persuasive discussion. The members of the circle therefore felt no real need to enter into proper dialogue with advocates of other positions. Their absolute certainty—often perceived as arrogance—was viewed with suspicion by their intellectual rivals, who simply ignored what they had to say. Other sociologists felt alienated from the Geddes circle and refused to cooperate in any venture that they thought might be a mere pretense at cooperation designed to impose the Geddes viewpoint. Excluded from expanded professional activities, the Geddes circle became increasingly inward looking. Its members tended to overpromote the work of very minor members of the group, further undermining their credibility in the eyes of others.

I find it hard not to see echoes of these tendencies in critical realism. There’s a much broader lesson here about the dangers of intellectual leadership, as the characteristics which lead ‘schools’ to form can in turn undermine the longevity of their ideas. I’ve long been drawn to the idea of a social life of theory which would unify the conceptual evaluation of theoretical ideas and their sociological explanation as cultural forms. These are two sides of the same coin, and going back to the lost traditions, examining the failed projects which once promised so much, helps us look at the contemporary landscape of social theory in a new way.


The network scientist Emmanuel Lazega studies collegiality and bureaucracy as ideal-typical forms of social organisation which co-exist in a fluctuating balance within organisations. Collegiality involves actors recognising each other as autonomous, existing in relationship to each other and necessitating consensus as a preliminary for what will always be non-routine action. Bureaucracy merely requires interaction, being organised around hierarchy and impersonal relationships, and operating through routine action.

As I understand Lazega’s outlook, these modes of organisation always exist in tension because collegiality is a threat to bureaucracy: the formation of collectivity between autonomous actors intrinsically carries the possibility of solidarity and subversion. Otherwise bureaucratic organisations rely on residual collegiality, often organised into what Lazega describes as ‘pockets’, in order to perform non-routine tasks which necessitate creativity. However, bureaucracy remains suspicious of collegiality, seeking to minimise both its overall role and the autonomous character of the collegial coordination which remains necessary within the organisation.

The ethnography of Dreamfields academy undertaken by Christy Kulz in her Factories for Learning offers a vivid account of strategies which bureaucracy adopts in its war against collegiality. From loc 1221:

A staple in most schools, the omission of a staff room was another design decision described by SMT members as a positive move to prevent factionalism and increase productivity. Mr Vine feels staff rooms are places ‘where staff go and hide out and try to avoid students’ and are ‘a breeding ground for negativity … where people get together and talk about others or moan’. Mr Davis thinks the lack of a staff room fits ‘the businesslike nature of the school’. Administrator Mr Fields feels private-sector businesses and Dreamfields share a similar work ethic: 

“There is no doubt that people at the school work very hard … it’s not a question of, well, you come here and you can relax for the first hour and have a cup of tea and have a long lunch break, which I think is probably still the case in some local authorities, but here people do work really hard.”

Eradicating the staff room symbolically severs Dreamfields from the perception that local authorities are unproductive spaces in comparison to private businesses, responding to narratives of public-sector failure. Staff taking a break or talking to one another are framed as troublesome activities eliminated by preventing congregation.

The teachers are only too aware of how this prevents them gathering together. As one describes, it is “very clever that we don’t have a staff room ’cause it means that people work harder then, and they can moan, but they moan less because there are not so many people gathered together, moaning together” (loc 1241). This ‘moaning together’ might otherwise be the coalescence of collectivity from which a challenge to the bureaucratic organisation of the school might ensue. The headteacher describes a similar concern to break up collectivities of children: “We do not have groups of more than six or seven congregating together. If you see large groups of children, you need to break them up so they do not cause silliness and mayhem” (loc 1241). They even break up such congregations outside the school grounds. Such ‘silliness and mayhem’ is precisely what bureaucracy fears in collegiality and why it seeks to stamp it out.

The Concept Lab would meet on a weekly basis, usually for an hour unless there was logistical business to be undertaken concerning the future of the lab. Each meeting would revolve around a presentation from one member, detailing one of the following:

  • A practical problem they have faced in their research, as well as a single concept they have turned to in order to resolve or at least better understand the problem in question. The focus would be on the actual or hoped-for application of the concept in the research process.
  • A new concept which they have encountered, to be introduced and placed in an intellectual context. If there is no immediate practical application for this concept, the onus would be on accounting for the enthusiasm the concept provokes in them. Why is this felt to be important? What might it bring to research practice at a later stage?
  • A new concept which they have developed, to be introduced and contextualised in a similar manner to the above. The focus would be on the claimed novelty of the concept, the circumstances in which it was developed and the potential uses to which it could be put.

The purpose of the Concept Lab would be to provide a forum in which participants account for their work with concepts, as well as facilitating the generation of a language within which to describe and analyse this work across intellectual and disciplinary boundaries. For this reason, it would be important for the pool of participants to be intellectually diverse and stable in its constitution. While allowing for the inevitable exigencies of working life, participants would be expected, where possible, to attend all sessions for an agreed period of time. If the format was successful, participants would benefit as much from presentations by others as from the opportunity to present themselves.

This is a wonderful expression I just picked up from Machine, Platform, Crowd by Andrew McAfee and Erik Brynjolfsson. As they describe on pg 112-113, suitcase words jumble together multiple meanings in a way which renders questions more obscure than they would otherwise be:

Is generative-design software really “creative?” It’s a hard question because creativity is a prime example of what AI pioneer Marvin Minsky called a “suitcase word.” As he put it, “Most words we use to describe our minds (like ‘consciousness,’ ‘learning,’ or ‘memory’) are suitcase-like jumbles of different ideas.” We see just such a jumble in different definitions of creativity. The Oxford English Dictionary, for example, states that creativity is “the use of imagination or original ideas, especially in the production of an artistic work.”

In a lecture today I argued that our debates about the meaning of the human are prone to this, relying on contested terms without properly defining them. It’s when we confront suitcase words that social ontology becomes invaluable, offering us techniques for unpacking these terms and ensuring the debate proceeds in terms of the contents of the suitcase rather than the suitcase itself. If we are clear about this purpose then it invites us to undertake ontological reasoning in a focused way, orientated towards the clarification of questions through the delineation of entities and characteristics.

We often think of self-narrative as something self-grounding, reflecting the truth of a person even if that truth might change over the life course. If we take issue with this, we turn to the bare objective facts of someone’s life as a counterpoint to the unreliably subjective stories they tell. This oscillation misses the important relationship between the two: subjective and objective weave together to form the person we are, as a continuous outcome of the life we are leading. As Ann Oakley writes on loc 3628 of Father and Daughter:

The stories of our lives have to be condensed and elliptical because otherwise they’re boring and the central themes get lost. We must convince ourselves as well as others of logic, linearity, evolutionary progress: it was like this, it must have been, because it’s such a good story.

These narrative imperatives reflect a gap between the reality of our lives and the stories we tell about them. They only emerge because there is a steady accumulation of potential facts, a piling up of individual elements which could be ordered in many different ways. What interests me is narrative as an interface, the point at which we struggle to make sense of who we are (and have been) through the accounts we give of ourselves, internally and to external others. 

The first of what seems likely to be many books about the June 2017 general election was released earlier this week. Betting the House, by Tim Ross and Tom McTague, tells the story of the election through contrasting accounts of the Conservative and Labour campaigns. There’s much more detail about the former, seemingly reflecting both the interests of the authors and the potential for access to political figures keen to settle scores after being involved in such a car crash.

One of the most interesting things is how clearly the failures of political research methods (viz. polling and modelling) played a role in the outcome. On loc 1857 they describe how the speed with which sentiment changes within the electorate undermines expectations formed on the basis of political polling:

Pollsters take their raw research findings and then ‘weight’ them against the likelihood of different respondents to turn out on polling day and vote the way they say they will. It was these calculations that led to variations between the pollsters during the campaign. One of the key questions was how likely they believed young people who said they would vote Labour were to make it to a polling station and cast a ballot on 8 June. Then there was the speed of the changes in support for Corbyn. It was a fast-moving phenomenon which took off at a time when campaigning had to be stopped twice because of terrorism. The sharp tightening in the polls came largely after the launch of the Tory manifesto, which proved to be one of the most influential policy documents in recent election history. The effect of that manifesto, and the U-turn that followed, on the Conservative lead was like a gust of wind on a house of cards.
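
To make the weighting step concrete, here is a minimal sketch in Python. All numbers are invented for illustration, and real pollsters use far more elaborate demographic adjustments; the point is simply that the headline figure is sensitive to the turnout probability assigned to each group, which is exactly where the pollsters diverged.

```python
# Minimal sketch of turnout weighting; all figures are invented for illustration.

def headline_share(support, sample_share, turnout):
    """Weight each group's stated support by its share of the sample and its
    assumed probability of actually voting on polling day."""
    votes = sum(sample_share[g] * turnout[g] * support[g] for g in support)
    voters = sum(sample_share[g] * turnout[g] for g in support)
    return votes / voters

# Stated support for one party, by age group (invented numbers).
support = {"18-24": 0.65, "25-64": 0.40, "65+": 0.25}
sample_share = {"18-24": 0.12, "25-64": 0.63, "65+": 0.25}

# Two pollsters who differ only in how likely they think young people are to vote.
cautious = {"18-24": 0.40, "25-64": 0.65, "65+": 0.80}
generous = {"18-24": 0.65, "25-64": 0.65, "65+": 0.80}

print(f"cautious turnout model: {headline_share(support, sample_share, cautious):.1%}")
print(f"generous turnout model: {headline_share(support, sample_share, generous):.1%}")
```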

On loc 1874 they give an overview of the modelling activity which the parties engaged in, as well as the data sources upon which they drew to undertake this:

This is where the mysterious wizardry known as ‘modelling’ comes in. There is a wealth of information available to political parties – some free, much paid for – which they can use to identify the swing voters they are most likely to win over. These are the voters who decide elections. They may have voted a different way at the last election but ‘modelling’ helps parties understand which arguments will be most likely to persuade them. The exercise is highly complex, meaning it can go wrong. All mainstream parties model their potential voter types; it is simply sensible market research. They use a combination of data from Facebook, the electoral roll and the credit checking agency Experian. This information reveals what specific voters in specific target seats are like. Reliable polling evidence, broken down by age, social class, gender, education level and other factors, can then suggest how people with particular characteristics might vote.
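
For readers curious what this ‘modelling’ looks like mechanically, here is a toy sketch on entirely synthetic data. The features, their relationship to past vote and the example voter are all made up, and the use of a simple logistic regression is my assumption rather than anything the book specifies; what matters is the shape of the exercise: fit a model on recorded past behaviour, then score individuals for targeting.

```python
# Toy sketch of voter propensity modelling, on entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical attributes standing in for electoral roll / commercial data:
# age, years of education, homeowner flag.
X = np.column_stack([
    rng.integers(18, 90, n),
    rng.integers(9, 21, n),
    rng.integers(0, 2, n),
])

# Synthetic 'voted for party A last time' labels from an arbitrary made-up rule.
logits = 0.03 * (X[:, 0] - 50) - 0.10 * (X[:, 1] - 14) + 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# Score a hypothetical swing voter in a target seat.
print(model.predict_proba([[44, 12, 1]])[0, 1])
```

The fragility enters at the final step: the scores are only as good as the assumption that past behaviour remains a stable guide to future behaviour, which is precisely what the next passage questions.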

The underlying assumption here is that past behaviour can be an adequate basis upon which to make predictions about future actions. In many cases it can be, in spite of the many extrinsic factors which can impinge upon the replication of the past. However, if we are seeing an increase in the speed at which changes in the electorate occur, then modelling will represent a decreasingly reliable basis upon which to assign limited resources within a campaign. This is something which, as the authors point out on loc 1899, the tactics of Momentum were not vulnerable to:

The one organisation which did not use micro-targeting was Momentum. They simply bombarded their target seats with activists in the hope of persuading people to vote for Corbyn. In doing so, they knocked on doors none of the main parties were bothering with. Even internal Tory modelling experts now question the value of what they were doing. ‘Maybe Corbyn’s plan to build a big groundswell of support, ignoring the seat-by-seat numbers etc., is the right way to go,’ says one. ‘How do you ever factor in for that? This is what happened with Trump, this is what happened with Brexit. People voted who you never expected to vote. How do you work out a way to tackle that?’

This is a complex issue which I’ve barely scratched the surface of. The core question is straightforward though: are some political tactics more adaptable to intensified social change than others? What does this mean for broader questions of political strategy?

My relationship with the work of Zygmunt Bauman, Anthony Giddens, Richard Sennett and Ulrich Beck has been a complicated one. Discovering their work as an intellectually frustrated philosophy student led me to move sideways into a sociology department rather than starting a PhD in political philosophy. Their approach excited me, opening up the possibility that we could talk about the contemporary age in a way which captures the most intimate aspects of personal experience and connects them to sweeping accounts of historical change. However, the further I went into sociology, the more sceptical I became about the capturing and the connecting these accounts claimed to do.

They have little empirical grounding in their own right, painting complex processes in seductively broad brush strokes despite their pose of being attuned to the bleeding edge of social change. Much of my eventual PhD was animated by a conviction that the framework of ‘late modernity’ often doesn’t help social analysis, and sometimes even hinders it, offering a series of intoxicating motifs for rendering empirical findings in thematic terms rather than any practical conceptual instruments for analysing them. This entire body of work has been persuasively diagnosed by Mike Savage as epochal theorising:

The social sciences, and especially sociology, abound with epochalist thinking (see generally Savage 2009). We are seen to have moved, variously, to a globalised, post-modern, neo-liberal, informationalised, cosmopolitan, (and so forth) world order. Such thinking saturates debates about social change and incites an almost constant agitation for detecting new kinds of epochal change and transformation which makes our contemporary times different from anything that comes before.

https://stratificationandculture.wordpress.com/2014/07/01/sociological-ruminations-on-piketty/

In an earlier article, Savage and Burrows describe this as a “kind of sociology which does not seek to define its expertise in terms of its empirical research skills, but in terms of its ability to provide an overview of a kind that is not intended to be tested by empirical research”.

The manner in which these accounts capture the intellectual attention space, at least under the peculiar epistemic condition of the accelerated academy, renders them much more problematic than would otherwise be the case. These bodies of work become crucial intellectual reference points which enjoy an influence vastly exceeding their intellectual merit: the relatively recent Liquid Modernity, for example, has received 11,000+ citations, over four times more than the much older and vastly superior Legislators and Interpreters. They exercise a gravitational pull over the field of empirical research, even when they’re remarkably ill-suited for this purpose.

But perhaps I’ve been too harsh. In this paper Simon Susen makes a casual remark that “One could hardly think of a more ambitious and timely challenge than the task of accounting for the distinctiveness of the contemporary age”. I realise that I agree with him, even if I continue to take issue with many of the attempts that have been made to do this. We deserve better accounts of the distinctiveness of the contemporary age, even if the conditions within which we work make it difficult to develop them.

From pg 27 of Peter Sloterdijk’s The Art of Philosophy. 

Witnesses report that Socrates had the habit of “sinking” into thought, as if thinking involved a kind of trance or obsessive daydream. According to Xenophon, Socrates saw this as “concentrating the mind on itself” by breaking off contact with his environment and becoming “deaf to the most insistent address.” Once, during a military camp to which he was called up as part of his duty as an Athenian citizen, he is supposed to have stood still on the spot for twenty-four hours. All the while, he was lost in the inner activity that people around him regarded as ridiculous yet amazing, and perhaps even numinous.

What is a game? A standard definition is “a form of competitive activity or sport played according to rules” and this has been the working conception when I’ve encountered theoretical engagements with the notion of a game. But a recent symposium on eSports left me reflecting on how much more complex the ontology of games is when we consider contemporary video games, raising the question of whether digital games, particularly those played online, are something entirely different from their analogue predecessors.

Consider how a game like poker has developed over time. This family of card games has a contested history, with many potential predecessors being claimed. It also has many variants, with rules that are stabilised through a range of artefacts, from ‘how to’ guides through to cultural representations and rule books in tournaments. As much as these artefacts exercise a normative influence over how poker is played, its predominant mode of transmission is interpersonal, with changes in the game liable to be piecemeal and to take place over long periods of time. In contrast, the rules of online digital games can be changed at a moment’s notice, with these changes being an important vector through which the relationship between the developer and the users unfolds. Every game has an infrastructure that supports it, even if it is as minimal as conversations that have previously taken place between different groups that play the game. But the infrastructure of digital games played online allows for granular analysis of game events and immediate modification of the game rules. These modifications might impede the reproduction of the game, for instance if too many rule changes alienate players, but the capacity to make them is something new and interesting.
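
A minimal sketch of this contrast, under deliberately simplified and hypothetical assumptions: where poker’s rules live in books and shared habit, an online game’s rules can live server-side as data, so a developer’s change applies to every subsequent play, and every game event can be logged for granular analysis.

```python
import json

# Hypothetical rule parameters held server-side as data, rather than fixed in
# the shipped client: this is what allows changes 'at a moment's notice'.
RULES = {"damage_multiplier": 1.0}

def patch_rules(**changes):
    """The developer pushes a rule change; all subsequent events use it."""
    RULES.update(changes)

def resolve_attack(attacker_power, defender_armour):
    damage = max(attacker_power * RULES["damage_multiplier"] - defender_armour, 0)
    print(json.dumps({"type": "attack", "damage": damage}))  # granular telemetry
    return damage

resolve_attack(10, 3)               # resolves under the original rules
patch_rules(damage_multiplier=0.8)  # an immediate, centralised rule change
resolve_attack(10, 3)               # the same action now resolves differently
```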

There are also differences at the level of the virtual structure of the game: the latent order through which events unfold, driven by the rules of the game but producing patterns which inevitably exceed what could be predicted from those rules alone. The complexity of digital games vastly exceeds that of analogue games, perhaps in a way which makes it impossible to render them formalistically in terms of branching probabilities. This isn’t always the case, particularly with older games which aren’t multiplayer. For instance, I find it difficult to understand how something like this speed run of Super Mario 3 is possible unless there is, in principle, a ‘correct’ move to make at every point in the process, even if it doesn’t involve adherence to the formal rules of the game:

But more complex games, particularly those in which many players compete online, would seem to be a different phenomenon altogether. However, is the challenge this poses ontological or epistemological? Is there no underlying (virtual) structure, or is it simply too complex to be mapped? I find the former claim untenable because, in principle, it seems obvious to me that any particular instance of the game could be analysed, with sufficient data, in order to explain why it unfolded in the way it did. This presupposes a structure in relation to which those outcomes become explicable. In which case, the problem is epistemic, and perhaps suggests that other methods, perhaps data scientific ones, might be necessary. With enough data, could the contours of such a virtual game structure be fallibly traced out, even if it resists analysis through other means?
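
To make this concrete, here is a toy illustration of what rendering a game ‘formalistically in terms of branching probabilities’ might involve: exhaustively enumerating every possible play-through of a trivial, invented game. The enumeration is easy below and combinatorially hopeless for a complex online multiplayer game, which is why I read the problem as epistemic rather than ontological.

```python
from itertools import product

# A trivial hypothetical game: two possible moves per turn, over three turns.
MOVES = ("left", "right")
DEPTH = 3

def enumerate_histories(depth):
    """Every possible sequence of moves: the game's complete branching structure."""
    return list(product(MOVES, repeat=depth))

histories = enumerate_histories(DEPTH)
print(len(histories))  # 2**3 = 8 complete play-throughs

# Under uniform random play, every history is equally probable.
for h in histories:
    print(" -> ".join(h), f"(p = {1 / len(histories):.3f})")
```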

One of the key points of disagreement between Object-Orientated Philosophy (OOP) and Critical Realism (CR) rests on the epistemic status of the object. While OOP and CR are in agreement that, as Harman puts it on pg 2-3 of his Immaterialism, objects should be treated as a “surplus exceeding its relations, qualities, and actions”, CR takes a more optimistic view of the epistemological challenge posed by this surplus.

The key issue concerns the potentiality of objects. From Harman’s perspective, CR’s concern for causal power still constitutes a form of reduction. It’s an improvement on reducing objects to their effects. But, as he writes on pg 52, it’s still reducing objects to their potential effects:

Yet this purported advance still assumes that at the end of the day, nothing matters aside from what sort of impact a thing has or might eventually have on its surrounding. This risks obscuring our view of objects in a number of ways, which not only poses an ontological problem, but has methodological consequences as well.

I maintain that some of these methodological consequences can be avoided through a sophisticated account of how those causal powers are activated. In this way, the category of ‘effects an object might have in future’ always involves reference to a variable context, raising the issue of how the features of an object and the features of a context combine to produce effects.

I’m nonetheless taking his challenge seriously. I’d earlier seen his account of objects as unduly pessimistic on an epistemic level: underestimating our capacity for knowledge of the parts, their relational organisation, their ensuing qualities, their ensuing powers and how these might be expressed in different contexts. But I increasingly realise that the CR formulation I’m so used to using, ‘properties and powers’, reflects a much clearer understanding of the properties than the powers. I think the former is often subordinated to the latter, such that properties are those features of objects we invoke in order to explain their causal powers. There’s a depth to the ‘surplus’ of objects which I realise I hadn’t previously grasped, even if I’m still not entirely certain about Harman’s account of it.

Reading Immaterialism by Graham Harman, I’m struck by the overlap between his account of ‘duomining’ and Margaret Archer’s critique of conflation. As he writes on pg 27-28,

“If we reduce an object downward to its pieces, we cannot explain emergence; if we reduce it upwards to its effects, we cannot explain change.”

While Archer’s argument is made in terms of the structure/agency problem, it can easily be recast in terms of structure alone. If we reduce social structure to the individuals who comprise it (alongside other material elements, which Archer is less sensitive to), we cannot explain how certain arrangements of people and things assume characteristics which the same ‘pieces’ lack in other arrangements (upwards conflation). If we focus solely on the effects of social structure, identifying how it constrains and enables individuals, we cannot explain how that structure might itself undergo change because it is the only causal power we admit (downwards conflation).

However, this is only an overlap, as Archer’s and Harman’s arguments about modes of reduction are made for different reasons and later diverge. Archer is concerned with the analytical temptations which inhere in the structure/agency problem that social science invariably confronts, even when it attempts to suppress it through various means. In contrast, Harman is concerned with ‘undermining’ and ‘overmining’ as two fundamental forms of knowledge which cannot be avoided: “what a thing is made of” (undermining) and “what a thing does” (overmining) (pg 28). Archer is concerned with a denial of relationality, as well as its temporal unfolding, with downwards and upwards conflation charged with suppressing the interplay over time between the different kinds of entities which make up the social world. Harman is concerned with the denial of objects as such: reducing their reality to their parts and their effects loses a grip on the entity which is composed of these parts and capable of these effects without being reducible to either.

Both approaches explore a tension between the analytical and the ontological. Harman’s notion of overmining, which I found much less straightforward to grasp than his notion of undermining, identifies its roots in the tendency to treat objects as mysterious and unknowable in themselves. An ontological claim licenses an analytical one, as the analyst focuses upon the effects of objects as something epistemically tractable in contrast to the objects themselves. Even if they continue to recognise the reality of the object, it is a notional recognition which doesn’t enter into their analysis. This is something Harman addresses explicitly on pg 28:

After all, any claim that a thing is convertible into knowledge cannot account for the obvious and permanent difference between a thing and knowledge of it: if we had perfect mathematised knowledge of a dog, this knowledge would still not be a dog. It will be said that this is a “straw man” argument, since philosophers are obviously aware that knowledge is different from its object. Yet it is not a question of whether philosophers are personally “aware” of this, but of whether their philosophies sufficiently account for it.

To which we might add: ‘and whether they incline social scientists drawing on their ideas to factor this awareness into their explanations’. This interface between the ontological and the analytical is one that has long fascinated me: how does theory constrain and enable the explanations which enter into social inquiry? What other forms of ‘conceptual slippage’ can we identify as ontological claims contribute to social analysis?

In Immaterialism, Graham Harman offers a provocative critique of Latour’s social theory, praising Actor-Network Theory as “the most important philosophical method to emerge since phenomenology in 1900” (pg. 1) while also regarding its account of objects as philosophically deficient. While he accepts the ANT thesis that objects mediate human relations, something which chips away at the pervasive anthropocentrism of social theory, it nonetheless reinforces a human-centric world view in a subtle and interesting way. From pg 6:

To say that objects mediate relations is to make the crucial point that unlike herds of animals, human society is massively stabilized by such nonhuman objects as brick walls, barbed wire, wedding rings, ranks, titles, coins, clothing, tattoos, medallions, and diplomas (Latour 1996). What this still misses is that the vast majority of relations in the universe do not involve human beings, those obscure inhabitants of an average-sized planet near a middling sun, one of 100 billion stars near the fringe of an undistinguished galaxy among at least 100 billion others.

The commitment of ANT to defining actors through actions, itself understood in terms of effects on other actors, “allows objects no surplus of reality beyond whatever they modify, transform, perturb, or create” (pg. 10). Without this surplus, Harman questions how it can be possible for them to change. It is only when we recognise “an object is more than its components” and “less than its current actions” that its capacity to do otherwise becomes conceivable (pg. 11). Exactly what the surplus is, as well as how it underwrites this potentiality, might vary. As Harman notes of himself on pg 11:

The author Harman who currently types these words in the University of Florida Library while wearing a black sweater is far too specific to be the Harman who will leave Florida next Sunday and can remove the sweater whenever he pleases.

These features of the object which aren’t exhausted in its present actions are what account for its future capacities. If my specificity were exhausted in my writing of this blog post, it would become mysterious how I cooked dinner or planned a trip earlier. There are the facts of these other actions, but I myself, as a unifying nexus in which these properties and powers converge, become emptied out into a frantic existence of constant process.

I couldn’t agree more with Harman’s claim that every object should be considered “as a surplus exceeding its relations, qualities, and actions” (pg. 3-4). Where I part company is with his epistemic pessimism. From pg 17-18:

And whereas naive realism thinks that reality exists outside the mind and we can know it, object-orientated realism holds that reality exists outside the mind and we cannot know it. Therefore, we gain access to it only by indirect, allusive, or vicarious means. Nor does reality exist only “outside the mind,” as if humans were the only entities with an outside. Instead, reality exists as a surplus even beyond the causal interactions of dust and raindrops, never fully expressed in the world of inanimate relations any more than in the human sphere.

This leaves me preoccupied by variance. My issue is not with the claim itself, as much as with it being framed in a way which makes it hard to unpack how this might vary between objects and contexts. How much surplus remains when we consider a given action? It depends on the action, the actor and the context. I don’t for a second believe this can be reduced to calculus but I nonetheless maintain there are differences of degree. I’m not convinced that the surplus of objects is quite as epistemically intractable as Harman makes it sound.

Over the next few years, I’ll be working on a collaborative project on trans- and post-humanism, building on the Centre for Social Ontology’s previous Social Morphogenesis series. My main contribution to this will be co-editing a volume, Strangers in a Familiar Land, with Doug Porpora and Colin Wight as well as exploring digital technology and what it means for human agency. 

This project is giving me a reason to read more widely than I have in a while, with a particular focus likely to be Andy Clark’s work in the philosophy of mind, speculative realism and the continental philosophy of technology. There’s a lot of value to be found in the latter, but one persistent point which frustrates me is what appears, to me at least, to be a fundamental confusion about the category of the human. This issue became clear to me when reading a thought-provoking blog post on Social Ecologies:

Why must everything revolve back to a human relation – for-us? This human exceptionalism resides throughout the gamut of philosophical reflection from Plato to Derrida. One will ask as Bradley does: Why, in other words, can something that believes itself to be a critique of anthropologism still be seen as essentially anthropocentric? Can we step outside this temple of man and create a non-anthropocentric discourse that doesn’t find itself reduced to this human relation by some backdoor slippage of conceptuality? Are we condemned to remain human? What or who is this creature that for so long has created a utopian world against its inhuman core? If we were to be released from this prison of the human who or what would emerge? How alien and alienated am I to what I am? How monstrous am I?

https://socialecologies.wordpress.com/2017/07/17/we-were-never-human/

Unless I’ve entirely misunderstood a literature I’m still relatively new to, ‘technicity’ is an abstraction from material culture. It’s an abstraction which serves a purpose, allowing us to isolate the technical so as to inquire into its character, but the empirical referents of the term are technological artefacts, i.e. a domain of material culture. In which case, it should not surprise us that the human constantly resurfaces, nor should we impute this tendency to a mysterious stickiness which ‘humanism’ as a doctrine possesses.

Material culture will always imply questions of the human because we are talking about artefacts built by, for, with and against human beings, in social contexts which are similarly human-saturated. The value in considering ‘technicity’ lies in opening out a space in which we can inquire into the emergent characteristics of the technical as a domain of material culture, considering the logic that guides it and how it can act back upon its creators and the social contexts in which they create. But explaining material culture necessarily entails human-centred accounts, even if these have tended to problematically exclude or marginalise non-human elements.

To suggest otherwise strikes me as straightforward mystification, circumscribing large domains of social life as outside analysis rather than offering a meaningful competing ‘inhuman’ explanation. It seems like a clear example of what Andrew Sayer calls a ‘PoMo flip’: responding to a problematic dichotomy by inverting it, rather than seeking to transcend the conceptual structure that creates the problem. In this case, that means responding to an exclusion of non-human elements by seeking to exclude the human elements instead.