What is a game? A standard definition is “a form of competitive activity or sport played according to rules” and this has been the working conception when I’ve encountered theoretical engagements with the notion of a game. But a recent symposium on eSports left me reflecting on how much more complex the ontology of games is when we consider contemporary video games, raising the question of whether digital games, particularly those played online, are something entirely different from their analogue predecessors.

Consider how a game like poker has developed over time. This family of card games has a contested history, with many potential predecessors being claimed. It also has many variants, with rules that are stabilised through a range of artefacts, from ‘how to’ guides through to cultural representations and tournament rule books. As much as these artefacts exercise a normative influence over how poker is played, its predominant mode of transmission is interpersonal, with changes in the game liable to be piecemeal and to take place over long periods of time. In contrast, the rules of online digital games can be changed at a moment’s notice, and these changes are an important vector through which the relationship between the developer and the users unfolds. Every game has an infrastructure that supports it, even if it is as minimal as conversations that have previously taken place between different groups that play the game. But the infrastructure of digital games played online allows for granular analysis of game events and immediate modification of the game rules. Such changes might impede the reproduction of the game, for instance if too many rule changes alienate players, but the capacity to make them is something new and interesting.

There are also differences at the level of the virtual structure of the game: the latent order through which events unfold, driven by the rules of the game, but producing patterns which inevitably exceed what could be predicted from those rules alone. The complexity of digital games vastly exceeds that of analogue games, perhaps to the point where it becomes impossible to represent them formalistically in terms of branching probabilities. This isn’t always the case, particularly with older games which aren’t multiplayer. For instance, I find it difficult to understand how a speed run of Super Mario 3 is possible unless there is, in principle, a ‘correct’ move to make at every point in the process, even if it doesn’t involve adherence to the formal rules of the game.

But more complex games, particularly those in which many players compete online, would seem to be a different phenomenon altogether. However, is the challenge this poses ontological or epistemological? Is there no underlying (virtual) structure, or is it simply too complex to be mapped? I find the former claim untenable because, in principle, it seems obvious to me that any particular instance of the game could be analysed, with sufficient data, in order to explain why it unfolded in the way it did. This presupposes a structure in relation to which those outcomes become explicable. In which case, the problem is epistemic, and it suggests that other methods, perhaps data scientific ones, might be necessary. With enough data, could the contours of such a virtual game structure be fallibly traced out, even if it resists analysis through other means?
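To make that suggestion slightly more concrete, here is a minimal sketch of what fallibly tracing such a structure from data might look like. Everything in it is invented for illustration: the event log, its fields and the crude discretisation are hypothetical, not drawn from any actual game.

```python
from collections import defaultdict

# Hypothetical event log: each record is a coarse description of a game
# state plus the eventual outcome of the match in which it occurred.
log = [
    {"gold_lead": 1200, "objectives": 2, "won": True},
    {"gold_lead": -300, "objectives": 1, "won": False},
    {"gold_lead": 800,  "objectives": 2, "won": True},
    {"gold_lead": -900, "objectives": 0, "won": False},
    {"gold_lead": 100,  "objectives": 1, "won": True},
]

def bucket(state):
    """Discretise a game state into a coarse 'region' of the state space."""
    lead = "ahead" if state["gold_lead"] > 0 else "behind"
    return (lead, state["objectives"])

# Empirical win frequency per region: a fallible, data-driven trace of
# the latent structure, refined as more matches are logged.
counts = defaultdict(lambda: [0, 0])  # region -> [wins, games]
for record in log:
    region = bucket(record)
    counts[region][1] += 1
    if record["won"]:
        counts[region][0] += 1

for region, (wins, games) in sorted(counts.items()):
    print(region, f"{wins}/{games} games won")
```

With millions of logged matches rather than five toy records, the same logic of estimating outcome frequencies across regions of the state space, and refining the partition as the data allows, is roughly what a data scientific mapping of a game’s virtual structure would amount to, however much more sophisticated the models involved.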

On pg 102 of Move Fast and Break Things, Jonathan Taplin highlights email exchanges between YouTube’s founders, released in a court case, which suggest the invocation of ‘user generated content’ might be a matter of branding rather than a meaningful growth strategy for social media platforms:

In another email exchange from 2005, when full-length movies were being posted on YouTube, Steve Chen, a cofounder of the company, wrote to his colleagues Hurley and Jawed Karim, “Steal it!,” and Chad Hurley responded: “Hmm, steal the movies?” Steve Chen replied: “We have to keep in mind that we need to attract traffic. How much traffic will we get from personal videos? Remember, the only reason why our traffic surged was due to a video of this type…. viral videos will tend to be THOSE type of videos.”

Much critical literature has focused on how social media platforms ossify existing hierarchies and establish new ones. It is too easy to see this as an unexpected consequence of a new social infrastructure, as opposed to an outcome which was knowingly designed in from the start.

An intriguing snippet on pg 164 of Jonathan Taplin’s Move Fast and Break Things suggests a metric of content density which could be extremely interesting to explore:

Digiday looked at the race for what some are calling peak content. What it found was that in 2010 the New York Times, with 1,100 people employed in the newsroom, created 350 pieces of original content per day and attracted 17.4 million page views per day. By contrast, the Huffington Post, with 532 people employed, posted 1,200 pieces of content per day (most of it created by third-party sites) and 400 blog entries (mostly unpaid), receiving 43.4 million page views per day. One can understand why the future of original journalism is threatened.

This quantitative metric raises questions which invite qualitative analysis, e.g. to what extent does an increase in content density (fewer staff producing more content) correlate with content being shorter, more derivative and shallower? Are there cultural producers for whom this isn’t the case? What are the conditions which counteract this seemingly inevitable consequence of asking people to produce more with less?
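To make the comparison concrete, here is a rough back-of-the-envelope calculation using the figures quoted above, treating ‘content density’ simply as pieces per newsroom staffer per day (my gloss on the term, not Digiday’s):

```python
# Figures as quoted from Digiday via Taplin (2010 snapshot).
nyt  = {"staff": 1100, "pieces_per_day": 350,        "daily_views": 17_400_000}
huff = {"staff": 532,  "pieces_per_day": 1200 + 400, "daily_views": 43_400_000}

for name, outlet in [("New York Times", nyt), ("Huffington Post", huff)]:
    density = outlet["pieces_per_day"] / outlet["staff"]       # pieces per staffer per day
    views_per_piece = outlet["daily_views"] / outlet["pieces_per_day"]
    print(f"{name}: {density:.2f} pieces/staffer/day, "
          f"{views_per_piece:,.0f} page views per piece")
```

On these numbers, the Huffington Post produced roughly ten times as many pieces per staffer as the Times, while each individual piece attracted only around half the page views of a Times piece.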

One of the most pressing issues we confront when analysing the digital economy is a pronounced tendency towards oligopoly, which gives the lie to an earlier generation’s utopian embrace of the Internet as a sphere of free competition and a driver of disintermediation. There are important lessons we can learn from platform studies about the reasons for this, concerning the architecture of platforms and the logic of their growth. But it’s important we don’t lose sight of how these dynamics are reliant upon existing legal and economic processes which predate the ‘digital revolution’. As Jonathan Taplin points out in Move Fast and Break Things, these firms’ competitive advantage was reliant upon a specific regulatory environment that was far from inevitable. From pg 79:

The economist Dean Baker has estimated that Amazon’s tax-free status amounted to a $20 billion tax savings to Bezos’s business. Baker notes, “In a state like New York, where combined state and local sales taxes average over 8.0 percent, Amazon could charge a price that was 1.0 percent below its brick and mortar competition, and still have an additional profit of 7 percent on everything it sold. That is a huge deal in an industry where profits are often just 2–3 percent of revenue.” Bezos, eager to preserve this subsidy, went to work in Washington, DC, and got Republican congressman Christopher Cox and Democratic senator Ron Wyden to author the Internet Tax Freedom Act. The bill passed and was signed by President Bill Clinton on October 21, 1998. Although not barring states from imposing sales taxes on ecommerce, it does prevent any government body from imposing Internet-specific taxes.

This is only one example. An adequate understanding of the digital economy requires that we identify the regulatory environments within which each category of tech firm operates and how these have contributed to their thriving or struggling. When we combine this institutional analysis with platform dynamics, we can begin to account for the level of market concentration which Taplin summarises on pg 119-120:

In antitrust law, an HHI score—according to the Herfindahl-Hirschman Index, a commonly accepted measure of market concentration—is calculated by squaring the market share of each firm competing in a given market and then adding the resulting numbers. The antitrust agencies generally consider markets in which the HHI is between 1,500 and 2,500 to be moderately concentrated; markets in which the HHI is in excess of 2,500 are highly concentrated. The HHI in the Internet search market is 7,402. Off the charts.
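For readers who want to see the arithmetic, here is a minimal sketch of the calculation Taplin describes, using entirely made-up market shares rather than real figures:

```python
def hhi(market_shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares (in %)."""
    return sum(share ** 2 for share in market_shares_percent)

# Hypothetical market shares, for illustration only.
shares = [60, 20, 10, 5, 5]
score = hhi(shares)

# Thresholds as cited by Taplin: 1,500-2,500 moderately concentrated,
# above 2,500 highly concentrated.
if score > 2500:
    verdict = "highly concentrated"
elif score >= 1500:
    verdict = "moderately concentrated"
else:
    verdict = "unconcentrated"

print(f"HHI = {score} ({verdict})")  # HHI = 4150 (highly concentrated)
```

To give a sense of scale: a single firm with a hypothetical 85 per cent share would on its own contribute an HHI of over 7,200, which is one way of grasping what a figure like 7,402 implies about the search market.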

He goes on to argue on pg 121-122 that this situation helps generate a cash glut with serious systemic consequences:

The problem is that the enormous productivity of these companies, coupled with their oligopolistic pricing, generates a huge and growing surplus of cash that goes beyond the capacity of the economy to absorb through the normal channels of consumption and investment. This is why Apple has $150 billion in cash on its balance sheet and Google has $75 billion. These enterprises cannot find sufficient opportunities to reinvest their cash because there is already overcapacity in many areas and because they are so productive that they are not creating new jobs and finding new consumers who might buy their products. As former treasury secretary Lawrence Summers has put it, “Lack of demand creates lack of supply.” Instead of making investments that could create new jobs, firms are now using their cash to buy back stock, which only increases economic inequality.

In other words: the inequality which digital capitalism generates is only contingently a function of technology.

Over the next few years, I’ll be working on a collaborative project on trans- and post-humanism, building on the Centre for Social Ontology’s previous Social Morphogenesis series. My main contribution to this will be co-editing a volume, Strangers in a Familiar Land, with Doug Porpora and Colin Wight, as well as exploring digital technology and what it means for human agency.

This project is giving me a reason to read more widely than I have in a while, with a particular focus likely to be Andy Clark’s work in the philosophy of mind, speculative realism and continental philosophy of technology. There’s a lot of value to be found in the latter, but one persistent point which frustrates me is what appears, to me at least, to be a fundamental confusion about the category of the human. This issue became clear to me when reading a thought-provoking blog post on Social Ecologies:

Why must everything revolve back to a human relation – for-us? This human exceptionalism resides throughout the gamut of philosophical reflection from Plato to Derrida. One will ask as Bradley does: Why, in other words, can something that believes itself to be a critique of anthropologism still be seen as essentially anthropocentric? Can we step outside this temple of man and create a non-anthropocentric discourse that doesn’t find itself reduced to this human relation by some backdoor slippage of conceptuality? Are we condemned to remain human? What or who is this creature that for so long has created a utopian world against its inhuman core? If we were to be released from this prison of the human who or what would emerge? How alien and alienated am I to what I am? How monstrous am I?

https://socialecologies.wordpress.com/2017/07/17/we-were-never-human/

Unless I’ve entirely misunderstood a literature I’m still relatively new to, ‘technicity’ is an abstraction from material culture. It’s an abstraction which serves a purpose, allowing us to isolate the technical so as to inquire into its character, but the empirical referents of the term are technological artefacts, i.e. a domain of material culture. In which case, it should not surprise us that the human constantly resurfaces, nor should we impute this tendency to a mysterious stickiness which ‘humanism’ as a doctrine possesses.

Material culture will always imply questions of the human because we are talking about artefacts built by, for, with and against human beings in social contexts which are similarly human-saturated. The value in considering ‘technicity’ lies in opening out a space in which we can inquire into the emergent characteristics of the technical as a domain of material culture, considering the logic that guides it and how it can act back upon its creators and the social contexts in which they create. But explaining material culture necessarily entails human-centred accounts, even if these have tended to problematically exclude or marginalise non-human elements.

To suggest otherwise strikes me as straightforward mystification, circumscribing large domains of social life as beyond analysis, rather than offering a meaningful competing ‘inhuman’ explanation. It seems like a clear example of what Andrew Sayer calls a ‘PoMo flip’: responding to a problematic dichotomy by inverting it, rather than seeking to transcend the conceptual structure that creates the problem. In this case, it responds to an exclusion of non-human elements by seeking to exclude the human elements instead.

In the last few weeks, I’ve found myself using the term ‘playbook’ in a number of contexts. It’s typically defined as “a book containing a sports team’s strategies and plays, especially in American football” but I’m not quite sure where I picked up the phrase from as someone who hasn’t had much interest in sport for a long time. 

It’s been on my mind since reading Merchants of Doubt, an incisive historical exploration of a dangerous corporate tendency towards the deliberate cultivation of doubt in relation to pressing issues such as nuclear winter, acid rain, DDT and climate change. As I suggested in a post a couple of weeks ago, we can talk meaningfully of a ‘playbook for merchandising doubt’. In fact something akin to this was once explicitly published, as the authors of Merchants of Doubt summarise on pg 144-145:

Bad Science: A Resource Book was a how-to handbook for fact fighters. It contained over two hundred pages of snappy quotes and reprinted editorials, articles, and op-ed pieces that challenged the authority and integrity of science, building to a crescendo in the attack on the EPA’s work on secondhand smoke. It also included a list of experts with scientific credentials available to comment on any issue about which a think tank or corporation needed a negative sound bite. Bad Science was a virtual self-help book for regulated industries, and it began with a set of emphatic sound-bite-sized “MESSAGES”:

1. Too often science is manipulated to fulfill a political agenda.

 2. Government agencies … betray the public trust by violating principles of good science in a desire to achieve a political goal. 

3. No agency is more guilty of adjusting science to support preconceived public policy prescriptions than the Environmental Protection Agency. 

4. Public policy decisions that are based on bad science impose enormous economic costs on all aspects of society. 

5. Like many studies before it, EPA’s recent report concerning environmental tobacco smoke allows political objectives to guide scientific research. 

6. Proposals that seek to improve indoor air quality by singling out tobacco smoke only enable bad science to become a poor excuse for enacting new laws and jeopardizing individual liberties.

Has anyone encountered documents comparable to this? The scale and organisation of doubt merchandising surely means they have been produced. But perhaps there’s a broader category to be explored here: the explicit articulation of surreptitious tactics.

It highlights how coordination presupposes communication, suggesting that even the most duplicitous strategies of the powerful will tend to leave a paper trail. Where we see what appears to be organisation, even if the actors involved deny it, do we have reason to believe there may somewhere exist a ‘playbook’ or something akin to it? I would tentatively define this as the formal articulation of a tactical repertoire that can be drawn upon in informal contests, even if the definition of these elements may be obscured behind a thick veneer of technocratic distance. By ‘informal contests’ I mean those where rules are not defined nor a contest actually declared. The existence of a playbook reveals how advantages in organisational capacity might translate into a practical advantage in competition.

I’d be intrigued to know if these ruminations resonate with anyone, particularly those who might be able to furnish further examples.

The self as painting: we become who we are through repetition and representation. Encumbered only by our imagination and the culture in which we find ourselves, we craft ourselves through iterated projects of self-representation. We might find the materials available to us limiting, in which case we might seek out a more diverse palette of cultural ideas through which to express that which we are and wish to be. We might also seek to refine our technique, extending the range of our potential selves by expanding our capacities to represent them. But the process is fundamentally repetitive. We begin within constraints but once we start painting, it’s up to us what we do. The freedom exercised through this is one of redescription, in Richard Rorty’s sense, something which Roy Bhaskar once critiqued as relying on a ‘free-wheeling’ conception of freedom: it doesn’t hook on to the world, to the definitive ways in which things are at any given point in time, with all the constraints and limitations which this entails. 

Its appeal rests on the prospect of everlasting freedom. We can dispense with any one painting once we grow dissatisfied, throwing it away to restart in pursuit of ever richer and more vivid representations of our self. But there is an element of fantasy in this, refining our representation of self, potentially at the cost of losing touch with the reality of who we are and where we are at any given moment. To craft the self as painting represents a private project of self-creation. It approaches the challenges of existence in an aesthetic register, one which cuts us off from our selves and from others in an ever-so-subtle way, while holding out the (always retreating) promise of endless freedom in inner life, whatever the world out there holds for us and what we care about.

The self as sculpting: through a sustained engagement with the material we find in our selves and our lives, we gradually produce the person we aim to be through our crafting of self. The process is subtractive, rather than additive. We select, refine and remove in a way that is path-dependent, often finding unexpected limitations which follow from the whole sequence of past choices we have made. The further we go in this process, the less room for manoeuvre we have because our form becomes progressively more concrete with time. To become who we are depends on what was latent within us, but how this comes to take the form it does depends on the world we have found ourselves in and how we have chosen to make our way through it.

We shape the clay but we do not choose it and our understanding of the range of possibilities latent within it will always be constrained by circumstance and experience. When the promise of the protean self is ubiquitous, tempting us with the idea that the only limit on who we can be is our imagination, the limitations of the clay can seem suffocating. But there is a freedom within these constraints. A profound, challenging and subtle freedom which refuses the reduction of existence to aesthetics. 

One of the most interesting issues raised by the rise of data science in party politics is how to untangle corporate rhetoric from social reality. I have much time for the argument that we risk taking the claims of a company like Cambridge Analytica too seriously, accepting at face value what are simply marketing exercises. But the parallel risk is that we fail to take them seriously enough, dismissing important changes in how elections are fought as marketing hype propounded by digital charlatans.

Perhaps we need to focus more on the data scientists themselves. As much as there is something of the Bond villain about Alexander Nix, CEO of Cambridge Analytica, it’s important that we don’t become preoccupied with corporate leaders. Who are the rank-and-file data scientists working on campaigns? What motivates them? How do they conceive of the work they do? There were interesting hints about this in the recent book Shattered, looking at Hillary Clinton’s failed election campaign. Much as was the case with Jeb Bush’s almost entirely stalled campaign, there had been much investment in data analytics, with buy-in right from the top of the campaign. From pg 228-229:

These young data warriors, most of whom had grown up in politics during the Obama era, behaved as though the Democratic Party had come up with an inviolable formula for winning presidential elections. It started with the “blue wall”—eighteen states, plus the District of Columbia, that had voted for the Democratic presidential nominee in every election since 1992. They accounted for 242 of the 270 electoral votes needed to win the presidency. From there, you expanded the playing field of battleground states to provide as many “paths” as possible to get the remaining 28 electoral votes. Adding to their perceived advantage, Democrats believed they’d demonstrated in Obama’s two elections that they were much more sophisticated in bringing data to bear to get their voters to the polls. For all the talk of models and algorithms, the basic thrust of campaign analytics was pretty straightforward when it came to figuring out how to move voters to the polls. The data team would collect as much information as possible about potential voters, including age, race, ethnicity, voting history, and magazine subscriptions, among other things. Each person was given a score, ranging from zero to one hundred, in each of three categories: probability of voting, probability of voting for Hillary, and probability, if they were undecided, that they could be persuaded to vote for her. These scores determined which voters got contacted by the campaign and in which manner—a television spot, an ad on their favorite website, a knock on their door, or a piece of direct mail. “It’s a grayscale,” said a campaign aide familiar with the operation. “You start with the people who are the best targets and go down until you run out of resources.”
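As a rough sketch of the prioritisation logic this passage describes (the voters are invented, and the rule for combining the three scores is my assumption, since the excerpt doesn’t say how they were weighted), the ‘grayscale’ might look something like this:

```python
# Hypothetical voters, each scored 0-100 on the three dimensions the
# excerpt describes: turnout likelihood, support, and persuadability.
voters = [
    {"id": "A", "turnout": 90, "support": 85, "persuasion": 10},
    {"id": "B", "turnout": 40, "support": 50, "persuasion": 70},
    {"id": "C", "turnout": 75, "support": 20, "persuasion": 15},
    {"id": "D", "turnout": 60, "support": 55, "persuasion": 60},
]

def priority(voter):
    """Assumed combination rule: likely voters who either already support
    the candidate or look persuadable rank highest."""
    return voter["turnout"] * max(voter["support"], voter["persuasion"]) / 100

# 'Start with the people who are the best targets and go down until you
# run out of resources.'
budget = 2  # number of contacts the campaign can afford, purely illustrative
targets = sorted(voters, key=priority, reverse=True)[:budget]
for voter in targets:
    print(voter["id"], round(priority(voter), 1))
```

The sociologically interesting questions are precisely about what goes into a scoring rule like this: who writes it, how it is validated, and what it means for a voter to be reduced to three numbers.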

Understanding these ‘data warriors’ and the data practices they engage in is crucial to understanding how data science is changing party politics. Perhaps it’s even more important than understanding high-profile consultancies and the presentations of their corporate leaders.

The notion of relational authoriality, which consistency demands I acknowledge emerged in conversations with Jana Bacevic, conveys a relational realist perspective on the question of authorship. It rejects the notion of the liberal individual as the origin of a text while continuing to insist that there is a definite causal story to be told about the emergence of any text, encompassing individuals and the relations between them. Relational authoriality stresses how creative production happens through interaction, direct or mediated, between individuals who care about what they discuss. People debate, discuss and digress about things that matter to them. It’s this concern to enter into dialogue, sometimes with the parties involved changing as a result of the process, which provides the relational underpinning to creative production. It might be that a particular individual takes forward this raw material, running with it and placing their mark on it in a way which leads to it being recognised as theirs. But this simply wouldn’t be possible without these prior networks, acting as the creative ecology within which individual authorship becomes feasible. Every completed act of authorship has its own history of emergence and accurate accounts of it will lead back to individuals, interactions and relations.

I was led to think back to this line of thought when reading Shattered: Inside Hillary Clinton’s Doomed Campaign. As is often the case, speeches and speech writing figure prominently in the book. I’ve read a lot of campaign books over the years and I’ve always been gripped by these details. In part this is because political speeches are such a crucial part of the politician’s craft, with their (perceived) success or failure being integral to the fluctuating fortunes of political careers. This isn’t simply an American phenomenon. Consider the acclaim which greeted David Cameron’s 2005 conference speech, delivered without a lectern or notes, widely seen to have tipped the leadership contest in his favour. We can see a parallel in Ed Miliband’s first conference speech as Labour leader. Much of the increasing ‘plausibility’ of Corbyn as a political leader, at least amongst the commentariat, rests on the increasingly polished way in which he delivers speeches.

Why does this matter so much? There are many reasons why accomplished delivery is valued in an age of media-saturated politics. But I wonder if a fetish of delivery reflects a denial of relational authoriality. In reality, all who have considered it must surely recognise that politicians do not straightforwardly write their own speeches in a way that would allow them to meaningfully claim ownership of them in an individualistic sense. These are team efforts, at best produced through careful collaboration between committed partners and at worst produced mechanically through committees. We can see the character of politicians, as well as the nature of the organisations they inhabit, reflected in how they approach these challenges. Contrast the dialogical collaboration between Obama and trusted aides with the byzantine, sometimes conflicting, structures which Clinton often established for speech writing. But these are subtle judgements, pointing to relational authoriality rather than individual authorship, which sit uneasily within the individualistic frame of ‘political leadership’. We fetishise the delivery of speeches, as well as the perceived strength of the individuals who deliver them, even as the spiralling complexity of governance ever more outstrips the capacities of the ‘strong leaders’ we praise.

While many see the term ‘curation’ as modish and vague, I see it as an important concept for making sense of how we can orientate ourselves within a changing cultural landscape. However, I can sympathise with the thrust of these objections, in so far as they take issue with a sense of curation tied up with the worship of the new. Such a use of the term is possibly dominant, framing the curatorial imperative (selecting from available variety through filtering, commentary and evaluation) as a specialisation which emerges to cope with the late modern world. If we frame curation in this way, we miss the opportunity to explore how it has changed over time. See for example Nick Couldry’s Media, Society, World, loc 1732:

Some literary cultures have been distinguished by the richness of their practices of commentary: the Jewish tradition of cabbala is frequently cited, but the ancient world’s general scarcity of textual objects meant that written manuscripts often reached people with the commentary of previous readers’ (so-called ‘scholiasts’) embedded within them, a tradition which reaches us now via the comments written in medieval versions of Greek texts.
Now we are entering an age of commentary for the opposite reason: because of the almost infinite proliferation of things to read and look at, we need to send signals to help each other select from the flux. At the same time, and for related reasons, our ability to send comments and signals has been massively extended by digital media: we take it for granted that by emailing or uploading a link we can point at something interesting we have just read and so alert someone on the other side of the world. The scope of commentary as a practice has been massively enlarged.

It is important that we can address problems and opportunities created by specific technologies without circumscribing our accounts in a way that limits them to these technologies. If we do so, we fail to recognise the continuities and we are inevitably left with anaemic conceptions of the human and the social which tend to be exhausted by the social-technical. From loc 1534 of Couldry’s book:

From searching, other practices quickly develop: practices of exchanging information by forwarding weblinks to family, friends or work colleagues, warehousing sites that collect recommendations from users so other users can narrow down their search practice (Digg, etc.), and tools for pre-ordered searches (RSS feeds and other alerts). These various search-enabling practices are increasingly prominent in everyday life as people seek to optimize their access to the vastly expanded flow of potentially relevant information. Their dispersed agency (anyone can forward a link or signal that they ‘like’ a post) contrasts with earlier centuries’ ways of disseminating interesting material: for example, the ancient and medieval world’s florilegia produced by groups of scholars, often in monasteries, who collected interesting quotes from otherwise obscure books into new volumes. Now not only do individuals (from their computers or phones, wherever they are) make the recommendations, but system interfaces, such as Digg and reddit, enable them to recommend cumulatively. Some commentators hope that ‘collaborative filtering’ and other collective forms of information sorting can challenge the dominance of Google and even create new forms of social bond.

How do we ensure we recognise these contrasts? How can we explore them in a way which allows us to productively theorise continuities and differences? There’s a fascinating meta-theoretical challenge here which I’d like to engage with seriously in future.

There’s an intriguing argument in The Mediated Construction of Social Reality, by Nick Couldry and Andreas Hepp, concerning our dependence upon digital media and how we respond to its failure. From loc 5527:

We feel the costs viscerally: when ‘our’ media break down – we lose internet connection, our password stops working, we are unable to download the latest version of software required by the device or function we want to use – it is as if the social infrastructure were itself, in some respect, breaking down: recursivity has been interrupted, ontological security becomes threatened.

I take their point to be that our reliance upon digital media isn’t simply about specific purposes. The failure of digital media does not frustrate us merely because it impedes a particular purpose; in an important sense, our purposiveness as such has come to rely upon digital media. For this reason, there is a latent trauma inherent in its breakdown. We experience its failure in terms of an impeded capacity to act within the world, as opposed to simply a frustration of specific actions.

The argument is underdeveloped, as can be seen by the “in some respect” clause within it. It’s nonetheless an important and provocative one. It left me wondering whether anyone has done qualitative research about experiences of wifi breaking down, in terms of the affective fallout from such a failure. My experience of this has tended to be one of whole categories of action being foreclosed when it happens, as in a real sense I lose the ability to proceed with my work, rather than it simply being a contingent impediment to particular tasks. I imagine there’s a great deal of variability in how people respond to such a situation but I nonetheless think Couldry and Hepp are pointing towards something very interesting.

An exercise in free-writing, undertaken at a writing workshop at the Becoming Academic conference at the University of Sussex.

I write to eliminate the clutter in my head, the accumulated debris which emerges within me as I make my way through the world, trying to understand my experiences as I go. If I am free to write, I am free to be within the world, and my experience feels most full and most thick when I am externalising my internal reactions to the world. What C. Wright Mills called ‘the feel of an idea’ preoccupies me, and my orientation to the world feels changed in those times when I seize upon that feeling, run with it and make something new ‘out there’ from a reaction I had ‘in here’ to the world. But what can be difficult is when I can’t run with that feeling, when nascent ideas bubble up inside of me but circumstances preclude my running with them. Contingencies intervene and prevent my exploration of these things I feel moved to explore. If I don’t write, I feel in partial motion, stuck in the early stages of a range I cannot complete. If I can’t write, I feel somehow incomplete, as if my capacity to react to the world is subtly mutilated. I write to eliminate the clutter in my head and without writing I am inundated by mess.

I wonder if there is something performative about my writing, as if I bring myself into being through the process of doing it. I wonder why I feel so compelled to share my writing, as if it somehow isn’t real or can’t become real unless it is out there in the world. It’s a repeated exercise, conducted thousands of times, which has left me feeling extremely comfortable with the prospect of sharing my writing. But I’m still not entirely sure why I do it and at times it feels like a compulsion.

That’s the question I’ve been asking myself when reading through two books by Nick Couldry in which he develops a materialist phenomenological approach to understanding social reality. The first is The Mediated Construction of Social Reality (with Andreas Hepp) and the second is Media, Society, World. It’s in the latter book that he considers the representational power of media. From loc 683:

Media institutions, indeed all media producers, make representations: they re-present worlds (possible, imaginary, desirable, actual). Media make truth claims, explicit or implicit: the gaps and repetitions in media representations, if systematic enough, can distort people’s sense of what there is to see in the social and political domains.

There is a political economy underpinning this, in terms of the capacity to make such representations and the gains accruing from this capacity. The common reference points which accumulate as a consequence serve a broader economic purpose. From loc 701:

However, if basic consumer demand – for fashion, music, sport – is to be sustained at all, it requires ‘the media’ to provide common reference points towards which we turn to see what’s going on, what’s cool.

The interests and influence in play here have been crucial to the unfolding of late modernity. Media has been a site through which power has consolidated. What we are seeing with ‘post-truth’ is a deconsolidation of this apparatus, taking place at a number of different levels. From loc 886:

Representations matter. Representations are a material site for the exercise of, and struggle over, power. Put most simply, our sense of ‘what there is’ is always the result of social and political struggle, always a site where power has been at work. But fully grasping this in relation to media is difficult: because the role of media institutions is to tell us ‘what there is’ – or at least what there is that is ‘new’ – media’s work involves covering over its daily entanglement in that site of power. Media aim to focus populations’ attention in a particular direction, on common sites of social and political knowledge. Media institutions’ embedding as the central focus of modern societies is the result of a history of institutional struggle that is becoming more, not less, intense in the digital media era. It is essential to deconstruct the apparently natural media ‘order’ of contemporary societies.

In the 30+ talks I have done about social media in the last year, I have discussed many things. But the one theme that has been most prominent is the extrinsic, rather than intrinsic, complexity of the subject matter. There is nothing inherently challenging about how to use social media. Any practical or technical difficulties are well within the realm of what has become habitual for most within late modernity. What creates the challenge is negotiating the novelty of its enablements and constraints within a particular context.

However, it is this novelty which also makes it difficult to exercise our reflexivity in the way we would about any comparable matter. It gives rise to a species of what Jacob Silverman describes as ‘internet exceptionalism’:

What we call the Internet—and what web writers so lazily draw on for their work—is less a hive mind or a throng or a gathering place and more a personalized set of online maneuvers guided by algorithmic recommendations. When we look at our browser windows, we see our own particular interests, social networks, and purchasing histories scrambled up to stare back at us. But because we haven’t found a shared discourse to talk about this complex arrangement of competing influences and relationships, we reach for a term to contain it all. Enter “the Internet.”

The Internet is a linguistic trope but also an ideology and even a business plan. If your job is to create content out of (mostly) nothing, then you can always turn to something/someone that “the Internet” is mad or excited about. And you don’t have to worry about alienating readers because “the Internet” is so general, so vast and all-encompassing, that it always has room. This form of writing is widely adaptable. Now it’s common to see stories where “Facebook” or “Twitter” stands in for the Internet, offering approval or judgment on the latest viral schlock. Choose your (anec)data carefully, and Twitter can tell any story you want.

Much as “the Internet” gives us “a rhetorical life raft to hang onto” when discussing a subject that is vastly overhyped and invested with all manner of hopes and fears, so too does “social media” become a semantic crutch when making sense of the complex changes being brought about by digital communications within a particular institutional sphere. It’s similarly “easy, a convenient reference point” through which we gloss a complex set of changes in which technological possibilities are only one causal factor. By exceptionalising social media in this way, we “fail to relate this communication system, and everything that happens through it, to the society around us”.

This tendency seems even more pronounced when we talk about something as specific as the academy. The more we talk about “social media” as something which all academics should (or shouldn’t) do, the more we obscure the changes it entails for academic labour and the organisations within which academics work. My ambition as someone who has written a book called Social Media for Academics? To get academics to stop talking about social media.

A few weeks ago, I found myself on a late night train to Manchester from London. After a long day, I was longing to arrive home, a prospect that seemed imminent as the train approached Stockport. Then it stopped. Eventually, we were told that there was someone on the tracks ahead and that the police were on the scene. We waited. After another ten minutes, we were told that the police were still trying to apprehend the person on the tracks. I checked Twitter and saw this incident had been unfolding for a while, seemingly disrupting all the trains going into and through Stockport train station. We waited some more. The train manager announced that the police had told trains they could proceed… a few minutes later, the finally moving train came to an abrupt halt, apparently because the person, who it turned out was still on the tracks, had almost been hit. The train staff seemed surprised and mildly shaken up, unable to explain why the police had given the order to move.

I eventually made it to Manchester, albeit after the last tram to the north had departed. As a naturally curious person, I wanted to find out more about what had happened, not least of all to clarify the slightly weird Benny Hill-esque images I was left with following these repeated invocations of police “in pursuit of” this mysterious “woman on the tracks” over half an hour. Plus what the hell were the police doing telling the train to proceed when she was still on the tracks? If it was a mistake, I was curious about why exactly they thought their pursuit had ended when they hadn’t arrested her. If it wasn’t a mistake, it seemed an inexcusable and possibly illegal action, both in terms of harm to the woman and the psychological violence potentially inflicted on a train driver.

But I couldn’t find anything. I searched local newspapers and found nothing. I searched social media but could only find my own tweet and the blandly descriptive disruption update on National Rail Enquiries. My point in recounting this story is not to stress the intrinsic interest of the situation itself. It’s not particularly interesting, and you likely had to be there to have any concern with it. Rather, I’m interested in understanding the character of my frustration at being unable to find what I was looking for through digital means. It’s something I thought back to yesterday, when I was looking for a particular clip from The Simpsons to make a point in a conversation I was having with someone, but could not find it no matter how hard I looked.

In both cases, my behaviour revealed an implicit expectation concerning the extent of digitalisation. In the first case, that an incident which presumably delayed hundreds of people under (vaguely) mysterious circumstances would inevitably generate some digital record. In the second case, that a memorable incident from a popular TV show would surely have been uploaded to a video sharing site. My frustration, though mild, stems from an encounter with the incompleteness of digitalisation.

These thoughts are extremely provisional but I’d really welcome feedback. 

As the workings of civil society are being disrupted by the challenges of ‘alternative facts’, ‘fake news’ and notions of post-truth, Cosmopolitan Civil Societies Journal has decided to devote a special issue to this topic. Our approach is broad; the flow of information is fundamental to civil society, and that flow and its interactions with the structures of society and the individuals in society take many forms. The following list is by no means exhaustive: Journalism (and fact checking); Cultural Studies and the World of Make-Believe; the scientific record and predatory publishing; climate change and climate deniers; Civic literacy and democracy; Public Relations and Spin; social media, experience and opinion; state strategy and astro-turfing; the new right and post-facts; dramaturgy of post-factoids …
We are calling for papers between 4,000 and 8,000 words which reflect in some way on the concepts of alternative facts/fake news/post truth either on our understandings of civil society or on professional practices within civil society.

Our deadline for submission is Friday 31 March. Decisions on acceptance will be communicated by 28 April. The issue will be published in July 2017.

See the journal at:

http://epress.lib.uts.edu.au/journals/index.php/mcs

For more information please contact Hilary Yerby at: Hilary.yerby@uts.edu.au

From The Revenge of the Monsters of Educational Technology, by Audrey Watters, loc 1187:

Many of us in education technology talk about this being a moment of great abundance—information abundance—thanks to digital technologies. But I think we are actually/also at a moment of great austerity. And when we talk about the future of education, we should question if we are serving a world of abundance or if we are serving a world of austerity. I believe that automation and algorithms, these utterly fundamental features of much of ed-tech, do serve austerity. And it isn’t simply that “robot tutors” (or robot keynote speakers) are coming to take our jobs; it’s that they could limit the possibilities for, the necessities of care and curiosity.

Understanding this relationship between austerity and abundance strikes me as a crucial question of political theory. It is one we evade if we reduce the former to the latter or vice versa, seeing abundance as negating austerity (as Tyler Cowen does, for instance) or austerity as negating abundance (by robbing it of its social significance as a cultural change).

This great post by Martin Weller takes issue with the recent clickbait published in the Guardian Higher Education’s anonymous academics series. He argues that such pieces perpetuate an outdated stereotype of academic labour which has no relationship to the reality:

There are undoubtedly more, but when you piece these three together, what you get is a picture of an academic in the 1970s (Michael Caine in Educating Rita maybe) – shambolic, aloof, and unfettered by the concerns of normal working life. It’s a romantic image in a way, but also one that lends itself to the ‘ivory tower’ accusation. It is also about as representative now as the fearful matron in charge of a typing pool is to office life.

These might be the myths non-academics affirm about academics. But what are the myths academics propound about themselves and their labour? To what extent are these myths entrenched by an unwillingness to come to terms with the managerial denigration of academic labour and the curtailment of professional autonomy?