In the last year, Facebook Live has been plagued by occasional headlines reporting on shocking instances of violence being streamed through the platform. The sporadic quality of these reports easily creates an impression that this is exceptional. There have always been violent crimes, right? Therefore it stands to reason that the spread of the platform would inevitably create occasional incidents in which it featured in such crimes. However, as this BuzzFeed analysis makes clear, such incidents have been a regular occurrence on the platform since its inception:

Facebook Live has a violence problem, one far more troubling than national headlines make clear. At least 45 instances of violence — shootings, rapes, murders, child abuse, torture, suicides, and attempted suicides — have been broadcast via Live since its debut in December 2015, a new BuzzFeed News analysis found. That’s an average rate of about two instances per month.

When it launched, Facebook CEO Mark Zuckerberg touted Live as “a great medium for sharing raw and visceral content.” But from its inception, and over the many months that followed, that became darkly true — to terrible effect. Videos of shootings, murders, suicides, and rapes began to show up on Facebook with alarming regularity.

What should we make of this? There are important issues raised about the accountability of platforms, as Facebook have refused to comment on this trend, instead simply pointing to past statements by Mark Zuckerberg and their commitment to hiring new moderators. But there is enough evidence of a relationship between Facebook Live and violence that we should take seriously the possibility that in some cases the platform might be contributing to the generation of crime rather than merely reflecting it.

The disturbing possibility invoked in the article is that there is a mimetic dynamic at work, as the possibility for immediate notoriety and a growing list of exemplars incline people towards horrific acts which might have remained embryonic without these two conditions:

Some criminologists worry that broadcasts of violent crimes to Facebook Live might lead perpetrators of violent crime to view the platform as a means of gaining infamy, bypassing the traditional filter of the media. “The most likely impact is that it’s going to be a model of how to distribute and immortalize your act,” Ray Surette, a criminal justice professor at the University of Central Florida, told BuzzFeed News.

Jacqueline Helfgott, chair of the Criminal Justice Department at Seattle University, agreed. “It’s making it easier for people to gain notoriety instantly without gatekeepers,” she told BuzzFeed News. “I definitely think there’s a mimetic effect.”

The mainstream media have previously been gatekeepers to such notoriety. But now it’s possible to achieve it through virality, assuming moderators prove unable to remove such videos almost immediately. There’s an incredibly bleak book by Franco Berardi, Heroes: Mass Murder and Suicide, which offers useful resources for making sense of this possibility. He argues that mass murderers are “the extreme manifestation of one of the main trends of our age” involving “people who are suffering themselves, and who become criminals because this is their way both to express their psychopathic need for publicity and also to find a suicidal exit from their present hell” (pg 3).

In such crimes we see a “violent acting out, as disconnected from a conscious elaboration: just do it” (pg 56), but one licensed by a desire for infamy. It is this fame which motivates the act, offering the possibility of transcending one’s own subordination by living on forever, ‘showing them’ forever:

Like the large majority of the generation that has grown up in the Neoliberal decades, the young Eric Harris is totally persuaded that the strong have the right to win and predate. It is the natural philosophy that he has absorbed in the social environment in which he was educated, and it is also the underlying rationale of the video games that he loved to play. But the young man knew very well that he was not going to be a winner in the social game. Instead, he decides that he will be a winner for a moment: I’ll kill and I’ll win; then I’ll die. The murderous action is conceived as revenge for the humiliation that he has suffered in the daily game of competition. (pg 50)

The infamy is what ensures that victory will live on. It cannot be reversed. Through their actions they achieve the status they were constantly seeking yet could never receive within life. As with much work of this type, it’s speculative social science of a sort that can be critiqued on empirical grounds. But the underlying thesis is one we should take seriously: the promise of infamy coupled with the release of violently acting out is a socially produced temptation in a profoundly unequal society which valorises ‘winners’ while attacking ‘losers’. These exceptional acts need to be understood as extreme responses to social conditions which are pervasive.

If there is any accuracy to these claims, we ought to be extremely concerned about Facebook Live. The barriers to entry for Berardi’s ‘heroes’ are lowering radically: the pathway to infamy can be found in the everyday object of the smartphone, rather than being reliant on recognition from the mass media. What might seem like exceptional cases, inexplicable in terms of wider social forces, could in fact herald the dark future of mediatization.

While many see the term ‘curation’ as modish and vague, I see it as an important concept for making sense of how we can orientate ourselves within a changing cultural landscape. However, I can sympathise with the thrust of these objections, in so far as they take issue with a sense of curation tied in with the worship of the new. Such a use of the term is possibly dominant, framing the curatorial imperative (selecting from available variety through filtering, commentary and evaluation) as a specialisation which emerges to cope with the late modern world. If we frame curation in this way, we miss out on the opportunity to explore how it has changed over time. See for example Nick Couldry’s Media, Society, World, loc 1732:

Some literary cultures have been distinguished by the richness of their practices of commentary: the Jewish tradition of cabbala is frequently cited, but the ancient world’s general scarcity of textual objects meant that written manuscripts often reached people with the commentary of previous readers (so-called ‘scholiasts’) embedded within them, a tradition which reaches us now via the comments written in medieval versions of Greek texts.
Now we are entering an age of commentary for the opposite reason: because of the almost infinite proliferation of things to read and look at, we need to send signals to help each other select from the flux. At the same time, and for related reasons, our ability to send comments and signals has been massively extended by digital media: we take it for granted that by emailing or uploading a link we can point at something interesting we have just read and so alert someone on the other side of the world. The scope of commentary as a practice has been massively enlarged.

It is important that we can address problems and opportunities created by specific technologies without circumscribing our accounts in a way that limits them to these technologies. If we do so, we fail to recognise the continuities and we are inevitably left with anaemic conceptions of the human and the social which tend to be exhausted by the socio-technical. From loc 1534 of Couldry’s book:

From searching, other practices quickly develop: practices of exchanging information by forwarding weblinks to family, friends or work colleagues, warehousing sites that collect recommendations from users so other users can narrow down their search practice (Digg, etc.), and tools for pre-ordered searches (RSS feeds and other alerts). These various search-enabling practices are increasingly prominent in everyday life as people seek to optimize their access to the vastly expanded flow of potentially relevant information. Their dispersed agency (anyone can forward a link or signal that they ‘like’ a post) contrasts with earlier centuries’ ways of disseminating interesting material: for example, the ancient and medieval world’s florilegia produced by groups of scholars, often in monasteries, who collected interesting quotes from otherwise obscure books into new volumes. Now not only do individuals (from their computers or phones, wherever they are) make the recommendations, but system interfaces, such as Digg and reddit, enable them to recommend cumulatively. Some commentators hope that ‘collaborative filtering’ and other collective forms of information sorting can challenge the dominance of Google and even create new forms of social bond.

How do we ensure we recognise these contrasts? How can we explore them in a way which allows us to productively theorise continuities and differences? There’s a fascinating meta-theoretical challenge here which I’d like to engage with seriously in future.

That’s the question I’ve been asking myself when reading through two books by Nick Couldry in which he develops a materialist phenomenological approach to understanding social reality. The first is The Mediated Construction of Social Reality (with Andreas Hepp) and the second is Media, Society, World. It’s in the latter book that he considers the representational power of media. From loc 683:

Media institutions, indeed all media producers, make representations: they re-present worlds (possible, imaginary, desirable, actual). Media make truth claims, explicit or implicit: the gaps and repetitions in media representations, if systematic enough, can distort people’s sense of what there is to see in the social and political domains.

There is a political economy underpinning this, in terms of the capacity to make such representations and the gains accruing from this capacity. The common reference points which accumulate as a consequence serve a broader economic purpose. From loc 701:

However, if basic consumer demand – for fashion, music, sport – is to be sustained at all, it requires ‘the media’ to provide common reference points towards which we turn to see what’s going on, what’s cool.

The interests and influence in play here have been crucial to the unfolding of late modernity. Media has been a site through which power has consolidated. What we are seeing with ‘post-truth’ is a deconsolidation of this apparatus, taking place at a number of different levels. From loc 886:

Representations matter. Representations are a material site for the exercise of, and struggle over, power. Put most simply, our sense of ‘what there is’ is always the result of social and political struggle, always a site where power has been at work. But fully grasping this in relation to media is difficult: because the role of media institutions is to tell us ‘what there is’ – or at least what there is that is ‘new’ – media’s work involves covering over its daily entanglement in that site of power. Media aim to focus populations’ attention in a particular direction, on common sites of social and political knowledge. Media institutions’ embedding as the central focus of modern societies is the result of a history of institutional struggle that is becoming more, not less, intense in the digital media era. It is essential to deconstruct the apparently natural media ‘order’ of contemporary societies.

A really fascinating discussion between Kristi Winters and The Wooly Bumblebee (HT Philip Moriarty). The latter’s experience could be seen as a model for de-radicalisation in the more toxic spaces within social media. An important reminder that platform incentives might encourage this behaviour but they don’t necessitate it. Furthermore, just because someone has come to act a given way doesn’t mean they will always act that way.

The term ‘curation’ has got a bad press in recent years. Or rather the use of the term beyond the art world has. To a certain extent I understand this but I nonetheless always feel the need to defend the term. There are a few reasons for this:

  • In a context of cultural abundance, selection from variety becomes important within a whole range of contexts. Inevitably, it is something most people within these contexts will do most of the time. But ‘curation’ is becoming a specialised activity, even if detached from a specific social role.
  • I’m prone to thinking of what I do, at least some of the time, as curation. I spend quite a lot of time each week sorting through mailing lists, newsletters, websites, blogs and social media to identify relevant content for The Sociological Review’s Twitter and Facebook feeds. This is 46 social media posts per day. I’ve also shared something on Sociological Imagination daily for almost seven years. I don’t particularly care what anyone else calls it but, as far as I’m concerned, doing it effectively is a skilled activity and ‘curation’ is the term I’ve taken to using.
  • The modern sense of the word ‘curation’ rests on a specific set of institutional arrangements which are themselves relatively recent. The word has a longer history, emerging from the Latin curator (“overseer, manager, guardian”) and what many construe as a misapplication could just as easily be taken as a further shift in its use. Language is dynamic and the anti-‘curation’ rhetoric is an attempt to police its change, albeit not a particularly significant or pernicious one.

Ultimately, I don’t care if people reject this use of the term ‘curation’. I do care if people reject what the term ‘curation’ comes to designate. I don’t dispute it is often used in a vacuous way, but it is not always used this way. It is nebulous and modish but the terms which emerge in relation to socio-cultural transformations often are.

It’s the socio-cultural changes which interest me, the abundance digitalisation is giving rise to and the epistemic fog which emerges as a result. To talk of ‘curation’ is a facet of that conversation and if people want to reject its use, I hope they’ll offer an alternative language for talking about selection from abundance as an institutionalised function within digital capitalism.

This essay on ‘the cult of cruelty’ has some interesting points to make about the role of what danah boyd calls persistence and searchability in facilitating incivility online. It makes it possible to trawl through someone’s activity, enabling a degree of engagement with choices and representations that would not otherwise be possible:

I’ve been thinking about this a lot lately — the ways in which people exact their hurt. It’s common for people to subtweet about their hate-follows and hate-reads. Nothing distinguishes between the hate cultivated for people we know as opposed to strangers — we’re all fair game for someone else’s vitriol. People have no problem playing armchair therapist; they analyze our lives from a computer screen and then proceed to deliver play-by-play commentary on how we should live our lives based on how they live theirs. Many have come to believe that an online representation of one aspect of our lives is the complete story, the whole of our lives. Who we are, the content of our character, is reduced to what we choose to publish. The choices we make — from what we wear to how we parent and whom we love — should be obvious based on the collective’s personal experience and we’re admonished in text or in forums for “not getting it”. We crave authenticity yet we vilify others for their public missteps, for being human. People talk smack behind our backs to then kiss-kiss, hey, how are you? to our face. People leave hateful comments tearing apart our appearance: Why is she naked in every picture on Instagram…ugh! Who does she think she is? Why does she wear such unflattering clothes? If she didn’t want to hear about how bad she looks she shouldn’t be posting pictures of herself online. Apparently, being public is an open invitation for hate, and it’s frightening that groups exist on the Internet devoted to the care and feeding of that hate.

It also makes it possible to trawl back through the incivility that has been directed at us:

We live in a country that espouses free speech, but many are forced into silence in fear of the hate avalanche. In a private Facebook group, many women talk about not reading the comments of their published articles out of self-preservation. “Don’t read the comments” is a constant refrain. Women leave social media because they’re beaten down by people in fear of losing their privilege. A whole group of people has been reduced to a patronizing “snowflake” moniker because of their inability to toughen up, and it’s as if the Internet has become Darwinian in the sense that only those who hate, and those who can withstand and endure that hate, survive. A few years ago, I was the subject of a man’s ire, someone whom I believe I knew (or at least had come into contact with during my agency career, which makes the whole situation that much more unsettling), who wrote about how much he hated me because I stood up for women who had been ridiculed online because of their appearance. Fifteen years ago, a small circle of literary bloggers posted cruel blind items about me and I remember being at work, in front of my computer, reading these posts and my whole body going numb.

There’s an excellent overview of ‘hate reading’ here:

Underlying all this is a weirdly common human tendency toward “hate-reading.” Call it that for short, at least, because it also includes “hate-listening” and “hate-watching.” In short, many people seem strangely drawn to material that they know, even before they’re exposed to it, will infuriate them. And hate-reading in its purest form involves not just seeking out the aggregated fodder of Media Matters or Newsbusters, but actually going straight to the source: a conservative mainlining Keith Olbermann; a liberal recklessly exposing herself to a Rush Limbaugh monologue.

A lot of us do this, but why? No one knows for sure, but there are a few potential explanations. One is that hate-reading simply makes us feel good by offering up an endless succession of “the emperor has no clothes” moments with regard to our political adversaries. In this view, we specifically seek out the anti-wisdom of whoever appears dumbest and most hateful as a means of bolstering our own sense of righteousness. “If the commentary is dumb enough, it may actually have a boomerang effect in that it reassures us that our opponents aren’t very smart or accurate,” said Mary McNaughton-Cassill, a media psychologist at the University of Texas San Antonio.

There’s a fascinating footnote in Radio Benjamin, loc 395-410, discussing Adorno’s description of Benjamin’s ideas as ‘radioactive’:

The full sentence reads, “Everything which fell under the scrutiny of his words was transformed, as though it had become radioactive,” … Although Adorno’s metaphor uses a different register of boundary crossing, the German radioaktiv, like the English radioactive, shares with Rundfunk, or radio, a connotation of atmospheric spreading, dispersal, and uncontrolled movement across and within borders and lines of containment; the airwaves, like the air or the atmosphere, represent a quasi-invisible scene or medium of transmission. While the German does not directly imply the coincidence of these two (roughly contemporary) modes of radiality, the notion of Benjamin’s gaze, and from there his work, effecting a radioactive transformation suggests the potentially dangerous, if also exciting and new, power of radio and its power to broadcast.

Radioactive ideas effect a transformation. Viral ideas simply pass through. The logic of social media platforms too easily inclines us towards a concern for virality. What we should aim for is to use their affordances to ensure radioactivity, even if this registers much less impressively on a numerical level.

In The Making of Donald Trump, David Johnston identifies the tactics used by Trump to deflect inquiries into his many shady dealings and questionable decisions. Sometimes this is a matter of outright threats, with an enthusiasm for litigation (1,900 suits as plaintiff) coupled with an explicitly articulated philosophy of vengeance proving a dangerous combination for any who dare to cross him. But somewhat contrary to his public image as a blundering fool, he is often much more subtle than this, engaging in strategies of deflection and misdirection with all the deftness of the most accomplished public relations manager. In other cases, it just becomes weird, with Trump willing to publicly deny that a recording he had previously admitted to be of his own voice was anything other than a hoax.

This combination of viciousness, skilfulness and brazenness has left him insulated from meaningful scrutiny. But what has he averted in this way? What might have happened but hasn’t? On page 154 Johnston offers a description which has caught my imagination:

Together, these strategies – muddying the facts and deflecting inquiries into past conduct – help ensure that Trump’s carefully crafted public persona will not be unmade. He will not suffer the curtain to be pulled back to reveal a man who tricked society into thinking he was all wise and all powerful.

This public persona which has been crafted, sometimes deliberately while at other times impulsively, remains intact. I’m interested in what such a ‘pulling back of the curtain’ requires to be effective: the sustained attention of an audience, a sufficient familiarity with the person(a) in question, a prolonged campaign to sort fact from fiction and a lack of contestation concerning this process of sorting.

What is being framed, somewhat unhelpfully, as a ‘post-truth era’ is the set of conditions under which this ceases to be possible. There are lots of ways in which we could try to explain them, not all of which are mutually exclusive. The collapse of authority in late modernity. The acceleration of communication. The weakening of journalism and the dominance of public relations. Theories of social change should be able to account for the specifics of such cases, rather than simply allowing them to be rendered thematically.

In InfoGlut, Mark Andrejevic takes issue with the assumption that fostering ‘disbelief’ or ‘challenge’ is necessarily subversive. As he puts it, “strategies of debunkery and information proliferation can work to reinforce, rather than threaten, relations of power and control” (loc 293). Recognising this in the abstract is important, but I intend to read more about the specific cases in which these tactics are used regressively, as I’m increasingly fascinated by the extent to which they are informed (or not) by epistemological and ontological understandings (even if these words are not used).

Under these conditions, what Andrejevic describes as the ‘big data divide’ seems more prescient by the day. From loc 464:

The dystopian version of information glut anticipates a world in which control over the tremendous amount of information generated by interactive devices is concentrated in the hands of the few who use it to sort, manage, and manipulate. Those without access to the database are left with the “poor person’s” strategies for cutting through the clutter: gut instinct, affective response, and “thin-slicing” (making a snap decision based on a tiny fraction of the evidence). The asymmetric strategies for using data highlight an all-too-often overlooked truth of the digital era: infrastructure matters. Behind the airy rhetoric of “the cloud,” the factories of the big data era are sprouting up across the landscape: huge server farms that consume as much energy as a small city. Here is where data is put to work – generating correlations and patterns, shaping decisions and sorting people into categories for marketers, employers, intelligence agencies, healthcare providers, financial institutions, the police, and so on. Herein resides an important dimension of the knowledge asymmetry of the big data era – the divide between those who generate the data and those who put it to use by turning it back upon the population. This divide is, at least in part, an infrastructural one shaped by ownership and control of the material resources for data storage and mining. But it is also an epistemological one – a difference in the forms of practical knowledge available to those with access to the database, in the way they think about and use information.


From The Revenge of the Monsters of Educational Technology, by Audrey Watters, loc 1187:

Many of us in education technology talk about this being a moment of great abundance—information abundance—thanks to digital technologies. But I think we are actually/also at a moment of great austerity. And when we talk about the future of education, we should question if we are serving a world of abundance or if we are serving a world of austerity. I believe that automation and algorithms, these utterly fundamental features of much of ed-tech, do serve austerity. And it isn’t simply that “robot tutors” (or robot keynote speakers) are coming to take our jobs; it’s that they could limit the possibilities for, the necessities of care and curiosity.

Understanding this relationship between austerity and abundance strikes me as a crucial question for political theory. One which we evade if we reduce the former to the latter or vice versa, seeing abundance as negating austerity (as Tyler Cowen does, for instance) or austerity as negating abundance (by robbing it of its social significance as a cultural change).

Rarely can a film have been as timely as Denial. It tells the story of the libel action the Holocaust-denying historian David Irving took against Deborah Lipstadt and her publisher, alleging that she had damaged his professional reputation as a historian by claiming he had wilfully distorted evidence. The film recounts the events leading up to the trial, before focusing on the trial itself and ending with the judge’s ruling that:

Irving has for his own ideological reasons persistently and deliberately misrepresented and manipulated historical evidence; that for the same reasons he has portrayed Hitler in an unwarrantedly favourable light, principally in relation to his attitude towards and responsibility for the treatment of the Jews; that he is an active Holocaust denier; that he is anti-Semitic and racist, and that he associates with right-wing extremists who promote neo-Nazism… therefore the defence of justification succeeds… It follows that there must be judgment for the Defendants.

The film seems remarkably salient at a time when the liberal punditry seems to have uniformly endorsed the notion that we have entered a post-truth era, concisely defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief”. The importance of truth, the urgency of fighting for it, runs through the film and is explicitly invoked in the framing of it as a cultural product, as Rachel Weisz makes clear here: “It’s a true story, it’s a fight for truth and justice”.

The writer David Hare expands on this point in the same clip, explaining how “it’s not based on a true story, it is a true story … the words from the trial are the exact words. I don’t attribute to David Irving any line that he is not on record as having said; everything he says, we know he said”. It was great to discover this because I found the trial scenes riveting, though I couldn’t help wondering whether the whole thing would have worked better on stage. The film seems to have underwhelmed critics, rather unfairly from my point of view, perhaps suggesting it was motivated by a commitment to realism of a sort liable to prove underwhelming on the big screen. However, what struck me most about the film was the epistemological confusion underlying it, something which I think reflects a lot about the contemporary discourse of ‘post-truth’ and its limitations.

The avowed realism of the film obscures the inevitable cuts that the constraints of storytelling necessitate. Irving had sued another historian at the same time, though that case did not go to court. He threatened a further historian with libel if passages concerning him weren’t removed from an upcoming book, prompting the American edition to be published with the passages intact while they were removed from the British edition. My point is not to criticise the film for excluding these details, despite their obvious relevance to the story, so much as to highlight the exclusions inherent in narrative. Likewise with the court case itself, where a few incidents selected from a long trial were expertly used to dramatic effect. Again, these aren’t criticisms, just a reminder that even factual narratives (a term I prefer to ‘true story’) inevitably entail selecting from the pool of available facts, within the (media- and genre-specific) constraints of effective storytelling.

Much of the film can be read in terms of rallying forces for a defence of truth. The drama of the film rests on success in this endeavour, after overcoming much initial adversity. But framing the hard-drinking, hard-thinking Scottish barrister as a hero sits oddly with the film’s commitment to truth. After all, he’s lionised for his rhetorical skills, his capacity to pick apart Irving’s authority in a performatively compelling way. His most successful tactics have nothing to do with the presentation of evidence, but rather involve getting under Irving’s skin in order to unsettle and undermine him. The concern here is not truth but persuasion. Specifically, the persuasion of a solitary judge, after Irving, the litigant, was persuaded to dispense with the jury because both sides agreed that the common folk could not be trusted to adjudicate on the truth when the relevant facts were as complex as they were in this case. Furthermore, the only thing that ensures the barrister is not cast as a mercenary is his deep commitment to this truth. This is slowly established over the course of the film, with Lipstadt eventually discovering that this is not just ‘another brief’ for him after all.

What made this film impressive to me was the way in which it explored the mechanics of persuasion in court, specifically how it was established convincingly that Irving had wilfully misrepresented evidence in order to make the case for Holocaust denial. In other words, it concerned the discursive machinery through which facts are consecrated and rendered socially efficacious. The apparent narratological inevitability of this being accompanied by a paean to truth speaks volumes about what has come to be accepted as ‘post-truth’. We might speak more accurately of post-fact. This is how Will Davies framed it in a New York Times essay:

Facts hold a sacred place in Western liberal democracies. Whenever democracy seems to be going awry, when voters are manipulated or politicians are ducking questions, we turn to facts for salvation.

But they seem to be losing their ability to support consensus. PolitiFact has found that about 70 percent of Donald Trump’s “factual” statements actually fall into the categories of “mostly false,” “false” and “pants on fire” untruth.

For the Brexit referendum, Leave argued that European Union membership costs Britain 350 million pounds a week, but failed to account for the money received in return.

The sense is widespread: We have entered an age of post-truth politics.

As politics becomes more adversarial and dominated by television performances, the status of facts in public debate rises too high. We place expectations on statistics and expert testimony that strains them to breaking point. Rather than sit coolly outside the fray of political argument, facts are now one of the main rhetorical weapons within it.

The declining efficacy of facts is understood to be problematic because it undermines appreciation of truth. But reality always permits of multiple characterisations. As Roy Bhaskar put it on pg 55 of Reclaiming Reality, “facts are things, but they are social not natural things, belonging to the transitive world of science, not the intransitive world of nature”. Facts are produced through interventions in the world, drawing on the labour of others and applying conceptual tools we rarely built ourselves. This is why a serious discussion of someone like Irving cannot avoid interrogating his proclaimed status as a professional historian, what this means and how it should shape our assessment of his capacity to marshal facts in authoritative ways. Indeed, this was crucial to making the case against him.

But if we see facts as self-grounded things, already made and waiting in the world to be discovered, it becomes difficult to acknowledge this. This might not matter when ‘our’ facts are socially efficacious, happily endorsed by all those we encounter and reflected back to us as common sense in the culture we engage with. But when these start to break down, the construction of ‘truth’ faces a fundamental tension: if facts are given then conflict over them must in some way reflect non-factual considerations, but if non-factual considerations consistently influence ‘matters of fact’ then facts cannot be given. This creates a crisis when we reach a situation in which facts have been ubiquitously weaponised. As Davies put it, “If you really want to find an expert willing to endorse a fact, and have sufficient money or political clout behind you, you probably can”.

This inconvenient truth could be ignored as long as there was a consensus in place, one which has now broken down, with the apparent mystery of our ‘post-truth’ era going hand-in-hand with a profound mystification of the political dimensions to how the consensual era of ‘truth’ preceding it was established. My point in writing this isn’t to preach constructionism. I share the ethos of Bhaskar’s book, one of the most powerful works of philosophy I’ve read: reclaim reality. Reclaiming reality involves recognising the reality of social construction, but resisting the dissolution of ‘truth’ into this. Figures like Irving thrive in the space opened up by the antinomies of (post)truth. If we reclaim reality, we can starve them at an epistemological level, before defeating them at a political level.

In John Thompson’s Merchants of Culture, there’s an interesting remark about the structural position of first time authors which I think has wider purchase. From pg 200:

Ironically, in a world preoccupied by numbers, the author with no track is in some ways in a strong position, considerably stronger than the author who has published one or two books with modest success and muted acclaim, simply because there are no hard data to constrain the imagination, no disappointing sales figures to dampen hopes and temper expectations. The absence of sales figures sets the imagination free. The first-time author is the true tabula rasa of trade publishing, because his or her creation is the book for which it is still possible to imagine anything and everything.

A world where metrics are ubiquitous is a world where imagination has died. When everyone has a track record, the space to imagine someone’s future as radically different from their past collapses.

An important idea offered by Mike Caulfield. The embrace of frictionless sharing and the relentless pursuit of engagement have created the problems which are now being naturalised by the emerging ‘did Facebook lead to Trump’ discourse:

We have prayed at the altar of virality a long time, and I’m not sure it’s working out for us as a society. If reliance on virality is creating the incentives to create a culture of disinformation, then consider dialing down virality.

We know how to do this. Slow people down. Incentivize them to read. Increase friction, instead of relentlessly removing it.

Facebook is a viral sharing platform, and has spent hundreds of millions getting you to share virally. And here we are.

What if Facebook saw itself as a deep reading platform? What if it spent hundreds of millions of dollars getting you to read articles carefully rather than sharing them thoughtlessly?

What if Facebook saw itself as a deep research platform? What if it spent its hundreds of millions of dollars of R & D building tools to help you research what you read?

I like the concept of ‘social editor’, though think it has to be treated carefully:

In an earlier post for this blog, we argued that Facebook has crossed the line from being a mere host of user-created content to functioning as an editor of (professional) media content, at least for certain parts of its website, such as Trending Topics. However, it is also clear that Facebook is not an editor in a traditional sense, and certainly not in the sense that media law and policy are accustomed to. Facebook does not itself produce news, but it does aggregate news, it closes deals with media publishers for Instant Articles and even commissions content, for example for Facebook Live. Facebook establishes editorial and community guidelines – guidelines that apply to Facebook’s users, not to Facebook, as Facebook itself has no editorial mission. The social network plays a pivotal role in providing the edited recommendation service ‘Trending Topics’, with its ability to bring important issues to the attention of a wide range of users and to rank other topics into oblivion. Most importantly, social networks like Facebook organise the way in which the public debate around content takes place. It does so by collecting and integrating data from Facebook users into the recommendation process, by calculating popularity and shareability and by offering an entire architecture of tools for users to engage and share. This makes Facebook first and foremost a social editor that exercises control not only over the selection and organisation of content, but also, and importantly, over the way we find, share and engage with that content.

A really useful starting point for Pew research on this, saved here for future use:

Digital news continues to evolve, pushed by a variety of innovations in recent years, from groundbreaking new technologies like virtual reality and automated reporting to experiments on social platforms that have altered campaign coverage. As journalists and media practitioners gather for the annual Online News Association Conference, here are 10 key findings from recent Pew Research Center surveys and analyses that show how these rapid digital shifts are reshaping Americans’ news habits.

Saving this five part series to come back to properly later:

Money is chasing money. Podcast advertising expanded at a 48 percent rate last year, and it’s forecast to grow about 25 percent a year through 2020. By that point, it would be approaching half a billion dollars in annual ad revenue. That growth is rising from a small base, yes, but it’s very reminiscent of the old Interactive Advertising Bureau charts of Internet advertising we saw at the turn-of-the-century: tiny numbers growing explosively.