Towards the end of Kill All Normies, Angela Nagle discusses the chilling effect liable to ensue from the online harassment which journalists critical of the alt-right often now find themselves subject to. From pg 118:

Multiple journalists and citizens have described in horrifying detail the attacks and threats against those who criticize Trump or figures of the online Trumpian right, especially if the critic is female, black or Jewish, but also if they’re a ‘cuckservative’. They now have the ability to send thousands of the most obsessed, unhinged and angry people on the Internet after someone if they dare to speak against the president or his prominent alt-light and alt-right fans. Although the mainstream media is still quite anti-Trump, it would be naïve to think this isn’t going to result in a chilling of critical thought and speech in the coming years, as fewer and fewer may have the stomach for it.

Perhaps I’m being a pedant but I found myself frustrated by the phrase “ability to send” here. I’m not denying this possibility, in fact I’m fascinated by what I’ve come to think of as ‘shadow mobilisation’, but it’s not obvious to me this is what happens here. There clearly isn’t anything approaching a command-and-control dynamic, something which I think Nagle wouldn’t dispute, hence we need to be careful about how we characterise the co-ordination and coercion which operates in different cases. I’ve rarely encountered an issue for which empirical research seemed so politically urgent.

Call for Papers – Edited Collection

Online Othering: Exploring the Dark Side of the Web

Editors: Dr Karen Lumsden (Loughborough University) and Dr Emily Harmer (University of Liverpool)

The Internet plays a vital role in many aspects of our social, political and cultural lives and in the early days of its expansion there was much enthusiasm for its potentially transformative role in providing a space for individuals to construct their identities, communicate with others and share ideas and concerns. A perhaps unanticipated consequence of these developments has been the extent to which some individuals and groups have used this freedom to engage in hateful or discriminatory communicative practices online in these loosely regulated spaces, often hiding behind the cloak of anonymity. For instance, women on Twitter and in the public eye have found themselves subject to online harassment, sexism and trolling, while the aftermath of the Brexit vote saw a rise in reports of hate speech including racism, Islamophobia, and anti-Semitism, in both online and offline contexts.

This edited collection explores the othering and discrimination propagated and encountered by individuals online and in social media contexts and cultures. It problematizes and analyses the dichotomy presented between real and virtual worlds (and spaces) by exploring the motivations behind certain offending and othering behaviours, and the impact this has on the targets of online abuse and hate speech. This includes the extent to which online othering constitutes a new phenomenon and how the motivations for committing forms of cyber-abuse, cyber-hate, and othering relate to the expression of these attitudes and behaviours in the offline context.

It explores the extent to which forms of information and communication technologies facilitate, exacerbate, and/or promote, the enactment of traditional offline offences (such as domestic abuse and stalking). Finally, the collection addresses the role of the police and other agencies in terms of their interventions, and the regulation and governance of virtual space(s).

The edited collection is an output from a one-day conference on Online Othering hosted at Loughborough University. We are seeking additional contributions to the volume from scholars and researchers working in disciplines such as sociology, communication and media studies, criminology, political studies and/or gender studies.

Contributions should address the ways in which various groups and identities are subjected to othering in online environments. This can include news websites, social media platforms (e.g. Twitter, Facebook, Instagram, YouTube), blogs, and forums. We are also interested in contributions which explore othering across multiple contexts. Potential topics can include, but are not limited to:

  • Trolling and gendered online abuse/harassment;
  • Cyber-bullying or cyber-stalking;
  • Hate crime/speech online;
  • Homophobia and/or transphobia;
  • Online representations of disability;
  • Class bigotry;
  • Racism, Islamophobia, or anti-Semitism;
  • Sexting and/or revenge pornography;
  • Brexit, Trumpism and the rise of the ‘alt-right’.

The edited collection proposal is to be submitted to Palgrave as part of their Cyber-Crime series by Autumn 2017. For accepted submissions, the finalised chapters will need to be received by the end of September 2018.

Submissions:

Interested contributors should email a title, abstract (250 words) and biography (100 words) to both Karen Lumsden K.Lumsden@lboro.ac.uk and Emily Harmer E.Harmer@liverpool.ac.uk by 31 August 2017. Authors will be informed of decisions by 30 September 2017.

This essay on ‘the cult of cruelty’ has some interesting points to make about the role of what danah boyd calls persistence and searchability in facilitating incivility online. It makes it possible to trawl through someone’s activity, enabling a degree of engagement with choices and representations that would not otherwise be possible:

I’ve been thinking about this a lot lately — the ways in which people exact their hurt. It’s common for people to subtweet about their hate-follows and hate-reads. Nothing distinguishes between the hate cultivated for people we know as opposed to strangers — we’re all fair game for someone else’s vitriol. People have no problem playing armchair therapist; they analyze our lives from a computer screen and then proceed to deliver play-by-play commentary on how we should live our lives based on how they live theirs. Many have come to believe that an online representation of one aspect of our lives is the complete story, the whole of our lives. Who we are, the content of our character, is reduced to what we choose to publish. The choices we make — from what we wear to how we parent and whom we love — should be obvious based on the collective’s personal experience and we’re admonished in text or in forums for “not getting it”. We crave authenticity yet we vilify others for their public missteps, for being human. People talk smack behind our backs to then kiss-kiss, hey, how are you? to our face. People leave hateful comments tearing apart our appearance: Why is she naked in every picture on Instagram…ugh! Who does she think she is? Why does she wear such unflattering clothes? If she didn’t want to hear about how bad she looks she shouldn’t be posting pictures of herself online. Apparently, being public is an open invitation for hate, and it’s frightening that groups exist on the Internet devoted to the care and feeding of that hate.

It also makes it possible to trawl back through the incivility that has been directed at us:

We live in a country that espouses free speech, but many are forced into silence in fear of the hate avalanche. In a private Facebook group, many women talk about not reading the comments of their published articles out of self-preservation. “Don’t read the comments” is a constant refrain. Women leave social media because they’re beaten down by people in fear of losing their privilege. A whole group of people has been reduced to a patronizing “snowflake” moniker because of their inability to toughen up, and it’s as if the Internet has become Darwinian in the sense that only those who hate, and those who can withstand and endure that hate, survive. A few years ago, I was the subject of a man’s ire, someone whom I believe I knew (or at least had come into contact with during my agency career, which makes the whole situation that much more unsettling), who wrote about how much he hated me because I stood up for women who had been ridiculed online because of their appearance. Fifteen years ago, a small circle of literary bloggers posted cruel blind items about me and I remember being at work, in front of my computer, reading these posts and my whole body going numb.

There’s an excellent overview of ‘hate reading’ here:

Underlying all this is a weirdly common human tendency toward “hate-reading.” Call it that for short, at least, because it also includes “hate-listening” and “hate-watching.” In short, many people seem strangely drawn to material that they know, even before they’re exposed to it, will infuriate them. And hate-reading in its purest form involves not just seeking out the aggregated fodder of Media Matters or Newsbusters, but actually going straight to the source: a conservative mainlining Keith Olbermann; a liberal recklessly exposing herself to a Rush Limbaugh monologue.

A lot of us do this, but why? No one knows for sure, but there are a few potential explanations. One is that hate-reading simply makes us feel good by offering up an endless succession of “the emperor has no clothes” moments with regard to our political adversaries. In this view, we specifically seek out the anti-wisdom of whoever appears dumbest and most hateful as a means of bolstering our own sense of righteousness. “If the commentary is dumb enough, it may actually have a boomerang effect in that it reassures us that our opponents aren’t very smart or accurate,” said Mary McNaughton-Cassill, a media psychologist at the University of Texas San Antonio.

I’ve just finished reading the excellent This Is Why We Can’t Have Nice Things by Whitney Phillips. It offers fascinating insights into the evolution of ‘trolling’ as a practice, tracing it from its original form of sub-cultural self-identification to the diffusion of the label across the entire spectrum of online activities deemed to be anti-social. Her overarching thesis is that trolling is framed as an aberration relative to the mainstream culture, when in fact it represents the logic of that culture taken to its extreme. Trolling only makes sense against a background that facilitates it, such that trolls should be read as an indictment of contemporary culture rather than a threat to it. This diagnosis is most acute when it comes to broadcast media, with trolls expertly hacking the media for their own amusement in a way that takes advantage of the media’s propensity for those very things (misleading information, lack of understanding, morbid preoccupations and a deep need for attention) which trolls are seen as embodiments of.

Her operationalisation of ‘troll’ as a self-identity is an important part of the book. The problem I have with the contemporary use of ‘troll’ is that it subsumes a wide range of behaviours into a singular pathologised description. To point this out is not to defend any of these behaviours, only to remind us that we should not assume people do similar, or even the same, things for the same reasons. The diversity of trolling behaviours gets obliterated by the seemingly straightforward designation of ‘troll’, something which I suspect many people now think they unproblematically recognise when they see it. But underlying ‘trolling’ we might find the urge to incite and manipulate for amusement (i.e. ‘troll’ in the self-identifying sense), online activists who see themselves as fighting a culture war through their keyboards, outpourings of hatred reflecting a generalised contempt for other human beings, the desperate externalisations of someone unable to cope, or any number of other things. We need to recognise this variety at an ontological level while nonetheless remaining attentive to the epistemological and methodological problem of how, if at all, we are able to read back ‘offline’ motivations from ‘online’ behaviour.

Towards the end of the book, Phillips talks about her experience of out-trolling trolls. She recognises that this runs contrary to the familiar advice “don’t feed the trolls”, something which I’ve always found to work just as well face-to-face as on the internet:

This strategy—of actively trolling trolls—runs directly counter to the common imperative “don’t feed the trolls,” a statement predicated on the logic that trolls can only troll if their targets allow themselves to be trolled. Given that the fun of trolling inheres in the game of trolling—a game only the troll can win, and whose rules only the troll can modify—this is sound advice. If the target doesn’t react, then neither can the troll. But even this decision buys into the trolls’ game. The troll still sets the terms of their target’s engagement; the troll still controls the timeline and the outcome. (pg. 160)

I don’t quite follow the reasoning here. A refusal to engage only leaves the troll in control in a formal sense of the term. In practice, there isn’t a timeline or an outcome, with an enormous caveat I will get to later in the post. Instead, she details a strategy of out-trolling the trolls, performing an earnest response to their attempts at engagement in a way which reveals their own investment in trolling.

The dynamic shifts considerably if the target counters with a second game, one that collapses the boundary between target and troll. In this new game, the troll can lose and, by taking umbrage at the possibility, falls victim to his or her own rigid rules. After all, it’s emotion—particularly frustration or distress—that trips the troll’s wire. In most cases, the troll’s shame over having lost, or merely the possibility that he or she could lose, will often send the troll searching for more exploitable pastures. I frequently utilized this strategy in my own dealings with random anonymous trolls, particularly on my quasi-academic blog. (pg. 160)

I’d like to have seen more examples of what she means here, but I find it an intriguing idea. As I understand it, her notion of ‘trolling rhetoric’ entails seeking to provoke another person to express their concerns in a way deemed to be excessive, revealing what is taken to be their over-investment in their online activity. Underlying this is a belief that “nothing should be taken seriously, and therefore … public displays of sentimentality, political conviction, and/or ideological rigidity” are seen as a “call to trolling arms”, with the ensuing trolling often understood in an explicitly pedagogical way. The lulz enjoyed through this represent a “pushback against any and all forms of attachment” but, as she notes, trolls themselves are deeply attached to lulz (p. 25). There’s a power in revealing this attachment, inciting trolls to perform it through the very rhetorical strategies through which they seek to dominate others. Ignoring them leaves the troll unmoved; engaging in this way reveals the deep paradox at the heart of their behaviour.

Phillips recognises how contentious such a strategy can appear, honestly recounting her own ambivalence about the possibility. It nonetheless has a certain appeal, specifically the idea that we might “troll better, and to smash better those who troll us”. But there are two huge caveats to its employment in the academic context within which and for which I’m writing. Firstly, how would university departments and communications offices respond to examples of ‘out-trolling’? The evidence we have suggests not very well. Secondly, do we have any reason to assume that those who are increasingly targeting academics online represent trolls in this self-identified sense? I think the argument offered by Phillips is deeply plausible but suspect it only holds true for those who share this sub-cultural identity. Those who, for instance, see what they do as activism are much less likely to be moved by it, and engagements of this kind could be deeply counter-productive.

In the last few months, I’ve begun to seriously plan a much more sophisticated follow-up to Social Media for Academics, investigating the implications of social media for academic labour. A crucial aspect of this, which seems likely to become much more so with each passing year, concerns the toxicity of many of the online environments in which academics are participating. If academics increasingly find themselves expected to use social media as a means of demonstrating engagement or at least signalling engagement-willingness then the toxicity of these environments will become an increasingly central labour issue.

My fear is that we will have the worst of both worlds. Academics will be coerced outwards into these online environments under the sign of ‘impact’, while finding themselves blamed if anything they do online brings disapprobation upon their employer. It’s easy to imagine how the moralism we see lurking beneath the impact agenda (those who claim not to ‘get it’ should be ‘ashamed’, as I recently heard an extremely senior person say) could find similar expression in managerial expectations of social media use. On our present trajectory, the likely outcome will be an individualised one: take responsibility for your own engagement and take the blame if you bring about any perceived damage to the corporate brand. This problem is compounded because, as Tressie McMillan Cottom puts it, “the risks and rewards of presenting oneself ‘to others over the Web using tools typically associated with celebrity promotion’ (Barone 2009) are not the same for all academics in the neo-liberal ‘public’ square of private media.” Far from counteracting exclusion in higher education, social media for academics is amplifying the risks for those already marginalised.

As an example of how this is developing, consider this dispiriting reflection on being an academic video blogger on YouTube which Philip Moriarty passed on to me:

One of the main reasons why I think the promise of YT as a place where intelligent life might flourish is failing is the well-documented level of trolling and hatred that permeates the site, and which threatens to silence any but the most obnoxious or innocuous voices. I stopped making regular videos a couple of years ago when the vitriol I was receiving for having the temerity to make unpopular content spilled over into my personal life. In addition to receiving the usual grammatically-challenged insults and thinly-veiled threats the university I was working at was also contacted several times by folk demanding my removal. Eventually these ‘downsides’ to being an academic on Youtube outweighed the benefits and I gave up making public videos entirely.

And it isn’t just me. Over the past three years I have known four other academics leave Youtube for reasons very similar to my own. These were folk who were similarly motivated to bridge the gap between ‘town and gown’, between universities (which are often seen as elitist) and the wider world represented on social media. These people wanted to contribute their knowledge and also to learn from the contributions of others. They wanted to find ways to speak and to listen in ways which were more inclusive, and which the diverse communities on Youtube seemed to be able to offer. These fine people, like myself, became disheartened by the inability of YT to foster anything but the lowest common denominator, the most clickbaity, the most provocative, the most crudely entertaining, and the failure of the platform to support those who wanted to raise the bar.

Some might say (and indeed have said) that this toxicity is just a natural part of the online ecology and we should grow a thicker skin, or not feed the trolls, or any of the other platitudes that are trotted out to excuse bad behaviour, but I don’t think that’s good enough. When the comment section under a video is two thirds insult or threat then the value of that comment section drops to zero. No one with anything to contribute wants to be part of it. When you have to wonder if your latest video will prompt some faceless anti-intellectual gonk to contact your employer then the chilling effect takes hold and you censor yourself, (God forbid you should talk positively about feminism, or BLM, or the representation of women in video games). The number of eyeballs on the site might increase but the I.Q. of the site goes down.

https://medium.com/@fredmcv/intelligent-life-on-youtube-aa46f4404861#.37wdwagtp

The architecture of these platforms militates against their sustained pedagogical use. It might be that, as Pausé and Russell put it, “Social media enables scholarship to be publicised more widely within the academy, and in addition to that, it enables scholarship to become part of broader social conversations”. The problem is that the incentives of these platforms have over time proved to be generative of a dialogical toxicity which tends to be obscured by the high-minded rhetoric of public engagement. The promise that social media might “bridge the gap between ‘town and gown’” is proving to be rather misleading. A large part of my new project will be exploring the implications of this at the level of the institutional politics of the university, with a particular focus on what it means for academic labour.

The role of social media for academics discourse in obscuring these issues, mystifying the complex politics of social media in the university through breathless reiteration of the individual benefits to be accrued through engagement, means it will be a central object of critique for the project. But I want to avoid slipping into utopian/dystopian, pro/anti framings of social media for academics. I still believe in its scholarly importance and its capacity to inculcate solidarity and (in limited ways) flatten hierarchies. There’s a great example of the latter in this paper by Pausé and Russell, which I’m otherwise pretty critical of:

Accessibility means individuals who are not academically trained are able to learn about a field of research and contribute to it, bringing their own ideas and experiences to the table.† And accountability has enabled greater criticism of the process of scholarship and research. Through connecting on social media, marginalised people have been able to gather sufficient force to challenge the conventions of research; to insist on an intersectional perspective. The lived experience of a Māori woman living in Aotearoa New Zealand can challenge the theorised understanding of an academic.‡ People have objected to being studied, and have demanded the right to participate in framing the discussion. For example, the Health at Every Size® (HAES) movement has largely been led by advocates from within what is known as the Fatosphere (Harding, 2007), prompting research that questions the basic assumptions made about the relationship between body size and health by health scholars and those working in the health field. This both challenges and enriches scholars’ research. There is now a rich empirical literature on the efficacy of HAES (Burgard, 2014).

 

In the last few months, I’ve been thinking a lot about online harassment. Even writing that sentence, I come face-to-face with my own privilege, as ‘online harassment’ is something I’m able to elect to think about rather than an unavoidable feature of my use of the internet. But the evidence is clear that online harassment is ubiquitous. A 2014 Pew Study found that 73% of adult internet users have seen someone be harassed in some way online and 40% have personally experienced it:

  • 60% of internet users said they had witnessed someone being called offensive names
  • 53% had seen efforts to purposefully embarrass someone
  • 25% had seen someone being physically threatened
  • 24% witnessed someone being harassed for a sustained period of time
  • 19% said they witnessed someone being sexually harassed
  • 18% said they had seen someone be stalked

http://www.pewinternet.org/2014/10/22/online-harassment/

The witnessing figure from this US study interests me because it suggests that most internet users must be aware of the reality of online harassment, even if they seek to explain it away to whatever extent. The study makes a distinction between two categories of online harassment:

In Pew Research Center’s first survey devoted to the subject, two distinct but overlapping categories of online harassment occur to internet users. The first set of experiences is somewhat less severe: it includes name-calling and embarrassment. It is a layer of annoyance so common that those who see or experience it say they often ignore it.

The second category of harassment targets a smaller segment of the online public, but involves more severe experiences such as being the target of physical threats, harassment over a sustained period of time, stalking, and sexual harassment.

http://www.pewinternet.org/2014/10/22/online-harassment/

The Pew study found the distribution of these experiences to be structured by gender and age. Young adults (18-29) as a whole are more likely to experience either category of harassment but young women (18-24) are overwhelmingly the targets of the more extreme behaviours:

[Figure: Pew Research Center chart, ‘Online Harassment’ (22 October 2014), showing experiences of harassment by age and gender]

But the gendering of these experiences shouldn’t lead us to dismiss the ‘lesser’ category of harassment. This too can be gendered, in its cumulative and ubiquitous character, as Audrey Watters conveys on loc 1771 of her Monsters of Educational Technology:

I speak from experience. On Twitter, I have over 26,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men –strangers, typically, but not always –jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain online harassment to me. Men explain blogging to me. Men explain, they explain, they explain. It’s exhausting. It’s insidious. It doesn’t quite elevate to the level of harassment, to be sure; but these microaggressions often mean that when harassment or threats do occur, women like me are already worn down. Yet this is all part of my experiences online. My experiences. Women’s experiences. My friends’ experiences.

There’s a complexity to this behaviour which the Pew study doesn’t capture. It recognises that these behaviours are much more common on some platforms than on others, but it doesn’t attempt to look at these divergences demographically in terms of use of platforms. It’s important to recognise that certain online environments are what we might think of as ‘harassogenic’ in a way that’s susceptible to explanation, including though not limited to deliberate choices by the companies in question to calibrate the architecture of their platforms to maximise attention, engagement, return etc. in line with commercial incentives:

  • 66% of internet users who have experienced online harassment said their most recent incident occurred on a social networking site or app
  • 22% mentioned the comments section of a website
  • 16% said online gaming
  • 16% said in a personal email account
  • 10% mentioned a discussion site such as reddit
  • 6% said on an online dating website or app

http://www.pewinternet.org/2014/10/22/online-harassment/

The mechanisms available within a platform to respond to harassment are a clear function of those choices, as well as shaping the character of the platform through parameterisation of harassment and responses to it. The Pew study found that low-level harassment tended to lead to single-step responses and high-level harassment tended to lead to multi-step responses. Surprisingly, 75% of those who responded thought their decision made the situation better, though this raises an obvious question about the distribution of this experience between the two categories.

  • 47% of those who responded to their most recent incident with online harassment confronted the person online
  • 44% unfriended or blocked the person responsible
  • 22% reported the person responsible to the website or online service
  • 18% discussed the problem online to draw support for themselves
  • 13% changed their username or deleted their profile
  • 10% withdrew from an online forum
  • 8% stopped attending certain offline events or places
  • 5% reported the problem to law enforcement

http://www.pewinternet.org/2014/10/22/online-harassment/

When seen against this background, the drive within universities to incite academics to engage online can seem rather problematic. As Tressie McMillan Cottom puts it, “the risks and rewards of presenting oneself ‘to others over the Web using tools typically associated with celebrity promotion’ (Barone 2009) are not the same for all academics in the neo-liberal ‘public’ square of private media.” The increasing levels of political polarisation, as well as the specific problem of organised conservative and alt-right groups seeking to highlight what they deem to be problematic academic speech online, reveal how this issue is intensifying. Given that, as Tressie observes, universities use “engaged academics as an empirical measure of a university’s reputational currency”, online harassment must be seen as a central issue of academic freedom and academic labour.

We need to understand this issue in terms of broader structures of oppression, while also recognising the specific characteristics of digital environments that set the parameters of its online manifestations. From loc 1677 of The Monsters of Educational Technology by Audrey Watters:

Harassment – of women, people of color, and other marginalized groups – is pervasive online. It’s a reflection of offline harassment, to be sure. But there are mechanics of the Internet –its architecture, affordances, infrastructure, its culture –that can alter, even exacerbate what that harassment looks like and how it is experienced.

From loc 1843-1865 she takes apart some of the facile responses this issue can receive:

The answer can’t simply be to tell women to not use their real name online. If part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution. The answer can’t simply be “don’t blog on the open Web.” Or “keep everything inside the ‘safety’ of the walled garden, the learning management system.” If nothing else, this presumes that what happens inside siloed, online spaces is necessarily “safe.” I’ve seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I’ve seen heavy-handed moderation, where marginalized voices find their input is deleted. I’ve seen zero moderation, where marginalized voices are mobbed. The answer can’t simply be “just don’t read the comments.” I would say that it might be worth rethinking “comments” on student blogs altogether – or at least questioning the expectation that students host them, moderate them, respond to them. See, if we give students the opportunity to “own their own domain,” to have their own websites, their own space on the Web, we really shouldn’t require them to let anyone that can create a user account into that space. It’s perfectly acceptable to say to someone who wants to comment on a blog post, “Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain.”

From Rethinking Social Exclusion, by Simon Winlow and Steve Hall, pg 73:

Political protests these days are taken not as an indication that something is going wrong and that a significant number of the population are dissatisfied with the nation’s political leadership. Rather, they seem to indicate that a healthy and vibrant democracy is in place, one that welcomes political contestation and vigorous public debate about government policy. ‘Look at the wonderful world liberalism has created!’, our politicians proclaim. ‘Political protests like this would never be tolerated in a non-democratic totalitarian regime!’ Of course, when the demonstration is complete, nothing has changed. The political protest ends up continuing only for a short time as an online blog or a Twitter post, offering nothing more than a cathartic opportunity to vent one’s spleen accompanied by the sad recognition that in all likelihood no one is listening, and no one really cares. It is also worth considering whether the peaceful protest now offers nothing more than an opportunity for the protestor to relinquish their subjective sense of duty to battle injustice. Once the protest is complete, and the world continues unchanged, the subject is allowed the comfort of having registered her dissatisfaction; whatever happens, it does so ‘not in my name’.

It’s the final line of this passage, about relinquishing one’s subjective sense of duty to battle injustice, which I’ve been thinking about a lot recently. How much of what is seen to be ‘trolling’ online represents a frustrated, even mutilated, impulse towards collective action?

In a way Winlow and Hall are too rosy in their framing here, positioning the pseudo-catharsis of social media as something that follows from the frustrations of contemporary public protest. What about when there is no prior collectivity, however frustrating and frustrated? What does the individualised rage we see seeking satisfaction through social media mean for the possibility of collectivity in the future?

What about the experience of mediated collectivity: how does a symbolic sense of ‘us’, others like oneself seeking outlets for ‘our’ rage, leave what might otherwise become a solidaristic impulse locked into this destructive register?

They make a similar point with other co-authors in Riots and Political Protest. From pg 164:

If the remaining logic is simply that the protest enables pissed-off individuals to cathartically release their pent-up frustration and momentarily draw strength from being around others who feel the same way, before returning to their lives to again be subject to the same objective causes of their frustration, then we can begin to see the limitations that have been imposed upon democratic political protest.

Collaborative Work and Social Media: Responding to the rapid spread of provocative content

Special issue call for the Journal of Computer Supported Cooperative Work

New extended deadline for submissions March 28th 2016!

These call details are also available online at https://sites.google.com/site/digitalwildfireesrc/jcscw

Special issue editors

Marina Jirotka – University of Oxford

William Housley – Cardiff University

Rob Procter – University of Warwick

Helena Webb – University of Oxford

The rapid spread of provocative content on social media

Social media platforms such as Facebook, Twitter, Instagram, Tumblr etc. create new spaces in which users can ‘meet’, interact and exchange content. This content is often then forwarded on through shares, retweets etc. so that single posts can be seen and responded to by multiple others. Our modern status of hyperconnectivity means that social media content can spread spontaneously across the globe in a very short period of time. As social media platforms have gained in popularity over the last ten years they have become particularly associated with the posting and rapid spread of provocative content. This provocative content may take various forms including: extreme or inflammatory sentiment – in particular relating to politics, class, race, sexuality or gender; inflammatory or incendiary comments directed towards particular individuals or groups; commentary and speculation regarding unfolding social events and crises; political debates and campaigns; and humanitarian and solidarity messages and campaigns.

Provocative content on social media can undoubtedly take constructive forms – for instance in the galvanising of a clean-up effort after the London riots of 2011 and the spread of solidarity messages such as #JeSuisCharlie #BlackLivesMatter. However much public and political attention has focused on the prevalence and rapid spread of more negative content such as hate speech, harassment and unverified rumour. Concerns have been expressed over the harms this content can cause to individuals, groups and populations, and questions have been raised over how social media can and should be regulated. Provocative content at times overlaps with illegal content and in some countries may be actioned under civil or criminal codes in relation to defamation, hate crime etc. However many kinds of provocative content do not breach any legal codes. Similarly, most social media platforms adopt terms of use that enable them to remove certain kinds of offensive or harmful material where necessary but allow users to post provocative content of various kinds. Indeed, many of these platforms are run on principles supporting freedom of expression and actively uphold the right for posts and posters to be provoking and/or inflammatory.

A 2013 report by the World Economic Forum (WEF) characterises contemporary concerns over the spread of provocative content across social media. The report highlights the global risk factor of ‘digital wildfires’: scenarios in which provocative content involving multiple users spreads widely and rapidly across social media and results in serious negative offline consequences. For example the massive spread of misinformation can harm the reputation of individuals, organisations and markets before there is a chance to correct it. Alternatively, abusive and incendiary messages can cause considerable harm to individuals and generate social tension during critical events such as civil disturbances, health scares and natural disasters. The WEF report argues that the risks posed by digital wildfires prompt re-evaluation of the current governance of digital social spaces. This may take the form of new legal or technological restrictions on content balanced against the principle of freedom of speech. Alternatively, there is scope to promote a ‘global digital ethos’ by encouraging social media users to behave responsibly in the monitoring and regulation of their own and others’ online behaviours.

CSCW and social media

Work conducted within the framework of computer supported cooperative work offers to greatly advance understanding of the rapid spread of provocative content on social media. In the first instance, CSCW perspectives can shed light on the role of collaboration in the occurrence of these socio-digital phenomena. Relevant research questions include: how does the rapid spread of different kinds of provocative content occur in real time? How do social media users, who may not even know each other, collude and collaborate to enable the spread of this content? How might these users collaborate to adopt responsible online behaviours and what forms would these behaviours take? How can the online and offline consequences of provocative online content be identified, assessed, replicated and understood? What roles can different forms of regulation play in the management of the spread of provocative content and the pursuit of a global digital ethos?

In addition, CSCW approaches also support collaborative interdisciplinary research on the prevalence of provocative content on social media. For instance, opportunities exist in: collaborative and crowd source coding for phenomena such as the spread of rumour or hate speech; the co-design of digital tools or platforms capable of scoping and analysing the propagation of provocative content; and collaborative and interdisciplinary approaches to ethics and responsible innovation with regard to social media.

Journal special issue

The special issue takes up these research opportunities by exploring the relationships between collaborative work and social media in the context of the rapid spread of provocative content. For this special issue we welcome contributions including, but not limited to:

  • Case studies or comparative analyses of social media phenomena involving the rapid spread of provocative content.
  • Papers that report on the ‘collaborative work’ of digital agents in propagating, responding to and regulating the content and diffusion of provocative content on social media. This may include: the design and use of digital tools/platforms to scope and assess online content; crowd-source coding for provocative content; or qualitative examinations of the lived orderliness and practical accomplishment of social media interactions.
  • Examinations of governance practices relating to the regulation of digital social spaces. For example, work based ethnographic studies of social media regulation and response to the spread of provocative content during critical events such as disasters, civil emergencies or public health scares.
  • Papers that report on collaborative and interdisciplinary approaches to ethics and responsible innovation for the governance of digital social spaces.
  • Papers that report on the opportunities and challenges presented by the collaborative interdisciplinary study of the spread and prevalence of provocative content on social media.

We seek contributions from a range of disciplinary backgrounds that can advance theoretical, practical, technical and/or social understandings in CSCW and related fields.

Submission details and further information

Please submit your paper by March 7th 2016. Round 1 notifications will be sent in mid-June 2016.

Manuscripts must follow the journal’s format standards, as set out in the Instructions for Authors (pdf, 366 kB).

Submit manuscripts to http://www.editorialmanager.com/cosu/ and indicate in your cover letter that the submission is for the special issue.

For questions and queries please contact coordinating guest editor helena.webb@cs.ox.ac.uk

The special issue forms part of the ongoing research project ‘Digital Wildfire: (mis)information flows, propagation and responsible governance’. To find out more about the project, visit our website www.digitalwildfire.org or contact helena.webb@cs.ox.ac.uk.

The social ontology of trolling paper I’ve been pondering recently probably wouldn’t work for this but I plan to attend nonetheless:

Scolds to Trolls: Social and Legal Responses to Visible and Audible Women

A one-day symposium: September 15th 2015

Organised by the Centre for Law and Society at Lancaster University Law School

Keynote Speaker: Professor Feona Attwood, Professor of Cultural Studies, Communication and Media, Middlesex University, UK

Theme

Underlying the trolling of visible and audible women is the deeply entrenched misogynistic idea of silencing women. Trolling is arguably just the latest methodology used to keep women silenced. The process of silencing women has been ongoing for centuries. In the middle ages, women were silenced by various methods, one of which was the scold’s bridle: a cast iron cage fitted over a woman’s head, which included a metal plate with spikes on it that was inserted into her mouth. The intention and the effect were not only to silence that particular woman, but also to have a disciplinary effect on other women. The trolling of women such as Emma Watson, Mary Beard, Caroline Criado-Perez and Stella Creasy raises questions about whether the trolling of audible and visible women is a modern equivalent of the scold’s bridle. When looking at the effects these mechanisms produce, it is difficult to see the difference between the 15th century and the 21st century. Whilst men can indeed be trolled, the significant difference in their experience is that they are not trolled because of their sex or gender. The silencing of women and issues related to women straddles all areas of life, from bank notes, video games and the high street (e.g. River Island’s ‘Anti Nag Gag’) to politics (e.g. Michael Fabricant’s tweet that he would like to ‘throat-punch’ a female journalist).

Submissions are welcomed from a broad range of disciplines including law, criminology, media, sociology, cultural studies, history, social sciences, economics, psychology, linguistics and gender studies; from academics and non-academics whose work is relevant to the symposium theme, or which is of a multidisciplinary or interdisciplinary nature.

ABSTRACT SUBMISSION

Please submit an abstract of max. 300 words and 5-7 keywords (indicating the main research area in particular), and a short biographical note (approx. 2-3 lines) to s.beresford@lancaster.ac.uk or m.butler1@lancaster.ac.uk or s.weare@lancaster.ac.uk. Please include in your subject line ‘Abstract submission’.

The deadline for submitting abstracts is 1st May 2015. A draft programme will be announced as soon as possible after the abstract submission deadline (and no later than 19th May 2015), together with registration details.