ALW2: 2nd Workshop on Abusive Language Online
EMNLP 2018 (Brussels, Belgium), October 31st or November 1st, 2018
Submission deadline: July 20th, 2018
Website: https://sites.google.com/view/alw2018
Submission link: https://www.softconf.com/emnlp2018/ALW2/

Overview
Interaction amongst users on social networking platforms can enable constructive and insightful conversations and civic participation; however, on many sites that encourage user interaction, verbal abuse has become commonplace, leading to negative outcomes such as cyberbullying, hate speech, and scapegoating. In online contexts, aggressive behavior may be more frequent than in face-to-face interaction, which can poison the social climates within online communities. The last few years have seen a surge in such abusive online behavior, leaving governments, social media platforms, and individuals struggling to deal with the consequences.

For instance, in 2015, Twitter’s CEO publicly admitted that online abuse was driving users off the platform and, in some cases, even out of their homes. More recently, Facebook, Twitter, YouTube, and Microsoft pledged to remove hate speech from their platforms within 24 hours, in accordance with the EU Commission’s code of conduct, and face fines of up to €50M in Germany if they systematically fail to remove abusive content within that window. While governance demands the ability to respond quickly and at scale, we do not yet have effective human or technical processes that can meet this need. Abusive language is often extremely subtle and highly context dependent. We are therefore challenged to develop scalable computational methods that can reliably and efficiently detect and mitigate the use of abusive language online within variable and evolving contexts.

As the field that works directly with the computational analysis of language, NLP (Natural Language Processing) is in a unique position to address this problem. Recently, a growing number of papers in the computational linguistics community have dealt with abusive language. Abusive language is not a stable or simple target: misclassifying regular conversation as abusive can severely impact users’ freedom of expression and reputation, while misclassifying abusive conversations as unproblematic maintains the status quo of online communities as unsafe environments. Clearly, there is still a great deal of work to be done in this area. More practically, as research into detecting abusive language is still in its infancy, the research community has yet to agree on a suitable typology of abusive content or on standards and metrics for proper evaluation, areas where research in media studies, rhetorical analysis, and cultural analysis can offer many insights.
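To make this misclassification trade-off concrete, here is a minimal illustrative sketch in Python: it contrasts a hypothetical over-flagging detector with a hypothetical under-flagging one on invented toy labels (none of this comes from the workshop or any real system), showing how low precision corresponds to regular speech being wrongly flagged and low recall to abusive posts being left standing.

# Illustrative sketch only: toy binary labels (1 = abusive, 0 = acceptable) for two
# hypothetical detectors. All data below is invented for illustration.

def precision_recall(gold, pred):
    """Compute precision and recall for binary abuse labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # low precision: ordinary posts wrongly flagged
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # low recall: abusive posts left standing
    return precision, recall

gold = [1, 1, 1, 0, 0, 0, 0, 0]            # hypothetical gold labels
over_flagging = [1, 1, 1, 1, 1, 1, 0, 0]   # catches all abuse but also flags ordinary posts
under_flagging = [1, 0, 0, 0, 0, 0, 0, 0]  # rarely flags, so most abuse slips through

for name, pred in [("over-flagging", over_flagging), ("under-flagging", under_flagging)]:
    p, r = precision_recall(gold, pred)
    print(f"{name}: precision={p:.2f}, recall={r:.2f}")

On these toy labels the over-flagging detector scores precision 0.50 with recall 1.00 and the under-flagging detector 1.00 with 0.33, which is exactly the trade-off that shared standards and metrics would need to make explicit.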

In this second edition of the workshop, we continue to emphasize the computational detection of abusive language as informed by interdisciplinary scholarship and community experience. We invite paper submissions describing unpublished work from relevant fields including, but not limited to: natural language processing, law, psychology, network analysis, gender and women’s studies, and critical race theory.

Paper Topics
We invite long and short papers on any of the following general topics.

Topics related to developing computational models and systems:

NLP models and methods for detecting abusive language online, including, but not limited to, hate speech and cyberbullying
Application of NLP tools to analyze social media content and other large data sets
NLP models for cross-lingual abusive language detection
Computational models for multi-modal abuse detection
Development of corpora and annotation guidelines
Critical algorithm studies with a focus on abusive language moderation technology
Human-Computer Interaction for abusive language detection systems
Best practices for using NLP techniques in watchdog settings

Topics related to legal, social, and policy considerations of abusive language online:

The social and personal consequences of being the target of abusive language and targeting others with abusive language
Assessment of current non-NLP methods of addressing abusive language
Legal ramifications of measures taken against abusive language use
Social implications of monitoring and moderating unacceptable content
Considerations of implemented and proposed policies for dealing with abusive language online, and of the technological means of addressing it

In addition, this one-day workshop will include a multidisciplinary panel discussion and a plenary forum on the issues that researchers and practitioners face in working on abusive language detection. We are also looking into the possibility of publishing a journal special issue linked to this iteration of the workshop.

We also seek a greater focus on policy aspects of online abuse through invited speakers and panels.

Submission Information
We will be using the EMNLP 2018 Submission Guidelines. Authors are invited to submit a full paper of up to 8 pages of content, with up to 2 additional pages for references. We also invite short papers of up to 4 pages of content, with up to 2 additional pages for references.

Accepted papers will be given an additional page of content to address reviewer comments. We also invite papers that describe systems. If you would like to present a demo in addition to presenting the paper, please make sure to select either “full paper + demo” or “short paper + demo” under “Submission Category” on the START submission page.

Previously published papers cannot be accepted. The submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous. Self-references that reveal the author’s identity, e.g., “We previously showed (Smith, 1991) …”, should be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …”.

We have also included a conflict-of-interest section in the submission form. Please mark all potential reviewers who have been authors on the paper, who are from the same research group or institution, or who have seen versions of this paper or discussed it with you.
We will be using the START conference system to manage submissions.

Important Dates
Submission due: July 20, 2018
Author Notification: August 18, 2018
Camera Ready: August 31, 2018
Workshop date: October 31st or November 1st, 2018
Submission link: https://www.softconf.com/emnlp2018/ALW2/

Call for Papers – Edited Collection

Online Othering: Exploring the Dark Side of the Web

Editors: Dr Karen Lumsden (Loughborough University) and Dr Emily Harmer (University of Liverpool)

The Internet plays a vital role in many aspects of our social, political and cultural lives, and in the early days of its expansion there was much enthusiasm for its potentially transformative role in providing a space for individuals to construct their identities, communicate with others and share ideas and concerns. A perhaps unanticipated consequence of these developments has been the extent to which some individuals and groups have used this freedom to engage in hateful or discriminatory communicative practices in these loosely regulated online spaces, often hiding behind the cloak of anonymity. For instance, women on Twitter and in the public eye have found themselves subject to online harassment, sexism and trolling, while the aftermath of the Brexit vote saw a rise in reports of hate speech, including racism, Islamophobia, and anti-Semitism, in both online and offline contexts.

This edited collection explores the othering and discrimination propagated and encountered by individuals online and in social media contexts and cultures. It problematizes and analyses the dichotomy presented between real and virtual worlds (and spaces) by exploring the motivations behind certain offending and othering behaviours, and the impact this has on the targets of online abuse and hate speech. This includes the extent to which online othering constitutes a new phenomenon and how the motivations for committing forms of cyber-abuse, cyber-hate, and othering relate to the expression of these attitudes and behaviours in the offline context.

It explores the extent to which forms of information and communication technologies facilitate, exacerbate, and/or promote, the enactment of traditional offline offences (such as domestic abuse and stalking). Finally, the collection addresses the role of the police and other agencies in terms of their interventions, and the regulation and governance of virtual space(s).

The edited collection is an output from a one-day conference on Online Othering hosted at Loughborough University. We are seeking additional contributions to the volume from scholars and researchers working in disciplines such as sociology, communication and media studies, criminology, political studies and/or gender studies.

Contributions should address the ways in which various groups and identities are subjected to othering in online environments. This can include news websites, social media platforms (e.g. Twitter, Facebook, Instagram, YouTube), blogs, and forums. We are also interested in contributions which explore othering across multiple contexts. Potential topics can include, but are not limited to:

  • Trolling and gendered online abuse/harassment;
  • Cyber-bullying or cyber-stalking;
  • Hate crime/speech online;
  • Homophobia and/or transphobia;
  • Online representations of disability;
  • Class bigotry;
  • Racism, Islamophobia, or anti-Semitism;
  • Sexting and/or revenge pornography;
  • Brexit, Trumpism and the rise of the ‘alt-right’.

The edited collection proposal is to be submitted to Palgrave as part of their Cyber-Crime series by Autumn 2017. For accepted submissions, the finalised chapters will need to be received by the end of September 2018.

Submissions:

Interested contributors should email a title, abstract (250 words) and biography (100 words) to both Karen Lumsden K.Lumsden@lboro.ac.uk and Emily Harmer E.Harmer@liverpool.ac.uk by 31 August 2017. Authors will be informed of decisions by 30 September 2017.

In the last few months, I’ve been thinking a lot about online harassment. Even writing that sentence, I come face-to-face with my own privilege, as ‘online harassment’ is something I’m able to elect to think about rather than an unavoidable feature of my use of the internet. But the evidence is clear that online harassment is ubiquitous. A 2014 Pew Study found that 73% of adult internet users have seen someone be harassed in some way online and 40% have personally experienced it:

  • 60% of internet users said they had witnessed someone being called offensive names
  • 53% had seen efforts to purposefully embarrass someone
  • 25% had seen someone being physically threatened
  • 24% witnessed someone being harassed for a sustained period of time
  • 19% said they witnessed someone being sexually harassed
  • 18% said they had seen someone be stalked

http://www.pewinternet.org/2014/10/22/online-harassment/

The witnessing figure from this US study interests me because it suggests that most internet users must be aware of the reality of online harassment, even if they seek to explain it away to whatever extent. The study makes a distinction between two categories of online harassment:

In Pew Research Center’s first survey devoted to the subject, two distinct but overlapping categories of online harassment occur to internet users. The first set of experiences is somewhat less severe: it includes name-calling and embarrassment. It is a layer of annoyance so common that those who see or experience it say they often ignore it.

The second category of harassment targets a smaller segment of the online public, but involves more severe experiences such as being the target of physical threats, harassment over a sustained period of time, stalking, and sexual harassment.

http://www.pewinternet.org/2014/10/22/online-harassment/

The Pew study found the distribution of these experiences to be structured by gender and age. Young adults (18-29) as a whole are more likely to experience either category of harassment, but young women (18-24) are overwhelmingly the targets of the more extreme behaviours:

[Pew Research Center chart: experiences of online harassment by age and gender]

But the gendering of these experiences shouldn’t lead us to dismiss the ‘lesser’ category of harassment. This too can be gendered, in its cumulative and ubiquitous character, as Audrey Watters conveys on loc 1771 of her Monsters of Educational Technology:

I speak from experience. On Twitter, I have over 26,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men – strangers, typically, but not always – jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain online harassment to me. Men explain blogging to me. Men explain, they explain, they explain. It’s exhausting. It’s insidious. It doesn’t quite elevate to the level of harassment, to be sure; but these microaggressions often mean that when harassment or threats do occur, women like me are already worn down. Yet this is all part of my experiences online. My experiences. Women’s experiences. My friends’ experiences.

There’s a complexity to this behaviour which the Pew study doesn’t capture. It recognises that these behaviours are much more common on some platforms than on others, but it doesn’t attempt to look at these divergences demographically in terms of platform use. It’s important to recognise that certain online environments are what we might think of as ‘harassogenic’ in a way that’s susceptible to explanation, including though not limited to deliberate choices by the companies in question to calibrate the architecture of their platforms to maximise attention, engagement, return, etc. in line with commercial incentives:

  • 66% of internet users who have experienced online harassment said their most recent incident occurred on a social networking site or app
  • 22% mentioned the comments section of a website
  • 16% said online gaming
  • 16% said in a personal email account
  • 10% mentioned a discussion site such as reddit
  • 6% said on an online dating website or app

http://www.pewinternet.org/2014/10/22/online-harassment/

The mechanisms available within a platform to respond to harassment are a clear function of those choices, and they in turn shape the character of the platform by parameterising harassment and the responses available to it. The Pew study found that low-level harassment tended to lead to single-step responses, while high-level harassment tended to lead to multi-step responses. Surprisingly, 75% of those who responded thought their decision made the situation better, though this raises the obvious question of how that experience was distributed between the two categories.

  • 47% of those who responded to their most recent incident with online harassment confronted the person online
  • 44% unfriended or blocked the person responsible
  • 22% reported the person responsible to the website or online service
  • 18% discussed the problem online to draw support for themselves
  • 13% changed their username or deleted their profile
  • 10% withdrew from an online forum
  • 8% stopped attending certain offline events or places
  • 5% reported the problem to law enforcement

http://www.pewinternet.org/2014/10/22/online-harassment/

When seen against this background, the drive within universities to incite academics to engage online can seem rather problematic. As Tressie McMillan Cottom puts it, “the risks and rewards of presenting oneself ‘to others over the Web using tools typically associated with celebrity promotion’ (Barone 2009) are not the same for all academics in the neo-liberal ‘public’ square of private media.” The increasing levels of political polarisation, as well as the specific problem of organised conservative and alt-right groups seeking to highlight what they deem to be problematic academic speech online, reveal how this issue is intensifying. Given that, as Tressie observes, universities use “engaged academics as an empirical measure of a university’s reputational currency”, online harassment must be seen as a central issue of academic freedom and academic labour.

We need to understand this issue in terms of broader structures of oppression, while also recognising the specific characteristics of digital environments that set the parameters of its online manifestations. From loc 1677 of The Monsters of Educational Technology by Audrey Watters:

Harassment – of women, people of color, and other marginalized groups – is pervasive online. It’s a reflection of offline harassment, to be sure. But there are mechanics of the Internet – its architecture, affordances, infrastructure, its culture – that can alter, even exacerbate what that harassment looks like and how it is experienced.

From loc 1843-1865 she takes apart some of the facile responses this issue can receive:

The answer can’t simply be to tell women to not use their real name online. If part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution. The answer can’t simply be “don’t blog on the open Web.” Or “keep everything inside the ‘safety’ of the walled garden, the learning management system.” If nothing else, this presumes that what happens inside siloed, online spaces is necessarily “safe.” I’ve seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I’ve seen heavy-handed moderation, where marginalized voices find their input is deleted. I’ve seen zero moderation, where marginalized voices are mobbed. The answer can’t simply be “just don’t read the comments.” I would say that it might be worth rethinking “comments” on student blogs altogether – or at least questioning the expectation that students host them, moderate them, respond to them. See, if we give students the opportunity to “own their own domain,” to have their own websites, their own space on the Web, we really shouldn’t require them to let anyone that can create a user account into that space. It’s perfectly acceptable to say to someone who wants to comment on a blog post, “Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain.”