moderating social media and the challenge of normativity

This interesting article (HT Nick Couldry) explores the challenge faced by Facebook in imposing standards on a user base distributed around the globe:

As Facebook has tentacled out from Palo Alto, Calif., gaining control of an ever-larger slice of the global commons, the network has found itself in a tenuous and culturally awkward position: how to determine a single standard of what is and is not acceptable — and apply it uniformly, from Maui to Morocco.

For Facebook and other platforms like it, incidents such as the bullfighting kerfuffle betray a larger, existential difficulty: How can you possibly impose a single moral framework on a vast and varying patchwork of global communities?

If you ask Facebook this question, the social-media behemoth will deny doing any such thing. Facebook says its community standards are inert, universal, agnostic to place and time. The site doesn’t advance any worldview, it claims, besides the non-controversial opinion that people should “connect” online.

https://www.washingtonpost.com/news/the-intersect/wp/2016/01/28/the-big-myth-facebook-needs-everyone-to-believe/

Facebook’s ‘global community standards’ are the mechanism through which the digital activity of over one and a half billion users is policed. But these regulations have an uncertain grounding in the normative judgements of the user base: the aggregate of users is far too heterogeneous (to say the least) to provide any shared layer of moral intuition which could reliably buttress the legitimacy of the global community standards. This problem is amplified by two factors:

Facebook has modified its standards several times in response to pressure from advocacy groups — although the site has deliberately obscured those edits, and the process by which Facebook determines its guidelines remains stubbornly obtuse. On top of that, at least some of the low-level contract workers who enforce Facebook’s rules are embedded in the region — or at least the time zone — whose content they moderate. The social network staffs its moderation team in 24 languages, 24 hours a day.

https://www.washingtonpost.com/news/the-intersect/wp/2016/01/28/the-big-myth-facebook-needs-everyone-to-believe/

Having moderators embedded in a region might help on occasion. But this assumes the normativity of the region is any less fragmented than that of the user base as a whole and, as the Centre for Social Ontology’s recent book explores, we cannot assume this to be true. What’s more likely is that this vast army of poorly paid moderators will exercise little to no autonomy over their tasks, with the Facebook standards nonetheless being inflected through their variable judgement: they won’t try to deviate from the global standards, but they inevitably will, in unpredictable ways, as any individual evaluator necessarily does when applying a general rule to particular cases.

So companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. And there are legions of them—a vast, invisible pool of human labor. Hemanshu Nigam, the former chief security officer of MySpace who now runs online safety consultancy SSP Blue, estimates that the number of content moderators scrubbing the world’s social media sites, mobile apps, and cloud storage services runs to “well over 100,000”—that is, about twice the total head count of Google and nearly 14 times that of Facebook.

This work is increasingly done in the Philippines. A former US colony, the Philippines has maintained close cultural ties to the United States, which content moderation companies say helps Filipinos determine what Americans find offensive. And moderators in the Philippines can be hired for a fraction of American wages. Ryan Cardeno, a former contractor for Microsoft in the Philippines, told me that he made $500 per month by the end of his three-and-a-half-year tenure with outsourcing firm Sykes. Last year, Cardeno was offered $312 per month by another firm to moderate content for Facebook, paltry even by industry standards.

https://www.washingtonpost.com/news/the-intersect/wp/2016/01/28/the-big-myth-facebook-needs-everyone-to-believe/

Is there any accountability here? It’s certainly possible to influence the global community standards but, as the article notes, this influence is profoundly opaque. Meanwhile, there are good reasons to think that challenge and adjudication simply couldn’t work at this scale. How would it operate? Given that content moderators might comprise as much as half the workforce of social media sites, it’s worth thinking about how labour-intensive a potential appeals process would be. Why go to that trouble when you can err on the side of simply taking something down on the grounds that someone finds it offensive? Without some way of solving the normativity problem described earlier (how to underwrite legitimacy within an aggregate characterised by low social integration) there’s no obvious ethical counterbalance to this organisational tendency.