Tagged: platform

  • Mark 10:47 am on April 19, 2019 Permalink | Reply
    Tags: fact checking, factfulness, platform

    You can’t have your ‘facts’ back 

    My notes on Marres, N. (2018). Why We Can’t Have Our Facts Back. Engaging Science, Technology, and Society, 4, 423-443.

    “We want our facts back” is a semi-joking remark Noortje Marres overheard an academic make, one which captures a wider response to what has been called ‘post-truth’. Many feel increasingly inclined to take a normative stance in support of ‘facts’ and feel nostalgic for “a time when experts still seemed to have unquestionable authority and felt secure in this authority, when government ministers didn’t say things like ‘people have had enough of experts,’ and the notion that evidence should play a central role in public debate and politics had widespread, even taken-for-granted, institutional support” (423-424). Appealing though it might be, Marres points out that this position ignores the fact that not only were partisans of evidence a minority in public life in the 90s and 00s, it was also widely recognised that evidence-based debate was not in itself a solution to political problems and could even put politics at risk through an over-reliance on experts. While recognising the growing indifference of public speech to factfulness and the lack of consequences attached to outright lies, Marres argues we need to look more deeply at the “changing architectures of the public sphere” (424). The many initiatives which seek to restore the place of factfulness within public life (disinformation awareness campaigns, knowledge literacy programmes, fact-checking services) risk reinstating an outdated, authority-based strategy for securing facts in public debate. It entails a divide between knowing and unknowing subjects, those with facts and those without, which runs contrary to any aspiration for a knowledge democracy. Achieving this will require institutional, media and technological arrangements very different to those of the much-claimed golden age of factfulness.

    Social media has become a battleground for these debates, with fact-checking initiatives using techniques ranging from ‘human moderation’ through to automated fact verification in order to apply journalistic procedures to online content. The platforms themselves have invested increasingly in moderation teams, as well as using automated tools to demarcate problematic from unproblematic material. This has led, inter alia, to ‘disputed content’ banners which can now be attached to certain pieces of content on Facebook, highlighting that a third-party fact-checking operation has cast doubt upon it. Questions have been raised about the working conditions of those undertaking this epistemic labour in click farms, but there has been less scrutiny of the epistemology and methodologies underpinning these services. They rely for their legitimacy on ideals of public knowledge and scientific citizenship but operate on a basis which is in tension with these, assuming that “quality is an attribute of information itself” (426). This runs contrary to what had become an increasingly dominant sense of information as *social*, defined by its circulation and connections. Now what is at stake is instead seen to be the properties of content itself: “What is said to be in need of attention and intervention is the “veracity” of online statements and the potential duplicity of online sources” (427). For instance, Factmata seeks to “cross-reference any claim circulating online onto a database of so-called verified statements, in order to validate or invalidate it” (427). A claim about immigration, say, would immediately be linked to public data about the issue, allowing users to ‘become their own fact checkers’. In this it embodies logical positivism, seeking to decompose statements into units which can be matched against experience or other verifiable statements. Marres makes a particularly interesting point here about how logical positivism and computer science shared a common inspiration in Frege’s logic and similar work, which goes some way to explaining the tendency for positivism to be reinstated by the turn to AI in systems like Factmata.
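    To make the positivist logic of such systems concrete, here is a minimal sketch of the decompose-and-match pattern Marres describes. Everything here is hypothetical (the corpus, the `decompose` and `check` helpers, the hard-coded example); it illustrates the logic of matching atomic units against a corpus of ‘verified statements’, not any actual Factmata implementation.

```python
# Toy demarcationist fact-checker: a claim is decomposed into atomic
# (topic, place, year) units, each matched against a corpus of
# "verified" statements. Validity is a binary attribute of the claim.

# Hypothetical corpus of verified statements; the figure is illustrative.
VERIFIED = {
    ("net migration", "uk", "2015"): "333,000",
}

def decompose(claim: str) -> list:
    """The positivist move: reduce a claim to atomic verifiable units.
    A real system would parse free text; we hard-code one example."""
    return [("net migration", "uk", "2015")]

def check(claim: str, asserted_value: str) -> bool:
    """Binary demarcation: valid iff every unit matches the corpus."""
    return all(VERIFIED.get(unit) == asserted_value
               for unit in decompose(claim))

print(check("Net migration to the UK in 2015", "333,000"))    # True
print(check("Net migration to the UK in 2015", "1,000,000"))  # False
```

    Note what the sketch makes visible: the claim's validity is fixed retrospectively against a static corpus, with no way to handle statements whose referent is still changing, which is exactly the limitation Marres develops later in the paper.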

    Fact-checking systems implement a methodology and perform a service, but they also carry a distinction: “that between legitimate and illegitimate claims to knowledge” (428). These putatively technical procedures in fact draw normative boundaries, ones which it’s important we understand. She references Rorty’s account of demarcationism: defining validity or its absence as a binary attribute of atomistic statements, i.e. can they be traced back to observational statements or not? The normative dimension comes from the question of how to police this boundary between different types of statements. It also entails a sense of actors as being responsible for the epistemic quality of debate, by drawing attention to the character of their statements. In this worldview, ‘good’ sources reliably produce valid statements, with ‘good’ users capable of discerning their presence. This is what Marres calls the politics of demarcation. It sees ‘fake news’ as something which emerges from outside the technology: “it is the type of information sources that the Internet makes available, on one hand, and the users’ lack of skills capable of discerning the difference between valid and invalid statements, on the other, that are said to be responsible for the prevalence of dodgy content in this media environment” (428). Fact-vs-fiction pages were part of internet culture in the 1990s and demarcationist technologies predate the rise of ‘fake news’. But whereas the blame was once attributed to deviant online subcultures such as anti-vaxxers or flat-earthers, it’s now increasingly marked in social terms such as education levels. This dichotomy of responsible and irresponsible users roughly maps onto a broader “opposition between educated progressives and, on balance, less educated supporters of populist and nationalist causes” which is at the heart of contemporary debates about ‘fake news’, i.e. it has the potential in practice to position nascent ‘populists’ as the epistemic crisis, to be beaten back and suppressed through technological means in order to ensure the health of the public sphere. It might even reinforce the distinction in a way that furthers the political project of those populists, as can be seen in the far-right backlash against social media firms ‘deplatforming’ leading figures.

    Demarcationism can’t account for the role that digital media has played in undermining respect for knowledge in the first place, instead externalising it into the figures of deviant users and deviant content producers. The mechanism doing the undermining is simple: algorithms for content selection are designed to ensure maximum circulation in order to build the widest possible audience. Her account of this on 431 is excellent:

    “Online platforms, then, reward messages that spread instantly and widely with even more visibility, and, as tabloid newspapers invested in maximizing advertising revenue also found out in previous decades, sensational rather than factual content turns out to satisfy this criterion of maximal “share-ability” best. A commercial logic here gives rise to a circular content economy, one without referent: content that gets shared a lot is rewarded with more visibility, thereby increasing its share-ability.”
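    The ‘circular content economy, one without referent’ can be sketched as a simple feedback loop. The model below is purely illustrative (a single linear share rate per piece of content, my own assumption, not anything from the paper): visibility generates shares and shares generate visibility, so content with a high share rate compounds regardless of its relation to any referent.

```python
# Minimal model of a circular content economy: each round, current views
# generate shares, and each share yields one new view in the next round,
# so views_{t+1} = share_rate * views_t. No property of the content
# other than its share-ability enters the loop.

def simulate(share_rate: float, rounds: int, seed_views: int = 100) -> int:
    """Return views after `rounds` iterations of the visibility loop."""
    views = seed_views
    for _ in range(rounds):
        views = int(views * share_rate)
    return views

# Sensational content (share rate > 1) compounds; dry factual content
# (share rate < 1) decays, whatever its veracity.
print(simulate(1.5, 10))  # grows well beyond the seed audience
print(simulate(0.8, 10))  # shrinks toward invisibility
```

    The point of the toy model is the one Marres makes: nothing in the loop refers outside the loop, so ‘correcting’ individual items leaves the selection dynamic untouched.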

    Fact-checking services address the bias of sources while veiling the role of this content economy in conditioning the behaviour of those sources. They render opaque the role played by “technologies of source selection that regulate content circulation online” (431). The network structure of online communities is another source of limitation, as groups spreading ‘fake news’ barely overlap with groups interested in debunking it. How do we make sense of these differences between knowledge communities without invoking the facile distinction between the literate and the illiterate? Fact checking and demarcation do not help us understand the problem with knowledge we face in digitalised societies; they actively keep us from it. This concern doesn’t mean denying there is a “crisis of public evidence in today’s digital societies” but rather recognising that it “goes well beyond a disregard for facts in digital media environments” (433). It’s crucial that we recognise how “the insertion of computational technologies into public infrastructures have resulted in deception and manipulation of the empirical record” (434) by undermining institutional architectures which ensured accountability across social life. The correspondence model of truth embedded in fact checking is inadequate to address the broader social challenges which these developments pose for us. Its reliance on looking back, checking claims against a corpus of established facts, fails to grasp today’s “dynamic information environments, in which readings and behaviors are constantly adjusted as conditions change” (434). Marres argues for a dynamic conception of truth in debate to replace this retrospective one.

    The behaviourism around which platforms have been designed treats users as “influenceable subjects, not knowledge agents”. It has facilitated a social science which does without interpretation, but this does not mean it is a knowledge-free environment. It is, as Marres puts it, “a research-centric apparatus, in that their design directly reflects the epistemic needs of the data scientists whose analytic operations are key to their commercial model: to target information to groups of friends, to track shares and likes in the aggregate” (435). It is built around the influenceability of users, with an empirical register predicated upon this. This is the final problem which Marres raises with demarcationist fact checking: “the normative opposition between knowledge (good) and non-knowledge (bad) that it imposes makes it difficult to see that epistemic ideals––like behaviorism––themselves have played a role in encouraging a disregard for knowledge on the Internet” (437). Not least of all in the fundamental asymmetry at its heart. From 437:

    “social media present an environment in two halves, where, on the one side, we find users with “influence-able” and “target-able” opinions, tastes, and preferences, while, on the other side, we have authoritative data analysts who “know” the population’s fine- grained and ever-changeable preferences and tastes. Scientists––the proponents of knowledge–– haven’t been by-standers but active participants in the crafting of a media architecture designed to enable the influencing of users’ actions.”

    Demarcationism reflects this bifurcation, with the knowing subjects seeking to redesign the information environment to correct the unknowing subjects. The “veritable army of social data scientists who monitor, measure, and seek to intervene in this behavioral theatre” do so on the basis of facts, but outside of the public sphere and in a way which precludes engagement between experts and citizens.

    Fake news might be problematic in itself but it attaches itself to issues which matter to people, tracking the controversies which define political life. Fact checking fails to address this connection for the reasons cited above, but Marres argues that ‘experimental facts’ might be better suited to this purpose. This doesn’t entail a rejection of stable facts, well-established and stable conditions which play an important role in public debate. If I understand correctly, these “statements whose veracity is unstable and whose epistemic status changes over time” (438), because they reference a changing reality, can be interrogated in real time in order to facilitate debate about their character and implications, as opposed to being demarcated in relation to an established body of fact. But I found the example of the £350 million for the NHS claim slightly confusing. There’s so much in this paper to think about; I’m going to come back to it a lot. I think the point is that ‘experimental facts’ in this sense are ever more common given the epistemic dynamism which characterises digitalised societies. So in essence the argument is to find ways to stay with the difficulties these cause, rather than trying to shut them down in ways likely to be epistemically short-sighted and politically counter-productive. This is a move from a politics of demarcation to a politics of selection: “while demarcation concentrates on the retrospective establishment of correspondence of public statements with presumably stable, pre-given atomistic statements, a politics of selection progressively establishes a referent for claims through an iterative process of locating and evaluating statement-networks in formation.” (441).

  • Mark 10:09 am on March 22, 2019 Permalink | Reply
    Tags: platform

    Ontological veiling    

    I’m very taken with Andrew Pickering’s concept of veiling. If I understand him correctly, it refers to how knowledge production can circumscribe reality by taking us on a detour from certain aspects of it. Those features which resist representation in our approach risk dropping off stage, unseen and unheard. He uses it to refer to how modern science veils the performative dimension of our being but I think it can also be used to make sense of how platformised knowledge production, reliant on what registers as a behavioural trace within a digital platform, leaves large swathes of social reality opaque. 

    • Sourav Roy 10:15 am on March 22, 2019 Permalink

      Discussions on this topic and related ones always remind me of the visual metaphor of selectively joining the dots in the night sky to imagine the meaningful shapes of constellations, and of how different cultures have joined the dots in the very same constellation very differently.

    • Mark 10:19 am on March 22, 2019 Permalink

      Something which we’re doing constantly, I think

  • Mark 7:24 pm on May 13, 2018 Permalink | Reply
    Tags: platform

    What is platform literacy? 

    In the last couple of years, I’ve found myself returning repeatedly to the idea of platform literacy. By this I mean a capacity to understand how platforms shape the action which takes place through them, sometimes in observable and explicit ways but usually in unobservable and implicit ones. It concerns our own (inter)actions and how this context facilitates or frustrates them, as well as the unseen ways in which it subtly moulds them and the responses of others to them.

    This understanding seems increasingly crucial to me because the alternative might otherwise be a diffuse paranoia. As knowledge of data brokerage and data politics expands throughout society, it generates a certainty that we are being manipulated but an unknowability about precisely who is doing the manipulation, how they are doing it and what the effects might be. Platform literacy helps ground this in a concrete understanding of specific processes and their implications for our agency.

    Any recommendations for reading on this are much appreciated! Particularly those with a pedagogical focus. I’ll be working my way through the Digital Polarisation Initiative’s work and the Polarisation MOOC in the meantime.

    • X. Li 2:15 am on November 10, 2018 Permalink

      Hello Mark!
      Thanks for sharing. I’m teaching a “Cross-platform” class as a part of a Graphic Design BFA curriculum and have been thinking about the topic.
      I found the article “The politics of ‘platforms’ ” really informative. https://ecommons.cornell.edu/bitstream/handle/1813/12774/pop.pdf?sequence=1

      We might have a different focus in this but I’d definitely love to follow your thoughts on this.

  • Mark 10:29 am on March 10, 2016 Permalink
    Tags: jose van dijck, platform

    The Politics of the Platform Society 

    This is a great talk by José van Dijck. I can’t wait for her new book:

    There are some excellent responses by Sonia Livingstone, suggesting we need to be critical of an emerging grand narrative of the platform society. It meshes nicely with the observation made by Adrian Mackenzie that ‘algorithms’ have replaced ‘discourse’ as the master concept of post-structuralism.

  • Mark 6:50 am on November 4, 2015 Permalink
    Tags: platform

    call for papers: IPP2016 “the platform society” (via @claudiakincaid) 

    Location: Thursday 22 – Friday 23 September 2016, University of Oxford.
    Convenors: Helen Margetts (OII), Vili Lehdonvirta (OII), Jonathan Bright (OII), David Sutcliffe (OII), Andrea Calderaro (EUI / ECPR).
    Abstract deadline: 14 March 2016.
    Contact: policyandinternet@oii.ox.ac.uk

    This conference is convened by the Oxford Internet Institute for the OII-edited academic journal Policy and Internet, in collaboration with the European Consortium of Political Research (ECPR) standing group on Internet and Politics.


    Large scale internet platforms such as Google, Facebook, Amazon and Uber play an increasingly important role in contemporary society. These platforms facilitate connections between friends and family members, between politicians and voters, between governments and citizens, between consumers and producers, and between employers and employees. As such, they are becoming venues where large segments of contemporary life are played out.

    The data collected and in some cases made openly available by these platforms creates huge opportunities for advancing research in many fields of social science. Exciting advances have already been made in understanding, for example, how information spreads across networks and the importance of social influence on personal action. Yet researchers have only scratched the surface of the possibilities offered by new data sources and analysis methods.

    At the same time, the decisions made by these platforms increasingly shape contemporary life. Whether taking employment through Upwork, purchasing goods on Amazon, seeking information via Google, remitting money via PayPal, debating politics on Twitter or joining a campaign on change.org, our actions are enabled and constrained by sophisticated algorithms and company policies. Meanwhile, the concept of ‘government as a platform’ offers the potential to reshape the entire policy-making environment. The decisions, assumptions and interests reflected in these algorithms and platforms will have significant consequences for society at large, yet understanding of these processes is still very limited.


    The aim of this conference is to bring together scholars studying platforms, both in terms of interactions taking place on platforms and the data they generate, as well as the platforms themselves and how they are shaped and operated. We welcome theoretical as well as empirical, qualitative as well as quantitative studies, from all disciplines that can provide useful perspectives on the contemporary “platform society”. Topics of interest include but are not limited to:

    • Data driven studies of platform-mediated interactions (e.g. using APIs or scraped data)
    • The viability, opportunities and challenges of ‘government as a platform’
    • Studies of how algorithms and user interfaces shape interactions (e.g. STS, HCI)
    • Impacts of platforms in a given industry or government area (e.g. taxis, local gov)
    • Formal economic modeling of platform competition, strategy and policy
    • Political mobilization around platforms (e.g. Uber and AirBnB protests)
    • Open source and distributed platforms and their politics (e.g. Bitcoin, blockchain)
    • Innovation and entrepreneurship in platform marketplaces (e.g. App Store, Google Play)
    • Issues in research use of platforms (e.g. Mechanical Turk survey practices)
    • Conflicts between public policy and platform rules (e.g. Google in Europe)
    • Studies of how platform firms manage platforms (e.g. theoretical, ethnographic)
    • Civic activism and mobilization platforms such as We the People petitions platform, change.org or Avaaz
    • Comparative studies of platforms (e.g. rules of Twitter vs. Facebook)
    • Public policy development related to platforms (e.g. EU Digital Single Market)
    • Transnational issues in platforms and digital markets (e.g. TTIP, safe harbor)
    • The ethics of algorithms and responsible innovation

    Accepted papers will be organized into thematically and methodologically relevant sessions and parallel streams. 

    Proposal submission

    Paper proposals

    Paper proposals should consist of a title and a 1,000-word extended abstract that specifies and motivates the research question, describes the methods and data used, and summarises the main findings. Abstracts will be peer reviewed, and the authors of accepted proposals are expected to submit full papers prior to the conference. Applicants will have the opportunity to co-submit their paper to the journal Policy and Internet, which will operate a fast-track review process for papers accepted to the conference.

    Paper submissions can also be considered for a Best Paper Award (sponsored by the journal Policy and Internet). The prize will be awarded at the closing session of the conference. As the paper is intended to be published in a future issue of the journal, authors should indicate whether they would like their paper to be considered for the prize.

    SUBMIT YOUR ABSTRACT HERE (deadline: 14 March 2016)

    Poster proposals

    Posters should summarise in a visually engaging manner the purpose, methods and results of an original piece of research. All accepted submissions will be considered for a Best Poster Award. The prize will be awarded at the closing session of the conference.

    SUBMIT YOUR POSTER HERE (deadline: 14 March 2016)

    Important dates

    • Extended abstract submission deadline: 14 March 2016
    • Decisions on abstracts: 2 May 2016
    • Full paper / poster submission deadline (for accepted abstracts): 1 September 2016
    • Conference dates: Thursday 22 – Friday 23 September 2016