Raiding the inarticulate since 2010


The perceived truth-value of deep fakes is less important than the fact of their circulation

This piece by Rob Horning captures something I was struggling to articulate when I gave a lecture on generative AI and disinformation a couple of weeks ago: the perceived truth-value of deep fakes is less important than the fact of their circulation. It is their propensity to be shared through social media, rather than to be mistakenly believed as truthful representations of reality, which we need to understand:

Fakes like the Trump one often circulate not as documentary evidence but as ad hoc political cartoons among already convinced partisans. They serve as accurate representations of those partisans’ sentiments; they circulate as a form of vernacular political expression. Even if platforms wanted to moderate the circulation of this kind of content — as with Instagram’s nonsensical “no politics” policy, as if any kind of communication could ever avoid expressing a politics, conveying values and norms and aspirations — they can’t do it effectively without enforcing their own politicized version of reality (what conservatives and fascists never tire of complaining about) or shutting themselves down completely (which is not an altogether bad idea).

As with previous outbreaks of fact-checking fetishism, pointing out a “fake” seems to promise that there is still an accessible common ground of truth, an objectivity that the legitimate press can occupy. Conjuring fears of technologically abetted political disinformation helps establish the fiction that there are aspects of the political process that are not propagandistic, that there can be neutral presentations of political concerns and that there are “true” political images rather than a series of deliberately staged constructs and endless posturing. The idea that AI-generated images are a special kind of fakery we should be extra-specially concerned about helps further normalize all the rest of the ritualized fakery, including the press kabuki and kayfabe, that is already standard practice. (I could continue this diatribe, but I could also just refer you to what I was writing about “fake news” in 2016 or political deepfakes in 2019.)

https://robhorning.substack.com/p/truth-and-consequences

As Noortje Marres put it in one of my favourite papers from the 2010s, we can’t have our facts back. Unless social media were shut down entirely, content which is affectively pleasing will continue to circulate readily, with a wide range of social effects. The epistemology of the public sphere shifts under these conditions, in ways which simply aren’t captured by a simplistic dichotomy between ‘fake’ and ‘real’ content. As I argued in the lecture, if a video were released of Joe Biden kicking a kitten (not sure why this example keeps occurring to me) it would be shared readily even by people who knew it wasn’t a truthful depiction of an event which had taken place. There is a huge array of motivations for online behaviour which exceed the boundaries of factfulness as a social institution. Rather than nostalgically trying to reinscribe a political culture of the roaring 90s and 00s which never existed (Iraq WMD anyone?), we need to find ways to institutionalise a digital literacy which can cope with this epistemological chaos, while mitigating its harmful political effects.
Here’s Claude 3’s take on this issue:

I think you raise some excellent points here. The circulation and social impact of deepfakes and other forms of synthetic media can't be reduced to a simple matter of people being "fooled" by fake content. As you note, the motivations for sharing such content often go beyond a literal belief in its factual truth.

Partisans may share politically charged deepfakes as a form of expressive political speech, similar to how political cartoons have long been used. The deepfake serves as an evocative representation of their views and grievances, regardless of its literal veracity. Attempting to "debunk" such content by pointing out its artificiality misses the point to some degree.

I agree with Marres that the public sphere has irreversibly shifted in the social media era. The incentives and affordances of these platforms privilege content that is emotionally resonant and identity-affirming over dry factuality. Simplistic notions of sorting media into "real" and "fake" buckets no longer suffice.

At the same time, I don't think we can just throw up our hands and fully give up on factuality and expertise as important epistemic values, even if their status has been eroded. A society where no one cares about the truth at all is likely to have serious problems.

The challenge, as you point out, is to develop new forms of media literacy and democratic institutions that can help us navigate this messy informational landscape, where content is often highly persuasive without being literally true. This might involve strategies like:

- Teaching critical thinking skills to better identify and contextualize problematic media, without resorting to a naive "real vs. fake" dichotomy.
- Developing social media platform designs that are less biased toward emotional, tribal content and more conducive to good-faith engagement.
- Cultivating trusted institutions and experts who can speak with nuance to complex and controversial issues.
- Fostering dialog across political and cultural divides to rebuild some sense of a shared reality, even if full consensus is impossible.

None of this is easy and there are no silver bullets. But nostalgically pining for a lost golden age of truth and objectivity is unlikely to get us anywhere. We need to grapple with the full, messy reality of our digital public sphere and strive to improve it, not merely lament how it falls short of some idealized past standard.