Raiding the inarticulate since 2010


What will generative AI mean for digital methods?

It occurred to me while watching this talk yesterday that what Tristan Harris describes as the breakdown of content verification has significant implications for digital methods. As Aza Raskin puts it in the video, “you do not know who you are talking to via audio or video”. The same is true of digital artefacts encountered online: you do not know that an artefact depicts who it claims to depict, or that it was created by whoever it claims created it. This has significant epistemological implications because it introduces an unresolvable uncertainty into the implied relationship between online behaviour and digital artefacts, breaking the chain of inference that leads from digital content to states of affairs in the world. Obviously this is more of a problem for representational methodologies than non-representational ones, but even for, say, speculative methods there is an implied link between an intervention and its outcomes which is now subject to the same doubt.

Does the epistemology of digital methods need to be reconstructed on this basis? How could it be reconstructed? The fact that digital methods could themselves be automated by generative AI (particularly on a speculative basis) further complicates the picture.