

ChatGPT as the Big Other: the imagined limitlessness of knowledge

If we imagine that the speed and fluency of conversational agents continue to increase* from their present levels, what would this mean for how people habitually relate to a service like ChatGPT? One of the things which fascinates me about hallucination is the confidence with which these services tend to offer statistically plausible but factually incorrect responses. Hallucinations are being mitigated over time, but they are an inherent propensity of large language models which will never be entirely removed. The ability to answer any question with confidence, even when those answers are factually incorrect, could become extremely psychically charged as part of a routine mode of relating: that is, if the relationship becomes a taken-for-granted part of the lifeworld rather than a novelty engaged with out of curiosity or to save time.
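To make that propensity concrete, here is a minimal illustrative sketch in Python of why confidence comes for free in next-token sampling while factual grounding does not. The probabilities are invented for the sake of the example, not drawn from any real model, and the whole thing is a toy stand-in for what is in reality a vastly more complex system:

```python
import random

# Toy illustration (not any real model): a language model picks the next
# token by statistical plausibility alone. Nothing in the sampling step
# consults the world, so a fluent wrong answer and a fluent right answer
# look identical from the model's point of view.

# Hypothetical learned continuation probabilities for the prompt
# "The capital of Australia is". Plausibility, not truth, sets the odds:
# famous cities soak up probability mass whether or not they are correct.
next_token_probs = {
    "Canberra": 0.55,   # correct
    "Sydney": 0.35,     # plausible but wrong
    "Melbourne": 0.10,  # also plausible but wrong
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a token in proportion to its probability mass."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Every run yields a grammatical, confident-sounding completion. With these
# invented numbers it is false roughly 45% of the time, and nothing in the
# output marks the false runs as any less assured than the true ones.
print("The capital of Australia is", sample_next_token(next_token_probs) + ".")
```

The point of the sketch is simply that the confident register is a property of the generation procedure itself, not a signal about the reliability of what is generated.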

Learning to live with limitations involves recognising the limits of knowledge: the constraints not just upon what we can do but upon what can be known more broadly. Digitalisation has already pushed against our acceptance of those limits, inclining us towards a view of the world as fully legible, with knowledge ‘out there’ waiting to be found. The continued development of GAI, particularly if it becomes faster and more ubiquitous, will push further against these limits, because there will be a speaking subject in our pocket which acts as if those limits are mostly or entirely absent. The new wave of datafication driven by GAI uptake within organisations, as well as its incorporation into local devices through edge computing, will make the world somewhat more legible (in a limited sense) than it already was. But full legibility will remain a fantasy, and a dangerous one.

*The evidence suggests we are instead heading for a plateau, so this is a thought experiment.