Raiding the inarticulate since 2010


Robophobia as a form of prejudice which stops academics working effectively with generative AI

I’m still a bit surprised to find myself taking positions like this. But this is an extract from Generative AI for Academics which I suspect might divide opinion, yet which I entirely stand by:

In contrast talking to a conversational agent can feel strange. Instructing it is a much more comfortable practice, treating it as a tool which you just happen to be able to use with the power of natural language. Or imagining that you’re engaged in a weirdly quotidian form of computer programming, defining operations through the power of your words without the necessity of learning a more or less arcane programming language first. Actually talking to it, in contrast, framing it as having the capacity to meaningfully parse and respond to your intellectual observations, can feel extremely odd at first. The intellectual awareness that it doesn’t really have this capacity, at least not in anything like the sense the friend sitting next to you at a conference does, certainly doesn’t help in this respect. But I think it would be misleading to imagine that intellectual objection wholly explains why this feels weird. There’s a cultural block which gets in the way, an unwillingness to dignify the existence of these newfound machinic capacities by engaging with them in a way which recognises the extent of their intellectual ability; with undercurrents of what the late social theorist Margaret Archer (2021) described as ‘robophobia’. But if you get past that block then there are some remarkable conversations to be had, as well as many utterly mundane ones. It can’t replicate the experience of talking to that friend at the conference, but it can supplement it in enriching and mutually supportive ways, providing you with an interlocutor to further parse your conference discussion when you are travelling home that evening.