Raiding the inarticulate since 2010


Juri Lotman’s semiotics as a theoretical resource for making sense of the cultural ontology of generative AI

Thanks to Michele Martini for introducing me to Juri Lotman earlier today. I'm currently listening to this lecture by Boris Uspenskij on Lotman's work. There's a lot here to explore, with implications for how we make sense of generative AI and the shifting cultural machinery in which it is bound up:

Some random thought fragments:

  • Uspenskij asks how we can be sure that our interlocutors used signs in the same way we do. Conversational agents clearly don’t use signs in the same way we do, yet meaningful exchange with them is possible, at least once you move past the initial awkwardness. Are the parameters of understanding expanded by this experience?
  • What implications does the lack of spatial integration with conversational agents have for the forms of dialogue possible with them? Do they facilitate temporal integration of the sort Uspenskij discusses here?