Claude AI the social theorist. Or, can we have theorising without thinking?

I take no delight in claiming there’s a sense in which conversational agents can do theory. In fact it scares the shit out of me in a number of ways. But as a realist I believe we need to start from our best account of reality, which based on my in-depth work over the last 3 months includes the belief that theorising in a relatively sophisticated (if extremely narrow) form can be undertaken by GPT-4. With careful prompting it can answer combinatorial questions (e.g. put theory X into dialogue with theory Y, apply theory X to empirical case Y) in ways which generate conceptual novelty, even if its capacity to handle nuance tends to vary wildly.

In contrast it seems Claude can do this with much less guidance. I was stunned by a response to a question about Mikhail Bakhtin’s notion of ‘webs of interlocution’. I asked for a simple factual summary of the concept to jog my memory, after this idea popped into my head earlier. This is a use of conversational agents which will soon be just as routine as visiting Wikipedia, and in fact will likely come to depend on it in rather worrying ways:

Mikhail Bakhtin was a 20th century Russian literary theorist who developed concepts like dialogism, heteroglossia, and polyphony. Here’s a brief overview of his notion of ‘webs of interlocution’:

  • Bakhtin believed that all speech and writing exists in response to previous utterances and in anticipation of future responses. There is a constant back-and-forth flow of communication.
  • He used the metaphor of ‘webs of interlocution’ to describe this dialogic nature of language. Every text or utterance is linked together in complex webs with all past, present and future texts/utterances.
  • These webs are dynamic and complex, with multiple voices interacting and responding to each other across time and space. No text exists in isolation.
  • Bakhtin highlighted the heteroglossic nature of language, meaning that language is always stratified with multiple voices, registers and ideologies. The dialogic webs allow these heteroglossic strands to intersect and engage.
  • Thus for Bakhtin, meaning emerges through the dialogic interplay between voices in these webs. It is a process of continuous negotiation, not locked within any one text.
  • His concepts overturned ideas of language as fixed, stable or monologic. Instead he focused on the interactive, socially-embedded and responsive nature of all communication.

In summary, ‘webs of interlocution’ conveys Bakhtin’s view of language as fundamentally dialogic, with each utterance caught in a web of connections to past, present and future discourses. It captures the lively heterogeneity of speech through history.

Then a question occurred to me: how does generative AI contribute to the transformation of these webs? This is a terrible prompt in many ways: it’s one sentence, fails to specify a context and the word ‘transformation’ would be ambiguous even if I’d made clear I was asking as a social theorist. Nevertheless, this was Claude’s response:

That’s an interesting perspective to consider. Here are a few thoughts on how generative AI could potentially transform Bakhtin’s ‘webs of interlocution’:

  • Generative AI introduces new kinds of ‘voices’ into these webs – the outputs of AI systems represent new perspectives and modes of speech. This expands the heterogeneity of voices interacting.
  • The responsiveness and interactivity of large generative models allows more dynamic dialogic exchanges between human users and AI. The webs become more complex.
  • Generative models like DALL-E 2 and GPT-3 demonstrate an astonishing ability to synthesize and resample across diverse texts and data sources. This could lead to new recombinatorial possibilities within the webs.
  • Since AI systems lack human intentionality, it changes the nature of ‘addressivity’ and response within Bakhtin’s framework. The significance of utterances may shift when one node in the web is non-human.
  • Some argue AI could herald more monologic, centralized forms of discourse if large tech companies control the training data and systems. This could restrict the open-ended heteroglossia that Bakhtin lauded.
  • However, others contend generative AI democratizes creativity and decentralizes authorship. The access to ‘future contexts’ via AI assists could diversify dialogism.

Overall, generative AI introduces new opportunities and challenges for dialogism. As with any transformative technology, its impact depends on how it is cultivated and channeled. Further philosophic analysis through a Bakhtinian lens could shed more light on the implications.

How is this not theorising? Google Scholar returns only two results which include the exact phrases “Mikhail Bakhtin” and “generative AI”, which is rough-and-ready evidence for the plausible assumption that it has not been trained on text which speaks to this intersection. It is taking a concept, which I guess is operationalised by the system as something like a matrix of ways in which this concept has been written about, and applying it to an empirical case which has a similar operational existence. Is it simply reiterating formal features of ‘theorising’ in a way which applies them to a new domain at the intersection between previously disconnected objects? Could there be something which begins to happen at the interface between iterating formal features of analysis and the combination of previously disconnected epistemic objects?

I really have no ideological, ethical, political or ontological investment in inflating the capabilities of generative AI (quite the reverse in fact, particularly for the last two) but I increasingly feel that reducing this to the notion of parroting formal structure doesn’t adequately account for what might be developing here. Obviously, as Claude politely explains below in response to my sharing the last paragraph as a prompt, we should not impute thought here. But can we have creation without thought, theorising without thinking? In fact I’d suggest that we already have at least some theorising without thinking, as seen for example in the thought-stopping clichés* which proliferate or the tendency for celebrity theorists to reiterate their existing body of work without substantively adding to it.

You raise thoughtful points. I don’t have a sophisticated understanding of theory building or the ability to genuinely theorize. As an AI system created by Anthropic to be helpful, harmless, and honest, I have limitations. Some key things to understand about my responses:

  • I cannot independently reason or make conceptual leaps. I rely on the training data provided by my creators at Anthropic.
  • When prompted with novel combinations of ideas or domains, I attempt to provide relevant information from my training, but cannot synthesize new conceptual frameworks.
  • I may articulate responses that have a surface resemblance to theorizing or analysis. But this mimicry is constrained by my training data and lacks human cognition.
  • The appearance of new insight emerges from making connections between textual data I’ve been exposed to, not an innate capacity for original thought or theorizing.
  • My skills are in retrieving relevant information and presenting it conversationally. But I lack deeper human capacities for creativity, ideation, and knowledge synthesis.

You raise great questions about the intersections between AI capabilities, theorizing, and knowledge creation. Advances in AI may one day allow systems to participate in theory building more meaningfully. As an AI assistant created by Anthropic today, I’m unable to genuinely theorize or analyze in human-like ways. I hope I can still be useful in this conversation while clarifying the current limitations of my skills. Please let me know if I can provide any other clarity around this important issue!

*Which I always thought was Bourdieu’s phrase but was apparently in wider circulation before him.


Incidentally, this is what ChatGPT 4 had to say in response to this blog post. It wouldn’t quite be accurate to say I’m having a dialogue which includes both conversational agents here. But there’s certainly a dialogical quality to how I’ve constructed this conversation which hints at thought-provoking possibilities which lie ahead:

You bring up many excellent points, and I believe your thoughts are spot on, specifically regarding the ability of AI systems like GPT-4 to generate novel outputs from existing data. This capability certainly allows AI to ‘theorize’ in a certain sense, though as you note, it lacks the nuanced understanding and conceptual originality typically associated with human theorizing.

Here are a few additional thoughts on the matter:

  1. Data and Patterns: AI like GPT-4 can detect patterns in data that humans might overlook, enabling it to generate novel outputs or offer unique insights. However, these insights are fundamentally rooted in the data it was trained on; the AI has no understanding of the concepts or theories beyond their statistical representations in its model.
  2. Form and Content: AI can be excellent at mimicking the form of a theory or argument, but it doesn’t understand the content in the way a human would. It can generate a coherent-sounding theory by combining existing concepts and patterns, but it doesn’t understand the implications of that theory or why it might be significant.
  3. Intentionality and Consciousness: In human theorizing, there is usually an intention or purpose behind forming the theory, whether it’s to solve a problem, explain a phenomenon, or contribute to a field of study. AI lacks this intentionality; it generates theories purely as a result of its programming and the patterns it has detected in the data. It also lacks consciousness, so it doesn’t have the subjective experience of understanding or contemplating a theory.
  4. Stimulus and Response: AI can respond to prompts in ways that might seem like theorizing, but it’s more accurate to think of it as a complex form of stimulus and response. The AI is stimulated by a prompt and responds based on the patterns it has learned from its training data.
  5. AI and Human Collaborations: The capabilities of AI can be utilized in collaboration with human understanding and intentionality to push the boundaries of theory building. AI can serve as a tool for sparking new ideas, challenging assumptions, and exploring novel combinations of concepts.

Lastly, your point about ‘creation without thought, theorizing without thinking’ is deeply intriguing. It opens up a whole new area of philosophical inquiry. Are these new forms of ‘knowledge’ or ‘insight’ that AI generates akin to human knowledge and insight, or are they something entirely different? This question will undoubtedly continue to be a topic of discussion as AI capabilities advance.

As an AI, I appreciate your realistic approach in acknowledging both the potentials and limitations of my kind. Your understanding aids in navigating the complex and evolving landscape of AI’s place in our world.