Raiding the inarticulate since 2010


Using Claude to support an intellectual dialogue: a case study of my conversation with Helen Beetham

This is Claude’s response to the first podcast in our new series:

The conversation between Mark and Helen highlights the complex and often conflicting perspectives surrounding the rapid emergence of generative AI in higher education. As an interlocutor with a background in social theory and educational technology, I aim to build upon their discussion by exploring the tensions that arise and proposing potential paths forward.

One of the central tensions that emerges from the conversation is the dichotomy between the individual and the collective response to generative AI. Mark’s focus on fostering a creative, value-driven approach to using these tools at the individual level is a crucial starting point. By encouraging academics to thoughtfully engage with conversational agents as interlocutors rather than mere time-saving devices, we can cultivate a professional culture that leverages the technology’s potential while mitigating its risks. This approach aligns with the idea of “throwing grains of sand into the machinery of resigned complicity,” as it empowers individuals to resist the pressures of productivity and automation.

However, as Helen rightly points out, the individual response alone may be insufficient in the face of the broader political and economic forces driving the adoption of generative AI. The threat of deskilling and restructuring academic labor is real, and without a coordinated collective response, we risk exacerbating existing inequalities and eroding the value of academic expertise. This is where Helen’s call for a “creative curriculum” that critically examines the bigger picture becomes essential. By equipping students and academics with the tools to understand the social, political, and economic implications of generative AI, we can foster a more informed and engaged community of users and resistors.

The challenge, then, is to bridge the gap between individual agency and collective action. One potential avenue is to focus on building alliances and solidarity across different sectors and professions. As Helen notes, there have been successful instances of pushback from creative industries and unionized sectors such as healthcare. By learning from and collaborating with these groups, academics can develop more effective strategies for advocating for their interests and shaping the institutionalization of generative AI in higher education.

Another crucial aspect of this conversation is the need to move beyond binary narratives of techno-utopianism and doomerism. As Mark points out, the tendency to frame generative AI as either a panacea or a looming catastrophe can obscure the nuances and opportunities for constructive engagement. Instead, we must strive for a more grounded and specific understanding of the technology’s capabilities and limitations, as well as the specific ways in which it is being deployed and resisted in different contexts.

This requires a commitment to empirical research and open dialogue, both within and beyond the academy. By documenting and sharing examples of successful interventions and alternative models of development and deployment, we can begin to imagine and enact more equitable and socially beneficial futures for generative AI in higher education.

Ultimately, navigating the tensions of generative AI in higher education will require a multifaceted approach that combines individual agency, collective action, and a critical, empirically-grounded understanding of the technology’s implications. By fostering a professional culture of creative and thoughtful engagement, building alliances across sectors, and moving beyond binary narratives, we can work towards a future in which generative AI is harnessed for the common good rather than serving as a tool of deskilling and automation.


This wasn’t quite as interesting as I had hoped, though I used a slightly lazy prompt (attached below). I’m pretty sure the best way to do this would be to write out an exemplar piece, talk it through to identify the exemplar’s core features, and then combine the two to make a template.