

Why generative AI guidance for students needs to be embedded in departments

I just read the Russell Group AI principles for the first time since they were released and was struck by principle number 2: “Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience”. This is exactly what I’ve been blogging about recently as the point where the sector is struggling to adapt to the diffusion of LLMs that has already happened within the student community. As the guidance itself acknowledges, what it means to use LLMs “effectively and appropriately in their learning experience” will vary between disciplines:

The appropriate uses of generative AI tools are likely to differ between academic disciplines and will be informed by policies and guidance from subject associations, therefore universities will encourage academic departments to apply institution-wide policies within their own context. Universities will also be encouraged to consider how these tools might be applied appropriately for different student groups or those with specific learning needs.

Unfortunately this places a great burden on subject associations at a point where many of them are still grappling with the financial difficulties generated by the pandemic: declining membership, increasing costs and at least some event income having been knocked out temporarily. It also assumes that subject associations have the capacity to do this, over and above the resources. It might be possible for associations with dynamic leadership and a strong base of academic members working on these issues, but even then it’s asking a lot, and most do not have this baseline level of resource. Where they do engage, it’s likely to be subject to institutional isomorphism: replicating the assumptions of other groups, because no one is yet clear what this all means and everyone is worried about being seen to misstep.

Subject associations were never going to be able to provide this guidance with sufficient depth and contextual sensitivity. This seems so obvious to me that it’s hard not to read the Russell Group principles as an (unconscious?) passing of responsibility for a difficult task to an external agent. Not least because the final statement under principle two illustrates what is actually needed to address this:

Engagement and dialogue between academic staff and students will be important to establish a shared understanding of the appropriate use of generative AI tools. Ensuring this dialogue is regular and ongoing will be vital given the pace at which generative AI is evolving.

I see no possible way around this. The dialogue has to take place, be embedded in existing processes and involve safe spaces in which staff and students feel able to talk frankly about their perceptions. It has to be informed by university policy but not subordinated to it. It has to continue for as long as the landscape of generative AI is changing. It has to be lightweight enough to get buy-in from a sufficient number of staff when workloads are spiralling amidst a general sense of crisis. It has to be robust enough to have some hope of generating norms and standards concerning what “effective and appropriate” use of LLMs means in their context.

The Russell Group principles describe the problem as if it were the solution. This is not a straightforward undertaking, as suggested by how little sign there is of it actually happening across the sector. Saying ‘dialogue is important’ requires that we think about what the infrastructure for that dialogue can and should look like. In practice, it is an infrastructure that would need to address a range of questions:

What’s actually happening on the ground?

What are students in our discipline using AI for? Which specific tools at what points in their work? How does this differ from what we imagine is happening?

What makes our discipline what it is?

Which capabilities and ways of thinking are foundational to what we do? What has to remain human for this to still be our field? Where might AI genuinely enhance rather than undermine these capabilities?

When does support become substitution?

At what point does AI use shift from supporting learning to bypassing it? How do we recognize genuine engagement versus its simulation? What’s the difference between scaffolding and outsourcing?

How do we assess in an AI-saturated world?

What forms of assessment still tell us something meaningful? How do we evaluate understanding when outputs can be generated? What new approaches might we need to develop?

Who gets left behind?

Which students have access to which tools? How does the wealth gap manifest in AI capability? What would meaningful support look like?

What’s the disconnect with professional practice?

How is AI actually used in our field outside universities? What happens when we prohibit tools that are standard in the workplace? How do we prepare students for reality?

How do we build collective capacity?

What do staff need to feel less anxious about this? What helps students use AI thoughtfully rather than desperately? How do we learn from what’s working and what isn’t?