In recent months I’ve been prone to arguing that GAI-proofing is a tactical rather than a strategic problem. It’s difficult as a logistical and intellectual exercise, at least if you want to preserve the ILOs of units which were built around vulnerable assessment, but it’s not a challenging undertaking in strategic terms. We need to get rid of essays to the greatest extent possible, scaffolding them in new ways when we can’t abandon them without distorting the pedagogical intention of a unit, and replacing them wherever we can with forms of assessment which are processual in scope and/or built around making things (ideally together) and reflecting on the process of having made them. The problem with essays is that they fix assessment at a single point of textual production which is by its nature substitutable. They’re vulnerable to GAI for exactly the same reason they were vulnerable to essay mills.
I could read this piece from WonkHE either as suggesting I’m exaggerating how GAI-proof these alternatives are, or as warning that there’s a risk of a shallow understanding of why/how they are GAI-proof:
A cottage industry is rapidly developing, promising to translate traditional assessments into supposedly generative-AI-proof formats. While freely available AI adaptive toolkits offer some superficial solutions, they greatly underestimate the power of current LLMs. For instance, replacing an essay with a task that asks for an AI-generated answer and a reflection on the AI’s usefulness naively assumes the AI can’t also generate the reflection.
Even that supposed bastion of plagiarism-proof assessment, the multiple-choice questionnaire, has been hacked. Websites like Virtual Professor offer a shockingly simple workaround: students merely hover over online test questions, and the correct answer magically appears. This underscores a key challenge — generative AI can significantly enhance students’ learning and assessment, but only if we move away from oversimplified, traditional modes and adopt more thoughtful and engaging frameworks.
https://wonkhe.com/blogs/the-continued-success-of-universities-hinges-on-the-response-to-the-generative-ai-reckoning/
I was pleased to see that part of their recommendation is something we’ve done on our programme for years. We’ve now built this mapping exercise into the pedagogy of other units as well, which is interesting to reflect on because we’re currently straining against the modularisation imposed on us in order to provide a more integrated learning journey for students:
This could mean learners documenting their “technology map” as part of their assessment — a map shared with teachers and potentially with peers. LLMs can also be used to assess students’ level of understanding and engagement, providing aggregated data at both programme and module level.
How do we build a framework for “expertly guiding interactions between learner, teacher, and LLM”? Even if their rhetoric puts me off at points, I agree with them that this is the question, and it has implications for the viability of the sector in the longer term, at least outside those institutions which confer elite status on students through participation alone. We should already have been building pedagogical activity around “formative assessment and discovery-led learning”, before it became a practical necessity of coping with GAI.
