Since the launch of OpenAI’s ChatGPT in November 2022, engagement with generative AI in higher education has understandably been preoccupied with the challenge it poses to assessment integrity. Immediate calls for outright bans and a return to in-person assessment have thankfully given way to a more nuanced debate involving assessment reform and the incorporation of AI literacy into the curriculum, in order to prepare students for a working world in which these are likely to be familiar tools. There is growing awareness of the harms entailed in wrongly accusing students of malpractice on the basis of unreliable AI detectors, particularly given their tendency to flag the work of non-native English speakers. It is likely we will move towards contextualised, holistic and practice-based assessment becoming the norm, building on existing trends in this direction which predated generative AI. While there might be a new wave of surveillance pedagogy built around the expectation that students record themselves in order to demonstrate the provenance of their work, it feels like we are moving into a new phase of steering these developments in ways consistent with educational standards rather than simply trying to hold them at bay. This opens up space in which the sector can begin to grapple with the many other challenges generative AI poses for universities, beyond student practice and assessment integrity.
My biggest concern is that financial pressures could lead universities to embrace automation in the hope of reducing staffing levels. What is widely recognised as a “broken funding system”, coupled with the impact of inflation and high interest rates, could easily give rise to the hope that new technologies make it possible to replace staff with automated systems. The problem is that even where routine tasks can be automated, they still require human engagement and oversight for quality control. The outputs of AI systems are only as good as what is put into them: “garbage in, garbage out”, as the saying goes. This is as much a matter of human expertise as it is of prompt engineering: the understanding of what a task entails, how it fits into its context, and the tacit knowledge on which all organisations depend. In universities, which will continue to change in a context of social upheaval, this expertise will be an ongoing necessity rather than something articulated once and then dispensed with. Furthermore, it is the only real bulwark against the tendency of these systems to ‘hallucinate’ (produce statistically plausible but factually incorrect statements), which will remain a feature of even the more advanced models to come. If automation leads to less in-house expertise, it will undermine the very conditions required for successful automation. Human expertise is needed to realise the value, and mitigate the risks, of generative AI within universities.
Getting the balance right on AI governance poses another difficulty. It was reported recently that there have been over 150,000 ChatGPT connections from campus at Glasgow University since May. While many of these are undoubtedly students, we can assume a significant number of staff are amongst them too. There is no reason to imagine that Glasgow is atypical in this respect. How many staff are currently using personal accounts with generative AI systems for tasks which are partly or wholly part of their normal work? At present OpenAI’s policies are surprisingly benign, in that ownership of both input and output is retained by the user, albeit with significant caveats attached; it remains to be seen whether this holds true once the firm is under pressure to demonstrate profitability. The ease with which these services can be accessed through personal devices means it will be hard to reliably prohibit their use, raising the question of how to regulate it in a way that balances the potential for innovation with universities’ governance responsibilities. Enterprise systems like Microsoft’s Copilot offer solutions to these problems, but they might in turn generate new ones, including locking universities further into an existing ecosystem.
There are also sustainability challenges posed by the intensive energy and water costs involved in training and operating generative AI systems. The graphics processing unit (GPU) chips on which these models run require 10-15 times the energy of a traditional CPU, with significant water consumed in cooling the servers. This has been described as the ‘dirty secret’ of the current hype wave, with what Timnit Gebru has called the “corporate pissing contest” of ever larger models leading to spiralling ecological impacts. If generative AI becomes a routine feature of knowledge work, in the process inevitably losing the moniker ‘AI’, a set of computationally intensive operations will be inserted into a range of processes which previously worked perfectly well without them. This would be an intensification of digitalisation’s environmental impact at precisely the point where we should be heading rapidly in the other direction. It might be possible to mitigate these dangers through sustainable procurement policies, such as using services which operate green data centres or restricting the use of large models to cases where they add genuine value. But a switch towards enterprise systems could limit this room for manoeuvre, illustrating how solving one problem can make another more intractable.
To frame generative AI in these terms risks accusations of what Dan McQuillan has called ‘AI realism’: asserting that there is no alternative to developments which can and should be questioned. It is not clear to me, though, how this implicit rejection of generative AI could reach beyond the level of discursive critique, at least in the political context in which we find ourselves. In contrast, there is an immediate opportunity to steer the uptake of these technologies within universities in ways which support rather than hinder the flourishing of students and staff. The choices made over the coming months will influence the direction of travel in the sector for years to come. It is important that critical voices are heard in those discussions, in a way that speaks to the practical decisions being made about generative AI in the strange organisations which are universities.
