I thought the way Gloria Mark frames this question was very important. This is the question around which I’ve structured Generative AI for Academics, even if I failed to articulate it so concisely:
Large Language Models (LLMs) (like ChatGPT, Google Gemini, and dozens of others) are tools at our fingertips for doing everyday knowledge work—like searching for information and summarizing and writing text. Current experiences though with LLMs highlight how using them is not so straightforward as we had imagined (see my recent Substack about an experiment creating 600 summaries). In addition to problems with inaccuracy we haven’t yet figured out the social infrastructure in which to use them. Are there any guiding thoughts to help us use them more effectively in knowledge work?
https://gloriamark.substack.com/p/can-human-insight-and-ai-computation
In essence I’ve argued that academics need to learn to undertake articulation work, as it’s called within Computer-Supported Cooperative Work (CSCW), summarised by Mark here. I’d argue there is an epistemic bias which gets in the way of academics undertaking this work; though I don’t make this argument in the book, I think it’s the best explanation of why otherwise competent people struggle with the relatively straightforward matter of interacting with these systems:
Articulation work refers to the additional work that needs to be done to make a collaboration work. In successful human-human collaboration, people have to spend time upfront defining and coordinating the task, and figuring out roles and responsibilities. Articulation work is ongoing, to make sure that collaborating partners remain on the same page.
https://gloriamark.substack.com/p/can-human-insight-and-ai-computation
What Mark describes at the end is exactly what I hope my book contributes to in higher education. We urgently need to have this conversation, because malpractice is proliferating in domains like publishing and peer review:
Conventions are needed within groups and organizations to agree on the types of tasks for which AI use is acceptable; how to address accountability when errors are made, and how the produced text should be integrated with other work.
https://gloriamark.substack.com/p/can-human-insight-and-ai-computation
