I must have used the phrase ‘cognitive outsourcing’ at least one hundred times this week. It’s a ready-to-hand phrase that conveys the risk that using LLMs leads academics and students to rely on the machine to do their thinking, rather than doing it themselves. It points to one of the most immediate problems for universities related to LLMs (in chatbot form and particularly in software like Copilot 365), which we all need to take seriously. The problem is that it’s not a good concept:
- It implies thinking is a narrowly cognitive process, a matter simply of the quantity of thought taking place. This misses the affective dimensions of thinking, in which our thoughts matter to us. It misses the unconscious dimensions of thinking, in which creative insight often works around cognitive contents rather than through them.
- It implies the relationship between cognitive insourcing (?) and cognitive outsourcing is a linear one, such that the more you’re using the machine, the less you’re thinking yourself. I’ve got 100k words with Milan Sturmer, coming later this year, explaining at great length why this is a nonsense ontology.
- It misses the relational dynamics of ‘cognitive outsourcing’. There are judgements of care (or its absence) made in what you choose to outsource or not. On some level you are fundamentally saying “I don’t give a shit” when you pass over responsibility for a task orientated towards other people to a machine. In some cases this might be justified. In other cases it might genuinely be “I don’t give a shit about X but I really care about Y, and outsourcing X gives me more energy to work on Y”, though we need to be careful with those judgements institutionally.
I’m not sure what the replacement is, but I think we need one. The problem is that it might not immediately convey the risks in the way that ‘cognitive outsourcing’ does.
