I’m still a bit surprised to find myself taking positions like this. But this is an extract from Generative AI for Academics which I suspect might divide opinion, yet which I entirely stand by:
In contrast, talking to a conversational agent can feel strange. Instructing it is a much more comfortable practice, treating it as a tool which you just happen to be able to use with the power of natural language. Or imagining that you’re engaged in a weirdly quotidian form of computer programming, defining operations through the power of your words without the necessity of learning a more or less arcane programming language first. In contrast, actually talking to it, framing it as having the capacity to meaningfully parse and respond to your intellectual observations, can feel extremely odd at first. The intellectual awareness that it doesn’t really have this capacity, at least not in anything like the sense the friend sitting next to you at a conference does, certainly doesn’t help in this respect. But I think it would be misleading to imagine that intellectual objection wholly explains why this feels weird. There’s a cultural block which gets in the way, an unwillingness to dignify the existence of these newfound machinic capacities by engaging with them in a way which recognises the extent of their intellectual ability, with undercurrents of what the late social theorist Margaret Archer (2021) described as ‘robophobia’. But if you get past that block then there are some remarkable conversations to be had, as well as many utterly mundane ones. It can’t replicate the experience of talking to that friend at the conference, but it can supplement it in enriching and mutually supportive ways, providing you with an interlocutor to further parse your conference discussion when you are travelling home that evening.
