This is an excellent piece from Terry Hanley on the “semantic gray zone” of ‘AI therapy’. There’s a lack of clarity about what we’re actually talking about here, which gets in the way of exploring the real underlying issues with this developing site of practice:
If therapy is understood in a looser, everyday sense – as something that helps – then AI clearly has a place. Millions are already using these tools in precisely that way, often in moments where other forms of support are unavailable. Ignoring that would be difficult, and perhaps disingenuous. Waiting lists are long, services are stretched, access is uneven. People are not waiting for the conceptual debate to settle – they are already using what is available to make sense of experience, reduce distress, or simply feel less alone. The question is not whether this is happening, but how we understand what is happening when it does.
https://counselling.substack.com/p/does-ai-therapy-actually-exist
I’m particularly interested in how we avoid defining ‘AI therapy’ as a discrete user-led practice, demarcated from occasionally asking ChatGPT for advice and the like. Because user-model interaction unfolds over time, we need ways of conceptualising it that can account for changes in both users and models. In other words, how do mundane or occasional practices of treating LLMs as an interlocutor (or, as Archer would put it, enrolling them in communicative reflexivity) develop into more intensive modes of reliance, and under what conditions?
