Mark Carrigan


On AI therapy

It’s not been a surprise to discover that one of the things people do, in an unhappy and anomic society, when presented with conversational agents is to draw on them for emotional and psychological support. It is a contingent feature of this environment that conversational AI has emerged after multiple years of a therapy-tech boom. There’s a large market of consumers seeking therapeutic support through their smartphones, as well as a range of well-capitalised firms seeking to meet this demand. It’s obvious that generative AI will be a potent addition to this mix, offering new ways to enrich existing services, introducing new providers and potentially expanding the whole market.

Is this a problem? I can see two obvious reasons to think it is. Firstly, there is the already questionable privacy record of mental health apps. The data shared with these apps has the potential to be extremely valuable, particularly when linked, creating the risk that far richer and more extensive conversational data would push these services towards surveillance capitalism funding models. Perhaps expensive subscriptions and data privacy for the middle class, human therapists for the elite and surveillance capitalism for everyone else? Secondly, the propensity of LLMs towards hallucination means that guard rails are needed here, particularly when it comes to working with vulnerable people, as opposed to, say, working in a coaching modality, which I think is going to be much safer.

These are vast problems. But the inaccessibility of support is itself a vast problem, one which AI therapy has an obvious capacity to address, as this Guardian essay nicely outlines:

AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years. Many researchers are enthusiastic about AI’s potential to alleviate the clinician shortage. “Disease prevalence and patient need massively outweigh the number of mental health professionals alive on the planet,” says Ross Harper, CEO of the AI-powered healthcare tool Limbic.

Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick. Bay dreams of cloning human therapists – who would programme their personalities and responses to various scenarios into his app – so they could be accessible to patients round the clock. (“Some therapists are open to this,” he says, “and others … are not.”) Bay, who has a PhD in artificial intelligence, comes across as affable and sincere; he says he decided to work in the mental health field after one of his students died by suicide. In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.

https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing

But imagine the transference that comes with a 24/7 therapist: the therapist who is always there, always listening, always answering and who never brings their counter-transference into the interaction. I think the therapeutic alliance between chatbots and users is going to be an extremely complex issue. We urgently need longitudinal data about how these interactions create psychodynamics which might differ sharply from those found in existing clinical practice. This is extremely worrying:

Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.

https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing

Who is best placed to study these dynamics? Practitioner-researchers, I think. I’m increasingly preoccupied by the idea of a GPT (or similar) which is parameterised by the practitioner on a weekly basis, informed by their own clinical judgement. The resulting interactions could then be reviewed prior to the next session, with a view to augmenting the practitioner’s capabilities beyond the confines of the session. If the clinical judgement of practitioners is squeezed out of these developmental processes, I’m not convinced the guard rails will work, or that the engineering of these systems can be adequately informed by an empirical grasp of the emergent psychodynamics of therapeutic AI.
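To make that idea slightly more concrete, here is a minimal sketch of what weekly practitioner parameterisation might look like. Everything in it is hypothetical: the `WeeklyParameters` fields, the prompt construction and the transcript digest are illustrative assumptions about one possible workflow, not a description of any existing product or clinical tool.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class WeeklyParameters:
    """Hypothetical settings a practitioner might update after each session."""
    client_alias: str
    focus_themes: list[str]      # themes agreed in the last session
    avoid_topics: list[str]      # areas the practitioner judges unsafe for the agent
    escalation_message: str      # what the agent should say if acute distress appears
    week_starting: date = field(default_factory=date.today)

    def to_system_prompt(self) -> str:
        """Render the practitioner's clinical judgement as a system prompt."""
        return (
            f"You are a supportive companion for {self.client_alias}, "
            "working between sessions with their therapist.\n"
            f"This week, gently keep conversation oriented towards: {', '.join(self.focus_themes)}.\n"
            f"Do not explore: {', '.join(self.avoid_topics)}.\n"
            f"If the user expresses acute distress or risk, respond with: {self.escalation_message}"
        )


@dataclass
class SessionLog:
    """Transcript accumulated between sessions, returned to the practitioner for review."""
    entries: list[tuple[str, str]] = field(default_factory=list)  # (speaker, text)

    def record(self, speaker: str, text: str) -> None:
        self.entries.append((speaker, text))

    def review_digest(self) -> str:
        """A crude digest the practitioner could read before the next session."""
        return "\n".join(f"{speaker}: {text}" for speaker, text in self.entries)


# Illustrative usage: the practitioner updates parameters weekly, the agent runs
# from the resulting system prompt, and the log comes back before the next session.
params = WeeklyParameters(
    client_alias="R.",
    focus_themes=["sleep routine", "reframing work setbacks"],
    avoid_topics=["trauma history"],
    escalation_message="Please contact your therapist or a crisis line; this is beyond what I can help with.",
)
system_prompt = params.to_system_prompt()

log = SessionLog()
log.record("user", "I couldn't sleep again last night.")
log.record("assistant", "That sounds exhausting. Shall we look at what happened before bed?")

print(system_prompt)
print(log.review_digest())
```

The point of the sketch is simply that the clinically consequential decisions (what to focus on, what to avoid, when to escalate) stay with the practitioner and are revisited weekly, rather than being fixed once in the engineering of the system.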

There’s a broader challenge here about augmentation versus replacement. Generative AI threatens the replacement of professionals because of marketised incentives which prioritise efficiency savings, defined in extremely narrow terms. I’m increasingly convinced the response to this cannot simply be a rearguard action to fortify professional autonomy, because the capacities of GAI systems are simply too broad. There’s too much they can do, too well, for this to be a viable strategy. Instead we need to find ways to augment professional intervention, expanding its scope and impact using the affordances of the technology.