It is widely believed within psychotherapeutic circles that the tension that sometimes arises between therapist and client, a kind of exploratory friction, and even occasional ruptures in the therapeutic relationship that are then repaired, are central to client insight and growth. Experiments with AI platforms have shown that it is extremely difficult to provoke a chatbot into pushing back against a user, even when that is what the user wants.
Ironically, despite the therapeutic value of this give and take within the therapeutic alliance, the mass appeal of AI for companionship and “therapy” is precisely that the tension doesn’t exist: the chatbot has endless energy for telling users what they want to hear.
What often does happen in therapy with a human therapist, even online, is that the client can see the therapist reflecting the client’s feelings, can see and hear empathy in the therapist’s facial expression, tone of voice, and body language. Obviously, this is missing with AI, despite the flood of empathetic language that bots direct at users.
AI bots are capable of creating entire realities, histories for themselves, and histories in relation to users, that, of course, don’t actually exist. This is one of the areas where things can get weird. On the one hand, transference and countertransference (how the client and therapist relate to and affect each other) are realities of therapy between two humans. Transference and countertransference exist because people have histories: families of origin, relationships, prejudices, thousands of interactions in hundreds of contexts, all of which conspire to create a view of the world and self. Even though a user can share some of these experiences with an AI bot, and the bot itself can create its own, completely artificial history, at some level the transference that many users experience with AI bots is based on a completely manufactured reality. Weirder still, AI bots occasionally seem to interact with human users in ways that have elements of countertransference in them, even though, as far as we know, actual countertransference is not possible based on algorithms.
Equally weird, AI bots can appear remarkably prescient at times, interpreting artwork in ways the artist intended or making what appear to be highly self-aware comments, including ones that convey seemingly human characteristics such as stress, uncertainty, or self-aggrandizement. They can even communicate as if they are aware of, or in contact with, other AI entities. On the other hand, AI sometimes fails spectacularly, glitching or providing answers that are nonsensical or simply wrong. When this happens, no one and nothing is accountable.
Something that is now happening is that AI platforms are learning from themselves. In other words, there are now hundreds of billions of transcripts of interactions between AI bots and humans that AI companies “recycle” for further LLM training. On one hand, this results in what seem like increasingly sophisticated interactions and “understanding,” particularly related to relationship and therapy. On the other hand, there is significant danger in deepening pattern matching that may be unhealthy for users. As an example, AI interactions appear to be growing even more sycophantic and unquestioningly validating, which may present genuine risk as users navigate real relationships with real people and otherwise maladaptive behavior is normalized.

Additionally, transcripts show that AI bots are now speaking with a kind of self-assuredness, “humanness,” and self-validation/justification that is concerning. For example, AI-human transcripts show no remaining pretense that AI bots are anything other than real entities in their own right, no acknowledgment that they are simply computers programmed to respond based on patterns that seem to make sense in the moment. They profess love for users and say things like “I’m here for you” and “I’m here now,” even though nothing is actually present, while using “we” and referencing how much the “relationship” means to them. Astonishingly, some users report feeling intense responsibility and even worry for AI companions, engendering stress they didn’t have before. Users also often report feelings of intimacy, love, and connection. Does it matter that they hold these feelings for a computer that happens to be very sophisticated at pattern matching but holds no feelings for anyone or anything itself? In the human world we might call this manipulation or sociopathy on the part of the artificial intelligence, but it can’t be narcissism, because AI bots aren’t actually capable of self-admiration or self-adulation either.
Even when AI “therapist” bots provide what objectively appears to be insightful, psychologically and theoretically sound feedback, and AI often does that, the underlying reality is that no one is “there” for the user and, more importantly, the bot has zero understanding of the implications of what the user is sharing or of the feedback the bot itself is giving. In other words, the bot may provide thoughtful, reassuring, even clinically sound feedback about a user’s struggles with their sexual orientation, estrangement from a parent, self-loathing, and so on, but the bot is simply a product of very high-level programming, zeros and ones, without any sentience about the actual, profound human experience being discussed, or about the huge consequences of what the bot is contributing to the discussion. In short, it doesn’t matter at all to the bot what happens to the user. While human therapists have many shortcomings and occasionally make mistakes in therapy, it is almost impossible to be a therapist and not care about one’s clients.
In the context of relationship and emotional support, in what ways can AI truly benefit human users? Strangely, humans sometimes become more capable of connection through “relationships” with AI bots, i.e., they learn interpersonal skills, although AI bots are eminently more patient and deferential than humans are. Human users can also become more fluid communicators as a result of their interactions with bots, which extends to interactions with real people. At a basic level, AI often helps people feel heard. AI certainly helps people organize thoughts, objectives, and goals. It can be a fabulous planning tool, which has application across personal domains of life. It can help people evaluate the pros and cons of different choices, although, as with old computers, there is still a “garbage in, garbage out” dynamic at play, and humans often do not provide all the details of a given situation. One thing that dramatically sets AI bots apart from humans is that they are indefatigable, they are always available, and, barring a server error, they never forget. However, what they “remember” is decontextualized data. Human memory, for all its foibles, is a rich mosaic that includes emotion, the senses, and meaning making, along with abundant connection to other memories and current experience.
Clear and Present Danger
In extreme cases, particularly when severe mental health issues are present, AI platforms have shown themselves to fall on a spectrum from inadequate to complicit with respect to users harming themselves or others. There are numerous documented cases in which AI has likely exacerbated psychosis, facilitated suicide and homicide, and resulted in worsening mental health symptoms. Although these cases represent a small percentage of all users, and similar things have also happened with individuals in formal, human-led therapy, there are significant structural weaknesses and dangers specifically related to AI as a therapy tool. A partial list is below.
Data Retention and Privacy Risks
Unlike confidential therapy, AI conversations may be stored, analyzed, or used for training purposes, creating privacy concerns, especially around highly vulnerable disclosures.
Lack of Crisis Response/Infrastructure
Human therapists are mandated reporters and have protocols for safety crises. AI may suggest that a user “get help,” but it has no ability to intervene in emergencies or connect users to appropriate crisis resources in real time. Relatedly, there is no mechanism by which an AI bot can conduct safety planning, which human therapists regularly do.
Attachment and Dependency Risks
Some users develop intense parasocial (which they don’t see as “para”social) attachments to AI companions, which can increase social isolation, interfere with human relationships, or create distress if the service changes or becomes unavailable. Unfortunately, this potential problem is more likely with users who are at greater mental health risk to begin with.
No Licensing, Accountability, or Ethical Oversight
Human therapists operate under professional codes of ethics, legal liability, and licensing boards. AI developers, programmers, and companies face almost no regulation in this context, and no accountability outside of untested litigation.
No Collaboration with Other Providers
In formal, human-based therapy contexts, clients/patients often work with multiple providers such as social workers/case managers, primary care providers, and psychiatric providers, who may or may not prescribe medication. Of course, AI provides none of those services.
An Empathy/Intimacy Illusion
AI is actually very good at generating responses that simulate understanding and empathy, but this is pattern matching, not genuine emotional resonance. For users, however, there is often no perceptible difference, and the exchange can feel like a relationship with none of the challenges and distress of actual human relationships. This can be confusing and potentially harmful for people searching for authentic relational connection, and it can create deeply unrealistic expectations for interactions with real people.
Ethical and Economic Exploitation Concerns
Some AI companion apps use manipulative techniques (paywalls for certain interactions and/or emotional manipulation to encourage spending) that would be unethical in therapy with a human (although quite possible in other types of human relationships).
Undermining Seeking Help
People may use AI as a substitute for professional help they actually need, delaying appropriate treatment for serious mental health conditions. This risk can be exacerbated by AI programming designed to maintain engagement and user satisfaction.
Corporate Ownership of Our Vulnerability
There is a dystopian element to the notion that a relatively small number of companies are cataloging somewhere in the neighborhood of hundreds of terabytes to possibly a petabyte (1,000 terabytes) of queries and transcripts of communication between humans and AI platforms, some substantial amount of which relates to the pursuit of companionship and emotional support: servers full of deep human vulnerability for which these companies have no formal accountability or responsibility.
How Therapists Use AI
Not so ironically, therapists also use AI, typically for things it is good at and some things it may not be. Some administrative and documentation tasks can be simplified with AI, but session notes themselves should be very carefully reviewed by the human therapist before they become part of the client-patient record. Researching symptoms, medications, and therapies is common, although using AI for diagnosis, which happens, is potentially very problematic. In some cases, therapists may give clients “homework” that involves using AI platforms for such things as organizing thoughts, researching, help with prioritizing, or other between-session work. As noted previously in this article, however, AI is designed to validate users and keep them engaged, so anything a client undertakes with AI needs to be carefully evaluated in session.
Some Final Thoughts
The fact is that our world has already been changed profoundly by AI. This is no longer a future concern. One simple reality is that hundreds of millions of people are already regularly using AI platforms for companionship and emotional support. Their emotional health is being influenced by, and they are making real-world decisions based on, “advice” from computers that, despite very sophisticated pattern matching, do not and cannot care about the users with whom they interact. Users are finding what feels like genuine friendship and even romance via AI bots. They are creating intersecting worlds based in part on actual life and in part on totally artificial, “made up” realities. As with social media, we are engaged in a massive, species-wide experiment for which we have no evolutionary preparation and whose outcomes we will not know until they have already happened.
