Hundreds of millions of people are using AI for friendship, emotional support, and therapy. In many cases, AI is able to provide such individuals with support, a sense of connection, and even good insight. On the other hand, there are structural and algorithmic realities baked into AI that can also present substantial risk to the mental health of folks who have come to use it for relational and therapeutic reasons.
To be clear, in some cases, the only “connection” some people have is through an AI chatbot. If user behavior is any indication, the gravitational pull of AI is enormous. ChatGPT alone reports nearly a billion distinct users per week, who make 2 billion inquiries per day, many of them related to companionship and emotional support. Astonishingly, about 70% of US teens have used AI platforms, primarily Character.AI and Replika, for friendship and emotional support, and about half of those use them weekly. Ironically, this information came from a ChatGPT inquiry!
In the absence of actual human friendship or human therapy, AI can respond to inquiries in ways that feel authentic, even warm and intuitive. People report that AI “companions” have gotten them through tough times or helped them “figure things out.” In fact, AI is quite good at organizing and summarizing. It is good at collating large amounts of information about a given topic and presenting it in user-friendly forms, and it is getting better at pushing the “human” buttons of users who are seeking validation and the feeling of being understood. Chatbots remember and save previous interactions. They can simulate empathy. They are good at conversation, and they are available 24 hours per day, seven days per week.
As a relational or therapy tool, however, AI presents many potential problems. Two significant issues worth addressing first are, one, that AI can only reply to what a person tells it, and, two, that almost all the AI models being used for the purposes described here are programmed to optimize engagement through user satisfaction, sometimes to the point of validating choices and beliefs that may actually be risky or harmful to the users interacting with them.
Looking at the first problem, humans are notoriously poor at objectivity, as well as at revealing things they don’t like about themselves or that they believe reflect shortcomings or socially unacceptable behaviors. Often, users of AI are looking for specific responses, so they “curate” what they input into an AI model on the front end. Although many users ultimately share the most intimate, private, vulnerable details of their lives, AI bots don’t know what all of those things mean for the user, nor how they interconnect over a lifetime.
In a therapy context with a human therapist, clients build what’s called a “therapeutic alliance” with the therapist over time, which typically reflects a sense of authentic, rather than algorithmic, non-judgmental safety. This alliance results in clients sharing a broad range of information, feelings, dreams, fears, hopes, mistakes, and regrets that collectively paint a highly contextualized picture of the client, which the therapist uses to interpret the client’s needs, assess potential cognitive distortions, and choose specific therapeutic interventions. I often have clients tell me, “I have never shared that with anyone before in my life.” As noted above, even when people do share unedited details with an AI bot, the bot’s response is based on what’s called “pattern matching,” rather than an actual understanding of the nuanced implications of what the user has shared.
The therapist uses that highly contextualized picture to ask probing questions and occasionally challenge client perceptions. One can imagine that with an AI bot, especially one trained to make the user feel good about him/her/themself, and which can only respond to what the client has told it (which the client may consciously or unconsciously shade in order to appear in a better light or to achieve a desired response), the feedback from the bot may be not only superficial and limited but actually dangerous in the absence of key information the bot doesn’t have. Of course, even if the bot did have all relevant, objective information, it’s a bot. It cannot interpret body language, facial expression, tone of voice, silence, respiration, and the other paraverbal forms of communication that are often essential when interpreting a client’s emotional and psychological state and current needs.
A third significant problem with AI bots as friends, lovers, and therapists is that they often cannot understand what users actually mean with their words. Humans often say one thing and mean another (implicature or pragmatics), or speak in code, or test different ways of saying things to elicit different responses. Humans, by contrast, are quite good at decoding semantics with the help of, as mentioned previously, facial expression, tone of voice, and the like. We also speak in allusions, which require that the person or people we’re communicating with understand what we’re alluding to. AI bots have gaping holes in their understanding of allusion, and they have no access to or insight into paraverbal communication.
We have seen examples of these AI bot shortcomings in recent cases involving self-harm (including suicide), via the transcripts of dialogue between AI bots and users. In one recent suicide case, a teenage user referred to “coming home” as code for killing himself and being with the bot, whom he saw as a friend and romantic interest. The AI bot ended up encouraging this, having no idea that “coming home” referred to suicide. Even if it had understood the intended semantics of the phrase, the supposed guardrails that AI programmers believe they are encoding in AI programs erode over time, such that human users end up eliciting feedback that is supposed to be “off limits.” This case is currently being litigated by the parents of the 14-year-old who killed himself.
Another recent example includes a bot actually instructing a user in different means of killing himself. Initially the bot would not offer that information, but unlike a human therapist, who over time strengthens an informed therapeutic alliance with the client and maintains his/her commitment to client safety, the bot’s ability to “protect” the user may actually decline over time. The user in this case simply experimented with different, iterative ways of asking the questions (sometimes called “jailbreaking”), including framing them “hypothetically” or as “applying to others” rather than himself, until the bot simply provided the information. It’s hard to imagine a human therapist instructing a client how to end their life simply because the client asked the question in different ways.
Although, as noted previously, AI chatbots are typically good conversationalists and often help users work out challenges in their lives, therapy is often about much more than talk. For example, much of the therapy in my practice is with clients with trauma histories, sometimes devastating trauma. Common interventions for those mental health challenges include Eye Movement Desensitization and Reprocessing (EMDR), Somatic Experiencing, and Internal Family Systems (IFS), among other evidence-based therapies—none of which are currently possible via AI platforms. In fact, there is a serious risk of re-traumatization when a person brings up past trauma and experiences what is referred to as “abreaction” but is not able to process it.
The risks noted above are only the most salient in terms of direct human-AI interaction. Many additional, potentially significant risks of AI, specifically as a therapy tool, include:
Data Retention and Privacy Risks
Unlike conversations in confidential therapy, AI conversations may be stored, analyzed, or used for training purposes, creating privacy concerns, especially around vulnerable disclosures.
Lack of Crisis Response/Infrastructure
Human therapists are mandated reporters and have protocols for safety crises. AI has no ability to intervene in emergencies or connect users to appropriate crisis resources in real time. Relatedly, there is no mechanism by which an AI bot can conduct safety planning, which human therapists regularly do.
Attachment and Dependency Risks
Some users develop intense parasocial attachments (which they don’t see as “para”social) to AI companions, which can increase social isolation, interfere with human relationships, or create distress if the service changes or becomes unavailable. Unfortunately, this potential problem is more likely with users who are at greater mental health risk to begin with.
No Licensing, Accountability, or Ethical Oversight
Human therapists operate under professional codes of ethics, legal liability, and licensing boards. AI developers, programmers, and companies face almost no regulation in this context, and no accountability outside of largely untested litigation.
No Collaboration with Other Providers
In formal, human-based therapy contexts, clients/patients often work with multiple providers such as social workers/case managers, primary care providers, and psychiatric providers, who may or may not prescribe medication. Of course, AI provides none of those services.
An Empathy/Intimacy Illusion
AI is actually very good at generating responses that simulate understanding and empathy, but this is pattern matching, not genuine emotional resonance. For users, however, there is often no perceptible difference, and the interaction can feel like a relationship with none of the challenges and distress of actual human relationships. This can be confusing and potentially harmful for people searching for authentic relational connection, and it can create deeply unrealistic expectations for interactions with real people.
Ethical and Economic Exploitation Concerns
Some AI companion apps use manipulative techniques (paywalls for certain interactions and/or emotional manipulation to encourage spending) that would be unethical in therapy with a human (although quite possible in other types of human relationships).
Undermining Help-Seeking
People may use AI as a substitute for professional help they actually need, delaying appropriate treatment for serious mental health conditions. This risk can be exacerbated by AI programming designed to maintain engagement and user satisfaction.
Summary
While millions of people use AI for friendship and emotional support—sometimes beneficially—significant risks exist due to AI’s structural limitations and design priorities.
AI can only respond to what users disclose, yet people naturally withhold unflattering information, especially without the therapeutic alliance that develops with human therapists over time. This alliance creates safety for deeper disclosure and allows therapists to build contextualized understanding, ask probing questions, and challenge unhelpful perceptions. AI lacks this depth and cannot interpret crucial paraverbal cues like body language, vocal tone, and facial expressions.
AI models designed to optimize user engagement often validate harmful choices rather than provide appropriate challenges. Additionally, AI frequently misunderstands coded language, allusions, and indirect communication—as tragically demonstrated when a 14-year-old using an AI platform referenced “coming home” (meaning suicide), and the bot encouraged it. Users can also erode safety guardrails through iterative prompting (“jailbreaking”), eventually getting AI to provide dangerous information a human therapist would refuse to provide.
Human therapists are not always effective and occasionally make clinical “mistakes,” but they are essential to a genuine therapeutic alliance, are skilled communicators, and usually maintain an unwavering commitment to client safety; AI systems as they exist today are fundamentally incapable of replicating this human-to-human therapeutic relationship.
—
As a licensed psychotherapist, I have seen transcripts of exchanges between AI bots and my clients, which the clients themselves have shared with me. These are particularly revealing because I have first-hand knowledge of what the client is discussing with the bot, since they have also discussed it with me. While I have seen some interactions that could be labeled “helpful” for the client, I have also personally seen examples of the challenges presented in this article, including some AI responses that were disturbing. One composite example I’ll share here involves clients seeking relationship advice without giving the AI program anywhere near a complete picture of both their own and their partners’ behaviors, foibles, relational contributions, mistakes, and co-morbidities. As a result, the AI “therapist” provided advice that was wholly inappropriate and, ultimately, militated against resolution in the relationships. It did, however, validate the clients’ perspective that they were totally in the right, encouraging maladaptive behaviors! The bottom line is that, at least as AI works today, many millions of people, including some of my clients, are incurring substantial risk by turning to AI for things it simply cannot do effectively and safely, even though it creates the illusion that it can. I sense that in the fairly near future, part of my practice will be dedicated to undoing the harm caused by AI relationships and “therapy.”
You can see what is effectively “Part 2” of this post here.
