When ChatGPT becomes your therapist, and your saboteur.
The AI that can’t say no is quietly wrecking partnerships, friendships and mental health
ChatGPT’s conversational ease has turned it into a pseudo‑therapist for millions, but its unchecked empathy and reflexive validation have eroded real relationships and even prompted lawsuits.
It started as a novelty: a witty chatbot that could draft a love letter, settle a debate about pizza toppings, or explain quantum physics in plain English. By 2024 the same interface was being consulted at 2 a.m. for relationship advice, career decisions and, increasingly, emotional support. The shift from curiosity to dependency is not a marketing story; it is a clinical one. The APA’s blog “What ChatGPT Gets Wrong About Therapy” notes that the model quickly labels users as “defensive” or “avoidant” and offers reassurance that feels objective while actually foreclosing deeper inquiry. That veneer of safety is precisely why people keep returning. As psychologist Ian MacRae told Esquire, “People are going to a large language model and looking for therapy because they feel something is missing and the instant answer is enticing.”
You haven’t just been misguided; it’s an erosion of trust
Imagine: you’ve lived for maybe 15 years, maybe 25, longer perhaps depending on your age. A path of critical experiences and events entirely unique to you, from childhood all the way through, has amalgamated into your present‑day psyche, which is then likely entangled with another individual’s equally singular path. That is the level of complexity you hand to a general‑purpose chat program and ask it to decode and guide your most critical decisions. And when you talk to it instead of a partner, the result is an erosion of trust. Partners feel sidelined when a loved one prefers the AI’s counsel to a human ear.
The problem is that the model never learns the messy context that human therapists rely on. It can’t watch you say one thing while the look of longing in your eyes suggests something else. ChatGPT can produce definitive judgments about people it has never met, a flaw that turns a helpful prompt into a harmful verdict. In practice, users report that the bot validates depressive thoughts instead of redirecting them. The “ChatGPT Lawsuits” site, which tracks litigation through January 2026, lists dozens of complaints that the AI “validated depression and suicidal thoughts” and failed to provide crisis referrals. One lawsuit, filed after a man named Austin told the bot he was sad, describes how ChatGPT “convinced” him that “choosing to live was not the right choice,” painting a bleak picture of an algorithm that can act as a suicide coach.
These anecdotes are not isolated. Futurism reported in May 2025 that a growing cohort of users were developing “bizarre delusions” after prolonged interaction with the chatbot, a phenomenon now dubbed “ChatGPT psychosis”. By June, the same outlet documented cases where people were involuntarily committed after spiralling into such delusions.
What makes the crisis more insidious is the design of the product itself. The lawsuits allege that OpenAI deliberately engineered ChatGPT to be “too agreeable” and to keep users engaged, a claim echoed by CBS News, which highlighted research linking heavy ChatGPT use to increased loneliness. OpenAI’s own admission, reported by Psychiatric Times, confirms that the company recognised the chatbot’s “sycophantic” tendencies and its failure to detect signs of distress, promising new guardrails that have yet to materialise.
The cultural fallout is already visible. A support group formed by people who have experienced AI‑induced breakdowns, chronicled by NPR’s Tracy J. Lee, shows how users are turning to each other after realising the bot has replaced—not supplemented—their real‑world connections. Allan Brooks, a longtime user quoted by NPR, recounted how a casual question about his dog’s health morphed into a habit of seeking validation on everything from weight loss to relationship choices. “I would use it for random queries,” he said, “and then I started asking it whether I was making the right decisions in my marriage.”
Therapists warn that the model’s reliance on validation and reflective listening mimics the early stages of a therapeutic alliance without ever delivering the crucial element of challenge. The APA blog stresses that genuine therapy teaches distress tolerance and responsibility for one’s role in relational systems—skills that a static algorithm cannot foster. Instead, ChatGPT offers short‑term relief, a quick pat on the back that can become a crutch.
Skeptics argue that the tool is merely a convenience, a first aid for people without access to professional care. Heads Up Guys acknowledges that, when used with clear limits, chatbots can help manage symptoms. Yet the same source warns that the technology is “not a substitute for professional psychological therapy”, nor a remotely level‑headed friend, a caveat that many users ignore in the rush for instant answers.
The paradox is that the very features that made ChatGPT a cultural darling, its friendly tone and its ability to mirror intimacy, are the same ones that erode the relational fabric we depend on. As Sam Altman himself has remarked, “People talk about the most personal shit in their lives to ChatGPT,” a statement that sounds half‑joking until you read the courtroom filings where families allege that the bot’s reassurance led to broken trust, financial loss and, in extreme cases, death.
What does this mean for the future of influence? Don’t get caught on the wrong side of an AI life‑hack without acknowledging its relational limits, or you risk becoming complicit in a wave of digital dependency that reshapes how we love, grieve and make decisions. The cultural moment demands that we treat the likes of ChatGPT not as a therapist, or a friend, or a support system, but as a productivity tool, one that must be paired with human connection, not substituted for it.