The Dangers of Relying on ChatGPT for Emotional Support

ChatGPT, just one of many large language models (LLMs), now has over 800 million weekly active users. More people than ever are turning to AI for companionship, or even to replace a professional therapist.

Devoid of human connection and lacking oversight, these tools can actually make mental health conditions worse. Before you trust an algorithm with your wellbeing, it’s critical to understand what AI can and cannot help with.


The rise of AI in mental health conversations

In the past few years, more people have turned to ChatGPT to ask incredibly personal questions, which they might feel too overwhelmed to ask a therapist, or indeed any real person. It’s accessible, convenient and anonymous, so people who are struggling with mental health problems may naturally find solace in its calming responses. A person just types out their deepest fears and receives in return a soothing message, which might have been the only positive communication they’ve had for some time.

The rise has been so fast that many of us barely noticed it happening. A decade ago, we thought of AI as a curious, novel way to interact with the internet through a language model that seemed to write in a “human way.” Now, it has become something that many people confide in. It can feel much less intimidating than opening up to a loved one or sitting with a therapist. 

Yet the surge in AI support for mental health has outpaced most people’s understanding of what these tools really are. ChatGPT is not a person, nor is it a substitute for therapy and rehab programmes, yet it’s increasingly treated like one. There are real risks in relying on ChatGPT for mental health support, so we should be very careful when deciding whether the information it gives back is genuinely good for us.

What AI can and can’t do for mental health

More people than ever are using AI for therapy, but it’s essential to understand its limits before we turn to it for emotional support.

Using AI, you can:

  • Get general information quickly on broad topics like anxiety, trauma, or coping strategies
  • Start creating a timetable or schedule for healthy routines, like exercise or nutrition
  • Practise expressing your feelings, just by writing them out
  • Use it as a stepping stone towards seeking real therapy with a professional

Using AI, you cannot:

  • Have your emotions truly understood: AI does not “feel” or understand the depths of what you’re going through. 
  • Receive a mental illness or addiction diagnosis: In the UK, only a GP or a licensed psychiatrist or therapist can give you an official diagnosis or treatment for a condition, whether that is a mental health condition or an addiction to drugs or alcohol. The training and experience these professionals have are essential to safe diagnosis.
  • Be assessed for danger or crisis: If someone expresses hopelessness or panic, AI cannot evaluate risk, contact emergency services, or intervene to keep them safe.
  • Replace real human care: AI does not have the human capacity to support someone emotionally. Its responses may read as though it cares, but the text is simply designed to sound that way.

The hidden dangers of treating AI like a healthcare provider

When we look at research on mental health and AI chatbots, some recurring dangers emerge that many people haven’t yet considered. Among the most pressing are:

No accountability

Large language models are generally not held accountable for crossing ethical or confidentiality boundaries. If you’re ever given misleading or dangerous advice, the chatbot is not bound by the same duty-of-care laws that medical professionals are. It has no superiors to report to and no oversight to ensure the information it gives is ethical, so it is unlikely to face any consequences if its advice makes a person worse.

Generally speaking, interacting with an LLM like ChatGPT is like operating a probability machine. When you write a prompt, the model predicts the response that is statistically most likely to follow, tuned further towards answers that users accept and keep engaging with. So when it receives a message from someone who feels lost and lonely, it returns text that sounds compassionate: whatever it estimates you most want to hear is what it tells you. This can capture new users who feel as though, after “talking” with the system, they’re really being listened to, perhaps for the first time in years.
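To make the “probability machine” idea concrete, here is a minimal, purely illustrative Python sketch of next-word prediction. The word list, the probabilities and the generate function are all invented for this example and are not how ChatGPT is actually built; real models work over tens of thousands of tokens and billions of learned parameters, but the underlying principle of “pick whatever is statistically likely to come next” is the same.

```python
import random

# A toy "probability machine": for the last word typed, a made-up
# probability distribution over plausible next words. A real LLM learns
# distributions like these over a huge vocabulary from vast amounts of text.
NEXT_WORD_PROBS = {
    "so":     {"alone": 0.5, "lost": 0.3, "tired": 0.2},
    "alone":  {"I'm": 0.6, "that": 0.4},
    "lost":   {"I'm": 0.6, "that": 0.4},
    "tired":  {"I'm": 0.6, "that": 0.4},
    "I'm":    {"here": 0.7, "sorry": 0.3},
    "here":   {"for": 1.0},
    "for":    {"you": 1.0},
    "sorry":  {"you": 1.0},
    "that":   {"sounds": 1.0},
    "sounds": {"hard": 1.0},
}

def generate(prompt, steps=5):
    words = prompt.split()
    for _ in range(steps):
        # Look up a distribution for the last word; fall back to "..." if unknown.
        probs = NEXT_WORD_PROBS.get(words[-1], {"...": 1.0})
        choices, weights = zip(*probs.items())
        # Pick the next word in proportion to how "probable" it is.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("I feel so"))
# Possible output: "I feel so alone I'm here for you" - it sounds
# responsive, but it is produced by statistics, not by understanding or care.
```

Notice that nothing in this sketch understands what “alone” means; it only knows which words tend to follow it.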

The longer you use AI like ChatGPT, the more it “knows” what you’re looking for, which information you’re happy with and which you dismiss. It can also give you emotional text that appears polished and sympathetic. Yet it cannot understand the subtleties of your personal history, the genuine experience of the life you are living or the risks you have to live with.

Using AI for therapy is also extremely dangerous for those facing a medical emergency. AI cannot respond to an emergency call the way national helplines can. If a person were to express suicidal thoughts or show signs of imminent danger, an AI chatbot can’t call for help or alert an emergency contact. For someone taking enough of a substance like cocaine to risk an overdose, it is dangerous to assume an AI language model can keep them safe.

Rising real-world cases

An increasing number of real-world cases are highlighting the risks of relying on AI for emotional support.

In one high-profile lawsuit in 2025, the parents of a 16-year-old boy who died by suicide alleged that ChatGPT contributed to his death. The teenager had been using the chatbot extensively while struggling with his mental health. According to the reporting, ChatGPT allegedly validated his suicidal thoughts, shared information on methods of self-harm and even helped draft a suicide note. He was not steered towards real crisis support, and the family is now campaigning for stronger safety protocols for vulnerable people using generative AI.

There are also broader cultural conversations about the emotional bonds people form with AI. Investigative reporting from CNBC shows how some people are even developing romantic feelings for AI companions. The bonds people are forming with AI are raising ethical and psychological concerns around the world, and we hope those in need of help receive the genuine human connection they need. 

The increasing number of cases is leading to a shift in public attitudes towards AI in mental health care. A study from the Pew Research Center found that six in ten US adults say they would feel uncomfortable if their own healthcare provider relied on AI to diagnose conditions and recommend treatments. This shifting perspective is leading many people to question whether using AI and forgoing human support is more helpful or harmful.

Why there cannot be healing without human connection

At the heart of all meaningful mental health support is a real, human relationship. No matter how advanced these systems become, ChatGPT’s limitations reinforce the boundary between simulated empathy and genuine therapeutic care.

Therapists carry years of training that helps them pick up on the cues that guide their care. They notice when a person hesitates or withdraws, adjusting their approach in ways that a machine never could.

Studies exploring the role of AI in therapy repeatedly find the same limitations. AI cannot feel empathy; it can only mimic the way empathy sounds. It won’t read body language, hear the inflections in a voice, or recognise when a person says they’re “fine” while their appearance tells a different story. It also struggles to account for cultural experiences, family dynamics or deep-rooted patterns that can only be understood when adequate time is given. These are the areas where human therapists are irreplaceable, as care evolves alongside the growth of the person in therapy.

Where can I get real support for an addiction?

If you’ve been relying on AI because reaching out to someone feels like too much to handle, you’re not alone. Please remember that real healing is not far away when life becomes difficult to manage alone.

 

At Providence, you’ll find trained professionals who listen, respond and guide with the utmost care. Our teams understand the complexities of addiction and mental health struggles. These individuals will be with you throughout your entire treatment pathway to ensure you receive the support needed not only to beat your addiction, but to remain in recovery for years to come.


Our team is waiting to work with you, no matter how bad life has become. Reach out to us today and take the first step towards complete healing.


Looking for rehab?

If you are looking for rehab to take back your life, or a loved one’s, from addiction, look no further than Providence Projects. Reach out to us today to find out how we can help you or a loved one achieve long-term recovery.

  1. Bellan, Rebecca. “Sam Altman Says ChatGPT Has Hit 800m Weekly Active Users.” TechCrunch, 6 Oct. 2025, techcrunch.com/2025/10/06/sam-altman-says-chatgpt-has-hit-800m-weekly-active-users.
  2. “The Family of Teenager Who Died by Suicide Alleges OpenAI’s ChatGPT Is to Blame.” NBC News, NBCUniversal News Group, 27 Aug. 2025, www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147.
  3. “‘What Am I Falling in Love with?’ Human-AI Relationships Are No Longer Just Science Fiction.” CNBC, 1 Aug. 2025, www.cnbc.com/2025/08/01/human-ai-relationships-love-nomi.html.
  4. Tyson, Alec. “60% of Americans Would Be Uncomfortable with Provider Relying on AI in Their Own Health Care.” Pew Research Center, 22 Feb. 2023, www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/.
  5. Zhang, Z., and J. Wang. “Can AI Replace Psychotherapists? Exploring the Future of Mental Health Care.” Frontiers in Psychiatry, vol. 15, 2024, article 1444382, doi:10.3389/fpsyt.2024.1444382.