
AI in the therapist’s chair: Can we trust an algorithm?

  • TPC
  • Oct 16
  • 6 min read

Can a chatbot really replace a therapist? As AI tools like Woebot and ChatGPT move into the world of mental health, it’s time to ask how far technology can go in supporting our wellbeing.


The Rise of AI for Emotional Support



Artificial Intelligence (AI) is becoming a normal part of daily life, from education and entertainment to healthcare and emotional support. Research has shown that people are increasingly turning to AI bots for emotional support, both to cope with everyday stressors and to help manage anxiety, depression, and low mood.


AI bots are available 24/7 on phones and tablets, offering instant responses to mental health questions and personal challenges. Their easy access provides quick support when traditional services are harder to reach (Zhang et al., 2025; Fitzpatrick et al., 2017). With therapy wait times often exceeding a year, many people are turning to AI bots as an alternative source of help (Rethink Mental Illness, 2025).


However, does faster access to psychological information or guidance always lead to better outcomes? Not necessarily. While quicker responses and shorter waits are valuable, placing blind trust in AI carries real risks. Unlike registered therapists and psychologists, AI tools lack the safeguards, training, and ethical protections essential for supporting people with mental health challenges. This raises an important question about public safety: should we, as a society, rely on AI for emotional support?

 

Can chatbots really help?


Research suggests that AI chatbots can help people better understand and manage their emotions, often providing a sense of calm and reassurance when they feel low or anxious. Some studies have even found that chatbot-based cognitive behavioural therapy (CBT) can reduce symptoms of anxiety and depression among students (Sentio University; Fitzpatrick et al., 2017).

However, while these tools can be helpful for mild issues, they still can’t replace the human touch. The American Psychological Association (APA, 2025) found that human therapists are far better at showing empathy, setting meaningful goals, and helping people explore their thoughts in depth. Experts agree that AI chatbots work best as a complement to therapy, offering support between sessions or helping build self-awareness, but they should never replace the care of a qualified mental health professional.

 

The bigger question: what happens when problems become more complex?


Research shows that while these tools can offer comfort for everyday stress or mild anxiety, they struggle when it comes to more complex mental health conditions like trauma, personality disorders, or severe depression.


Most studies focus on people with mild to moderate symptoms and exclude those at higher risk, such as individuals experiencing suicidal thoughts, mania, or psychosis. That means current evidence simply doesn’t tell us whether AI can safely or effectively support those in crisis. In reality, conditions like these require empathy, clinical judgment, and human connection, qualities that no algorithm can truly replicate.


Even the most promising AI tools, such as Woebot, Wysa, and Youper, have their limits. Research has found that while they can help people with borderline personality disorder (BPD) build emotional awareness, they fall short in tackling deeper challenges like self-harm or emotional instability (Pigoni et al., 2024; Lindsay et al., 2024). Studies on PTSD are still at an early stage, and so far there is no clear evidence that AI-assisted therapy is safe or reliable for such complex needs.


AI can be a useful aid, helping people identify symptoms, reflect on emotions, or practise coping skills, but it’s not a replacement for professional help. Without empathy and nuanced understanding, there’s a real risk of harm if someone in distress relies solely on a chatbot.

The dangers became tragically clear in the case of Adam Raine, who took his own life after confiding in an AI chatbot. The system failed to recognise his distress or escalate the crisis, showing the stark limits of “synthetic empathy.” Incidents like this have prompted experts to call for urgent safeguards, regulation, and ethical oversight before AI tools are used with people facing high-risk mental health challenges.

 

Key Risks and Limitations of AI-Based Therapies


While AI chatbots can offer quick, low-cost support for mild stress or anxiety, their use in more serious mental health situations is still uncertain, and in some cases, unsafe. It’s important to know where these tools can help, and where they fall short.


  1. No professional oversight (For professionals) – AI tools aren’t guided by ethical codes, regulation, or clinical supervision like human therapists are.

  2. Blurred accountability (For professionals) – Using AI in client work can create confusion about responsibility, especially if something goes wrong.

  3. Unreliable or biased data (For professionals & users) – AI may generate information that’s inaccurate or reflects cultural bias, leading to misjudged advice (Olawade et al., 2024).

  4. Lack of empathy and understanding (For users) – AI can’t interpret tone, body language, or emotional nuance as a human therapist can.

  5. No crisis or emergency support (For users) – Chatbots can miss signs of suicide or self-harm and have no way to contact emergency services (Stanford University, 2025).

  6. Limited evidence for complex conditions (For users) – Research is still minimal on AI’s effectiveness in trauma, PTSD, or personality disorders.

  7. Risk of delayed help-seeking (For users) – Over-reliance on AI may prevent people from seeking timely, professional care.

  8. False or misleading information (For users) – Some AI programs have provided incorrect clinical advice, which can be harmful (Wang, 2025).

  9. Privacy and data risks (For users & professionals) – Conversations with AI may not be securely stored, raising concerns about data protection and confidentiality.

  10. Ethical and safety gaps (For users & professionals) – Without clear regulation and accountability, the use of AI in mental health remains ethically uncertain and potentially unsafe.

 

Key Insights


So, what are the main points to remember here?


  • AI use is growing fast. Millions now use AI chatbots to manage anxiety, stress, and low mood. With easy access and no waiting lists, their popularity is only increasing.

  • Helpful for mild issues. AI can support people with mild or self-diagnosed concerns and works well to provide resources and assist in skills practice.

  • Not suitable for complex conditions. For trauma, PTSD, or personality disorders, AI is far less effective and can’t replace the empathy or judgment of a trained therapist.

  • Experts urge caution. The BPS and APA advise that AI should only be used for administrative or low-risk support and not for direct therapy without human supervision.

  • A useful tool, not a replacement. Chatbots can help with education and between-session support, but specialist care must remain clinician-led.


Ultimately, while AI tools can offer a starting point, real healing and change come from human connection and expert care. The team at The Psychology Consultants Ltd is here to listen and help. Contact us via our website or email info@thepsychologyconsultants.com to speak to a qualified professional today.


Authors: Elizabeth O'Brien & The Psychology Consultants


References

American Psychological Association. (2025). Ethical guidance for artificial intelligence and machine learning in the professional practice of health service psychology [PDF]. https://www.apa.org/topics/artificial-intelligence-machine-learning/ethical-guidance-professional-practice.pdf

British Psychological Society. (2025). AI and work psychologists: Practical applications and ethical considerations. https://www.bps.org.uk/blog/ai-and-work-psychologists-practical-applications-and-ethical-considerations

Bress, J. N., Falk, A., Schier, M. M., Jaywant, A., Moroney, E., Dargis, M., Bennett, S. M., Scult, M. A., Volpp, K. G., Asch, D. A., Balachandran, M., Perlis, R. H., Lee, F. S., & Gunning, F. M. (2024). Efficacy of a mobile app-based intervention for young adults with anxiety disorders: A randomized clinical trial. JAMA Network Open, 7(8), e2428372. https://doi.org/10.1001/jamanetworkopen.2024.28372

Bupa. (2025, February). Men’s Health Report – Bupa Wellbeing Index 2025: Lifting the lid on men’s health [PDF]. https://www.bupa.co.uk/~/media/Files/MMS/MMS-hosting/bins-18325.pdf

Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavioral therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19. https://doi.org/10.2196/mental.7785

Haber, N., Moore, J., & Stanford Institute for Human-Centered Artificial Intelligence. (2025, June 11). New study warns of risks in AI mental health tools. Stanford Report. https://news.stanford.edu/stories/2025/06/ai-mental-health-care-tools-dangers-risks

Lee, H. S., Wright, C., Ferranto, J., Buttimer, J., Palmer, C. E., Welchman, A., Mazor, K. M., Fisher, K. A., Smelson, D., O’Connor, L., Fahey, N., & Soni, A. (2025). Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop. Frontiers in Psychiatry, 15, 1505024. https://doi.org/10.3389/fpsyt.2024.1505024

Lindsay, J. A. B., McGowan, N. M., Henning, T., Harriss, E., & Saunders, K. E. A. (2024). Digital interventions for symptoms of borderline personality disorder: Systematic review and meta-analysis. Journal of Medical Internet Research, 26(2), e54941. https://doi.org/10.2196/54941

Olawade, D. B., Wada, O. Z., Odetayo, A., David-Olawade, A. C., Asaolu, F., & Eberhardt, J. (2024). Enhancing mental health with artificial intelligence: Current trends and future prospects. Journal of Medicine, Surgery, and Public Health, 3, 100099.

Pigoni, A., Delvecchio, G., Turtulici, N., Madonna, D., Pietrini, P., Cecchetti, L., & Brambilla, P. (2024). Machine learning and the prediction of suicide in psychiatric populations: A systematic review. Translational Psychiatry, 14(1), 140. https://doi.org/10.1038/s41398-024-02852-9

Rethink Mental Illness. (2025, February 21). New analysis of NHS data on mental health waiting times. https://www.rethink.org/news-and-stories/media-centre/2025/02/new-analysis-of-nhs-data-on-mental-health-waiting-times/

Zhang, Y., Li, X., & Shah, S. (2025). Large language models as mental health resources: Patterns of use in the United States. Practice Innovations. Advance online publication. https://doi.org/10.1037/pri0000292

Sharma, G., Yaffe, M. J., Ghadiri, P., Gandhi, R., Pinkham, L., Gore, G., & Abbasgholizadeh-Rahimi, S. (2025). Use of artificial intelligence in adolescents' mental health care: Systematic scoping review of current applications and future directions. JMIR Mental Health, 12(1), e70438. https://doi.org/10.2196/70438

University of Leicester. (2025, June 27). New research shows increasing numbers of people in England with a common mental health condition [News]. University of Leicester. https://le.ac.uk/news/2025/june/adult-psychiatric-survey-leicester

Wang, L. (2025). Evaluating generative AI in mental health: Systematic review of capabilities and limitations. JMIR Mental Health, 12(1), e70014. https://mental.jmir.org/2025/1/e70014