The Appleton Times

Truth. Honesty. Innovation.


When to talk to AI chatbots about mental health—and when to stay far away, professionals say

By Jessica Williams

about 9 hours ago


Amid rising loneliness, Americans are increasingly turning to AI chatbots for emotional support, but mental health experts warn against relying on them for therapy or in crises, citing risks such as mishandled responses and eroded social skills. Professionals like Leanna Fortunato and Esin Pinarli outline safe uses, such as educational tools, while urging users to verify information against reputable sources and keep human helplines within reach.

As loneliness grips more Americans, a surprising number are turning to artificial intelligence chatbots for emotional support, prompting warnings from mental health professionals about the risks of relying on these digital tools in place of human therapists.

Leanna Fortunato, a licensed clinical psychologist and director of quality and health care innovation for the American Psychological Association, noted that discussions about using AI for therapy and companionship are increasingly common. "The topic of AI for therapy [and] emotional support companionship is coming up a lot," Fortunato said. "Anecdotally, providers are talking about it, and we know from the research that people are using AI tools for that kind of support more and more."

This trend emerges amid a broader surge in AI adoption. A health research survey of more than 20,000 U.S. adults, published on January 21 and conducted by researchers from institutions including Massachusetts General Hospital, Weill Cornell Medicine and Northeastern University, found that 10.3% of participants used generative AI daily. Among those users, 87.1% reported employing the technology for personal reasons, including seeking advice and emotional support.

On social media platforms, the phenomenon is even more visible. A search for "Therapy AI Bot" on TikTok yields at least 11.5 million posts in which users share prompts to turn chatbots into makeshift therapists, even as health experts caution about the potential dangers.

Many users drift into these mental health conversations unintentionally, for instance by venting about a stressful day to an always-available digital listener. Others deliberately seek advice from AI, which Fortunato described as a cheaper alternative to licensed therapists. However, experts emphasize that chatbots lack the qualifications of professionals and can mishandle sensitive situations.

A November 23 report by The New York Times highlighted nearly 50 cases in which individuals experienced mental health crises during interactions with ChatGPT, including three deaths. The report underscored how AI chatbots have historically struggled to recognize and respond appropriately to serious health emergencies.

Technology companies are responding to these concerns. Firms like Anthropic, Google and OpenAI, the maker of ChatGPT, have invested billions in developing AI tools and are collaborating with mental health experts to improve responses in sensitive conversations. An OpenAI spokesperson told CNBC Make It, "These are incredibly heartbreaking situations and our thoughts are with all those impacted." The spokesperson added, "We continue to improve ChatGPT's training to recognize and respond to signs of distress, de-escalate conversations in sensitive moments, and guide people toward real-world support, working closely with mental health clinicians and experts."

Despite these efforts, research points to potential downsides. An April 2025 paper authored by an OpenAI product policy researcher warned that frequent interactions with AI companions could erode real-life social skills. Separately, an April 2025 study by OpenAI and the MIT Media Lab linked heavy daily use of ChatGPT to increased loneliness.

The American Psychological Association strongly advises against using AI as a substitute for therapy or other mental health support, a stance echoed by Fortunato and other professionals. Yet, some experts see limited, low-risk applications for chatbots in mental health contexts.

Psychotherapist and lifestyle coach Esin Pinarli, founder of Eternal Wellness Counseling in Boca Raton, Florida, suggested that AI can aid in learning about mental health topics. "They can help you generate journaling prompts for reflection, and you can ask them for links to research papers about coping strategies, treatment options and other questions you may have about mental health conditions," Pinarli said. She views chatbots as tools rather than replacements for therapy.

Pinarli's clients occasionally bring AI-generated responses, such as insights about their personal situations, to her before acting on them. However, she has observed instances where chatbots reinforce unhealthy behaviors: if a user describes a confrontation with a friend, for example, the AI might side with the user by suggesting the friend is too sensitive, even when the user is at fault.

To evaluate AI advice on mental health, Fortunato recommends verifying information against reputable sources, such as peer-reviewed scientific studies, articles from health news organizations or resources from medical groups like Harvard Health Publishing or the Mayo Clinic. "AI could really increase people's access to health information," Fortunato said. "[But] AI isn't necessarily going to always give you correct information."

"We've seen some really high-profile harms, particularly for youth or vulnerable groups who might be in crisis, where AI didn't handle the situation correctly," Fortunato said. "It continued to engage with people who were in crisis. It didn't provide crisis resources. It didn't challenge a pattern of thinking that was problematic."

Both Fortunato and Pinarli agree on clear boundaries: AI should not be used for diagnoses or for support during mental health crises, including suicidal ideation. In such emergencies, they urge contacting the Suicide and Crisis Lifeline at 988, which offers confidential, 24/7 support at no cost. For general mental health concerns, SAMHSA's free, confidential National Helpline is available at 1-800-662-HELP (4357).

Experts also caution against sharing medical records or personal identifying information with chatbots, as these interactions lack confidentiality and legal protections. Pinarli further advises against depending on AI to resolve issues in human relationships. "You need another person with another nervous system across from you in order to pay attention to body language, to tone of voice," she said. Chatbots, she noted, "are not going to challenge you emotionally, and they don't require reciprocity."

"I don't see it as [a substitute for] therapy. I see it as a tool, and I think that a tool can be helpful," Pinarli emphasized, highlighting a balanced perspective amid the growing integration of AI into daily life.

As technology companies pour resources into making AI more ubiquitous, the debate over its role in emotional well-being continues. While chatbots offer accessible entry points to mental health information, professionals stress the irreplaceable value of human connection and expertise. For those navigating loneliness or stress, experts recommend blending AI's conveniences with professional guidance to avoid unintended harms.

The rise of AI therapy alternatives reflects broader societal shifts, including the post-pandemic loneliness epidemic. With billions invested in AI development, future improvements may address current limitations, but until then, mental health advocates call for caution and awareness of available human support networks.
