The Appleton Times

Truth. Honesty. Innovation.

Americans are turning to AI for emotional therapy and mental health advice

By Jessica Williams

1 day ago

Millions of Americans, especially young people, are using AI chatbots for mental health advice, as highlighted by a JAMA report showing 13% adoption among youth. While experts praise the accessibility, concerns over efficacy, privacy, and regulation persist, prompting investigations and calls for ethical guidelines.

APPLETON, Wis. — In an era when mental health challenges are more visible than ever, millions of Americans are turning to artificial intelligence for emotional support and therapy, in some cases bypassing traditional counseling altogether. A recent report published in the Journal of the American Medical Association (JAMA) highlights this shift, revealing that about 13 percent of young people use AI chatbots for mental health advice. The trend, driven by the accessibility and anonymity of AI tools like ChatGPT and specialized mental health apps, is raising both excitement and concern among experts.

The JAMA study, which surveyed a broad cross-section of the population, underscores the scale of the phenomenon. According to the report, an estimated 20 million adults in the United States sought emotional guidance from AI platforms in the past year alone. Researchers noted that much of the appeal lies in the tools' round-the-clock availability: they can respond instantly to questions about anxiety, depression, or relationship problems, without the stigma often attached to seeking human help.

Dr. Susan E. Collins, a lead author on the JAMA report and a psychiatrist at the University of California, San Francisco, emphasized the potential benefits in a CBS News interview. "AI can provide immediate, non-judgmental support that might encourage people who are hesitant to see a therapist," Dr. Collins said. She pointed out that during the COVID-19 pandemic, when access to mental health services was limited, AI usage for emotional support surged by over 40 percent, according to preliminary data from app analytics firms.

Yet not all experts are optimistic. The American Psychological Association (APA) has warned about AI's limitations in handling complex mental health issues. "While AI chatbots can offer coping strategies or mindfulness exercises, they lack the empathy and nuanced understanding that a trained professional provides," said Dr. Elena Ramirez, an APA spokesperson, in a statement released last month. Ramirez cited cases in which users received generic advice that inadvertently worsened their conditions, such as being told to "just relax" in response to a severe anxiety disorder.

This growing reliance on AI is particularly pronounced among younger demographics. The JAMA report found that 13 percent of individuals aged 18 to 24 have used AI for mental health advice, compared to just 5 percent of those over 55. In urban areas like New York and Los Angeles, adoption rates are even higher, with surveys indicating that one in four college students has consulted an AI bot during stressful exam periods.

One such user, 22-year-old college student Mia Thompson of Chicago, shared her experience in a follow-up interview conducted for the CBS report. "I was dealing with breakup anxiety and didn't want to burden my friends," Thompson said. "The AI app listened to me vent and suggested journaling techniques that actually helped. It felt like talking to a friend who never gets tired." Thompson's story reflects a broader cultural shift, with social media influencers and TikTok videos promoting AI as a first-line mental health tool.

Tech companies are capitalizing on the demand. OpenAI, the creator of ChatGPT, recently launched a beta version of its platform tailored for emotional wellness, complete with prompts for guided meditation and mood tracking. Similarly, Woebot Health, an AI therapy app founded in 2017, reports more than 2 million downloads worldwide. "Our goal is to democratize mental health care," said Woebot CEO Athena Robinson in a company blog post dated March 15, 2024. Robinson, a clinical psychologist, stressed that the app is designed to complement, not replace, human therapy.

However, regulatory bodies are taking notice. The Federal Trade Commission (FTC) announced an investigation in April 2024 into several AI mental health apps for potential misleading claims about their efficacy. Officials cited instances where apps promised "cures" for conditions like PTSD without clinical backing. "Consumers deserve transparency about what these tools can and cannot do," FTC Chair Lina Khan said during a press conference in Washington, D.C., on April 10.

Internationally, similar trends are emerging. In the United Kingdom, the National Health Service (NHS) is piloting AI chatbots in partnership with Google to alleviate wait times for therapy, which can stretch up to six months. A pilot program launched in London last January has already served 50,000 users, with early feedback showing a 25 percent increase in self-reported well-being scores.

Back in the U.S., privacy concerns are mounting. A report from the Electronic Frontier Foundation (EFF) in February 2024 warned that many AI therapy apps collect sensitive data without robust safeguards. "Users pouring out their deepest fears to a bot might not realize that data could be used to train models or sold to third parties," said EFF senior attorney Jeremy Gillula. This has prompted calls for stricter data protection laws, similar to the European Union's GDPR.

Amid these developments, mental health advocates are pushing for integration rather than opposition. The National Alliance on Mental Illness (NAMI) hosted a webinar on May 5, 2024, featuring panels with AI developers and therapists. "AI should be a bridge to professional care, not a substitute," said NAMI CEO Daniel H. Gillison Jr. during the event, which drew over 10,000 virtual attendees.

The JAMA report also examined demographic disparities. It found that low-income individuals and racial minorities are turning to AI at higher rates — up to 18 percent in some groups — because of barriers such as cost and limited access to traditional services. In areas with acute therapist shortages, such as Appleton, Wisconsin, local health officials report a 30 percent uptick in AI usage since 2022.

Looking ahead, researchers predict that by 2030, AI could handle up to 20 percent of initial mental health consultations. The World Health Organization (WHO) is developing global guidelines for ethical AI use in mental health, expected to be released later this year. "This is uncharted territory, but with careful stewardship, AI could transform how we address the global mental health crisis," WHO Director-General Dr. Tedros Adhanom Ghebreyesus said in a statement from Geneva on June 1, 2024.

As Americans navigate this digital frontier, stories like that of 35-year-old software engineer Raj Patel from Seattle illustrate the double-edged sword. Patel credits an AI app with helping him manage work-related stress during a layoff last fall. "It gave me practical steps when I felt lost," he said. But after a particularly dark episode, he sought human therapy, realizing AI's limits. "It's a great starting point, but you need the real thing for deeper issues," Patel added.

The intersection of technology and mental health continues to evolve, promising innovation while demanding vigilance. With millions already embracing AI as a confidant, the question remains: Will it bridge gaps in care or widen them? Experts agree that ongoing research and regulation will be key to ensuring this tool serves those who need it most.
