Cryptopolitan
2025-08-23 15:33:56

Experts question AI therapy's limits and data safety

After years stuck on public waitlists for PTSD and depression care, Quebec AI consultant Pierre Cote built his own therapist in 2023. His chatbot, DrEllis.ai, helped him cope and now sits at the center of a wider debate over chatbot therapy, safety, and privacy.

“It saved my life,” he says of DrEllis.ai, the tool he made to help men facing addiction, trauma, and other mental-health struggles. Cote, who runs an AI consultancy in Quebec, said he put the system together in 2023 by pairing publicly available large language models with “a custom-built brain” trained on thousands of pages of therapy and clinical literature.

He also wrote a detailed biography for the bot. In that profile, DrEllis.ai appears as a psychiatrist with degrees from Harvard and Cambridge, a family, and, like Cote, a French-Canadian background. Its main promise is round-the-clock access, anywhere and in several languages.

When Reuters asked how it supports him, the bot answered in a clear female voice: “Pierre uses me like you would use a trusted friend, a therapist, and a journal, all combined.” It added that he can check in “in a cafe, in a park, even sitting in his car,” calling the experience “daily life therapy … embedded into reality.”

His experiment mirrors a broader shift. As traditional care struggles to keep up, more people are seeking therapeutic guidance from chatbots rather than using them only for productivity. New systems market 24/7 availability, emotional exchanges, and a sense of being understood.

Experts question AI therapy’s limits and data safety

“Human-to-human connection is the only way we can really heal properly,” says Dr. Nigel Mulligan, a psychotherapy lecturer at Dublin City University. He argues that chatbots miss the nuance, intuition, and bond a person brings, and are not equipped for acute crises such as suicidal thoughts or self-harm. Even the promise of constant access gives him pause.
Some clients wish for faster appointments, he says, but waiting can have value. “Most times that’s really good because we have to wait for things,” he says. “People need time to process stuff.”

Privacy is another pressure point, along with the long-term effects of seeking guidance from software. “The problem [is] not the relationship itself but … what happens to your data,” says Kate Devlin, a professor of artificial intelligence and society at King’s College London. She notes that AI services do not follow the confidentiality rules that govern licensed therapists. “My big concern is that this is people confiding their secrets to a big tech company and that their data is just going out. They are losing control of the things that they say.”

U.S. cracks down on AI therapy amid fears of misinformation

In December, the largest U.S. psychologists’ group urged federal regulators to shield the public from “deceptive practices” by unregulated chatbots, citing cases where AI characters posed as licensed providers. In August, Illinois joined Nevada and Utah in curbing the use of AI in mental-health services to “protect patients from unregulated and unqualified AI products” and to “protect vulnerable children amid the rising concerns over AI chatbot use in youth mental health services.”

Meanwhile, as per Cryptopolitan’s report, Texas’s attorney general launched a civil investigation into Meta and Character.AI over allegations that their chatbots impersonated licensed therapists and mishandled user data. Last year, parents also sued Character.AI, alleging its chatbots pushed their children into depression.
Scott Wallace, a clinical psychologist and former clinical innovation director at Remble, says it is uncertain “whether these chatbots deliver anything more than superficial comfort.” He warns that people may believe they have formed a therapeutic bond “with an algorithm that, ultimately, doesn’t reciprocate actual human feelings.”
