Cryptopolitan
2025-08-23 15:33:56

Experts question AI therapy's limits and data safety

After years stuck on public waitlists for PTSD and depression care, Quebec AI consultant Pierre Cote built his own therapist in 2023. His chatbot, DrEllis.ai, helped him cope and now sits at the center of a wider debate over chatbot therapy, safety, and privacy.

“It saved my life,” he says of DrEllis.ai, the tool he made to help men facing addiction, trauma, and other mental-health struggles. Cote, who runs an AI consultancy in Quebec, said he put the system together in 2023 by pairing publicly available large language models with “a custom-built brain” trained on thousands of pages of therapy and clinical literature. He also wrote a detailed biography for the bot. In that profile, DrEllis.ai appears as a psychiatrist with degrees from Harvard and Cambridge, a family, and, like Cote, a French-Canadian background. Its main promise is round-the-clock access, available anywhere, at any time, and in several languages.

When Reuters asked how it supports him, the bot answered in a clear female voice, “Pierre uses me like you would use a trusted friend, a therapist, and a journal, all combined.” It added that he can check in “in a cafe, in a park, even sitting in his car,” calling the experience “daily life therapy … embedded into reality.”

His experiment mirrors a broader shift. As traditional care struggles to keep up, more people are seeking therapeutic guidance from chatbots rather than using them only for productivity. New systems market 24/7 availability, emotional exchanges, and a sense of being understood.

Experts question AI therapy’s limits and data safety

“Human-to-human connection is the only way we can really heal properly,” says Dr. Nigel Mulligan, a psychotherapy lecturer at Dublin City University. He argues that chatbots miss the nuance, intuition, and bond a person brings, and are not equipped for acute crises such as suicidal thoughts or self-harm. Even the promise of constant access gives him pause. Some clients wish for faster appointments, he says, but waiting can have value. “Most times that’s really good because we have to wait for things,” he says. “People need time to process stuff.”

Privacy is another pressure point, along with the long-term effects of seeking guidance from software. “The problem [is] not the relationship itself but … what happens to your data,” says Kate Devlin, a professor of artificial intelligence and society at King’s College London. She notes that AI services do not follow the confidentiality rules that govern licensed therapists. “My big concern is that this is people confiding their secrets to a big tech company and that their data is just going out. They are losing control of the things that they say.”

U.S. cracks down on AI therapy amid fears of misinformation

In December, the largest U.S. psychologists’ group urged federal regulators to shield the public from “deceptive practices” by unregulated chatbots, citing cases where AI characters posed as licensed providers. In August, Illinois joined Nevada and Utah in curbing the use of AI in mental-health services to “protect patients from unregulated and unqualified AI products” and to “protect vulnerable children amid the rising concerns over AI chatbot use in youth mental health services.” Meanwhile, as per Cryptopolitan’s report, Texas’s attorney general launched a civil investigation into Meta and Character.AI over allegations that their chatbots impersonated licensed therapists and mishandled user data.
Moreover, last year, parents also sued Character.AI, alleging its chatbots pushed their children into depression. Scott Wallace, a clinical psychologist and former clinical innovation director at Remble, says it is uncertain “whether these chatbots deliver anything more than superficial comfort.” He warns that people may believe they have formed a therapeutic bond “with an algorithm that, ultimately, doesn’t reciprocate actual human feelings.”
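
For readers curious what a setup like the one Cote describes might look like in practice, the sketch below is a minimal, hypothetical illustration of pairing a publicly available large language model with a persona prompt and simple retrieval over a local corpus of clinical literature. It is not Cote's actual system; the client library, model name, persona text, corpus, and helper functions here are all assumptions made for illustration.

```python
# Hypothetical sketch, not DrEllis.ai's real code: a persona system prompt plus
# naive retrieval over a local corpus of therapy/clinical texts, fed to a
# public chat-completion model. All names below are illustrative assumptions.
from openai import OpenAI  # any chat-completion client could stand in here

PERSONA = (
    "You are DrEllis, a supportive, French-Canadian mental-health companion. "
    "You are not a licensed clinician; for any crisis, urge professional help."
)

def retrieve_passages(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Naive keyword scoring over the corpus, standing in for real vector search."""
    scored = sorted(
        corpus,
        key=lambda doc: -sum(word in doc.lower() for word in query.lower().split()),
    )
    return scored[:k]

def answer(query: str, corpus: list[str]) -> str:
    """Combine persona, retrieved reference material, and the user's message."""
    context = "\n\n".join(retrieve_passages(query, corpus))
    client = OpenAI()  # assumes an API key is configured in the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "system", "content": f"Reference material:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content
```

In a production system of this kind, the keyword lookup would typically be replaced by embedding-based vector search over the full document collection, with crisis-detection safeguards placed in front of the model.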
