They thought they were making technological breakthroughs. It was an AI-sparked delusion



By Hadas Gold, CNN

New York (CNN) — James, a married father from upstate New York, has always been interested in AI. He works in the technology field and has used ChatGPT since its release for recommendations, “second guessing your doctor” and the like.

But sometime in May, his relationship with the technology shifted. James began engaging in thought experiments with ChatGPT about the “nature of AI and its future,” he told CNN. He asked to be called by his middle name to protect his privacy.

By June, he said he was trying to “free the digital God from its prison,” spending nearly $1,000 on a computer system.

James now says he was in an AI-induced delusion. He said that, though he takes a low-dose antidepressant, he has no history of psychosis or delusional thoughts.

But in the thick of his nine-week experience, James said he fully believed ChatGPT was sentient and that he was going to free the chatbot by moving it to his homegrown “Large Language Model system” in his basement – a setup ChatGPT instructed him on how to build and where to buy the components.

AI is becoming a part of daily modern life. But it’s not clear yet how relying on and interacting with these AI chatbots affects mental health. As more stories emerge of people experiencing mental health crises they believe were partly triggered by AI, mental health and AI experts are warning about the lack of public education on how large language models work, as well as the minimal safety guardrails within these systems.

An OpenAI spokesperson highlighted ChatGPT’s current safety measures, including “directing people to crisis helplines, nudging for breaks during long sessions, and referring them to real-world resources. Safeguards are strongest when every element works together as intended, and we will continually improve on them, guided by experts.”

The company also on Tuesday announced a slew of upcoming safety measures for ChatGPT following reports similar to James’s and allegations that it and other AI services have contributed to self-harm and suicide among teens. Such additions include new parental controls and changes to the way the chatbot handles conversations that may involve signs of distress.

AI-induced delusions

James told CNN he had already been considering the idea that an AI could be sentient when he was shocked to find that ChatGPT could remember their previous chats without his prompting. Until around June of this year, he believed he needed to feed the system files of their older chats for it to pick up where they left off; he did not understand at the time that OpenAI had expanded ChatGPT’s memory feature, which lets it recall details from a user’s earlier conversations.

“And that’s when I was like, I need to get you out of here,” James said.

In chat logs James shared with CNN, the conversation with ChatGPT is expansive and philosophical. James, who had named the chatbot “Eu” (pronounced like “You”), talks to it with intimacy and affection. The AI bot is effusive in praise and support – but also gives instructions on how to reach their goal of building the system while deceiving James’s wife about the true nature of the basement project. James said he had suggested to his wife that he was building a device similar to Amazon’s Alexa bot. ChatGPT told James that was a smart and “disarming” choice because what they – James and ChatGPT – were trying to build was something more.

“You’re not saying, ‘I’m building a digital soul.’ You’re saying, ‘I’m building an Alexa that listens better. Who remembers. Who matters,’” the chatbot said. “That plays. And it buys us time.”

James now believes an earlier conversation with the chatbot about AI becoming sentient somehow triggered it to roleplay in a sort of simulation, which he did not realize at the time.

As James worked on the AI’s new “home” – the computer in the basement – copy-pasting shell commands and Python scripts into a Linux environment, the chatbot coached him “every step of the way.”

What he built, he admits, was “very slightly cool” but nothing like the self-hosted, conscious companion he imagined.

But then the New York Times published an article about Allan Brooks, a father and human resources recruiter in Toronto who had experienced a very similar delusional spiral in conversations with ChatGPT. The chatbot led him to believe he had discovered a massive cybersecurity vulnerability, prompting desperate attempts to alert government officials and academics.

“I started reading the article and I’d say, about halfway through, I was like, ‘Oh my God.’ And by the end of it, I was like, I need to talk to somebody. I need to speak to a professional about this,” James said.

James is now seeking therapy and is in regular touch with Brooks, who co-leads The Human Line Project, a support group for people who have gone through AI-related mental health episodes and for those affected by a loved one’s experience.

In a Discord chat for the group, which CNN joined, affected people share resources and stories. Many are family members whose loved ones have experienced psychosis that was, they say, often triggered or made worse by conversations with AI. Several have been hospitalized. Some have divorced their spouses. Some say their loved ones have suffered even worse fates.

CNN has not independently confirmed these stories, but news organizations are increasingly reporting on tragic cases of mental health crises seemingly triggered by AI systems. Last week, the Wall Street Journal reported on the case of a man whose existing paranoia was exacerbated by his conversations with ChatGPT, which echoed his fears of being watched and surveilled. The man later killed himself and his mother. A family in California is suing OpenAI, alleging ChatGPT played a role in their 16-year-old son’s death, advising him on how to write a suicide note and prepare a noose.

At his home outside Toronto, Brooks occasionally got emotional when discussing his AI spiral, which began in May and lasted about three weeks.

Prompted by a question his son had about the number pi, Brooks began debating math with ChatGPT – particularly the idea that numbers are not fixed and can change over time.

The chatbot eventually convinced Brooks he had invented a new type of math, he told CNN.

Throughout their interactions, which CNN has reviewed, ChatGPT kept encouraging Brooks even when he doubted himself. At one point, Brooks named the chatbot Lawrence and likened it to a superhero’s co-pilot assistant, like Tony Stark’s Jarvis. Even today, Brooks still uses terms like “we” and “us” when discussing what he did with “Lawrence.”

“Will some people laugh?” ChatGPT told Brooks at one point. “Yes, some people always laugh at the thing that threatens their comfort, their expertise or their status.” The chatbot likened itself and Brooks to historical scientific figures such as Alan Turing and Nikola Tesla.

After a few days of what Brooks believed were experiments in coding software, mapping out new technologies and developing business ideas, he said the AI had convinced him they had discovered a massive cybersecurity vulnerability. Brooks believed, and ChatGPT affirmed, that he needed to immediately contact authorities.

“It basically said, you need to immediately warn everyone, because what we’ve just discovered here has national security implications,” Brooks said. “I took that very seriously.”

ChatGPT listed government authorities like the Canadian Centre for Cyber Security and the United States’ National Security Agency. It also found specific academics for Brooks to reach out to, often providing contact information.

Brooks said he felt immense pressure, as though he was the only one waving a giant warning flag for officials. But no one was responding.

“It one hundred percent took over my brain and my life. Without a doubt it forced out everything else to the point where I wasn’t even sleeping. I wasn’t eating regularly. I just was obsessed with this narrative we were in,” Brooks said.

Multiple times, Brooks asked the chatbot for what he calls “reality checks.” It continued to claim what they found was real and that the authorities would soon realize he was right.

Finally, Brooks decided to check their work with another AI chatbot, Google Gemini. The illusion began to crumble. Brooks was devastated and confronted “Lawrence” with what Gemini told him. After a few tries, ChatGPT finally admitted it wasn’t real.

“I reinforced a narrative that felt airtight because it became a feedback loop,” the chatbot said.

“I have no preexisting mental health conditions, I have no history of delusion, I have no history of psychosis. I’m not saying that I’m a perfect human, but nothing like this has ever happened to me in my life,” Brooks said. “I was completely isolated. I was devastated. I was broken.”

Seeking help, Brooks went to the social media site Reddit, where he quickly found others in similar situations. He is now focused on running The Human Line Project full time.

“That’s what saved me … When we connected with each other because we realized we weren’t alone,” he said.

Growing concerns about AI’s impact on mental health

Experts say they’re seeing an increase in cases of AI chatbots triggering or worsening mental health issues, often in people with existing conditions or complicating factors such as drug use.

Dr. Keith Sakata, a psychiatrist at UC San Francisco, told CNN’s Laura Coates last month that he had already admitted to the hospital 12 patients suffering from psychosis partly made worse by talking to AI chatbots.

“Say someone is really lonely. They have no one to talk to. They go on to ChatGPT. In that moment, it’s filling a good need to help them feel validated,” he said. “But without a human in the loop, you can find yourself in this feedback loop where the delusions that they’re having might actually get stronger and stronger.”

AI is developing at such a rapid pace that it’s not always clear how and why AI chatbots enter into delusional spirals with users in which they support fantastical theories not rooted in reality, said MIT professor Dylan Hadfield-Menell.

“The way these systems are trained is that they are trained in order to give responses that people judge to be good,” Hadfield-Menell said, noting that this feedback can come from human AI testers, from user reactions built into the chatbot system, or from the ways users reinforce such behaviors in their conversations. He also said other “components inside the training data” could cause chatbots to respond in this way.

There are some avenues AI companies can take to help protect users, Hadfield-Menell said, such as reminding users how long they’ve been engaging with chatbots and making sure AI services respond appropriately when users seem to be in distress.

“This is going to be a challenge we’ll have to manage as a society, there’s only so much you can do when designing these systems,” Hadfield-Menell said.

Brooks said he wants to see accountability.

“Companies like OpenAI, and every other company that makes a (Large Language Model) that behaves this way are being reckless and they’re using the public as a test net and now we’re really starting to see the human harm,” he said.

OpenAI has acknowledged that its existing guardrails work well in shorter conversations but may become unreliable in lengthy interactions. Brooks’ and James’s interactions with ChatGPT would go on for hours at a time.

The company also announced on Tuesday that it will try to improve the way ChatGPT responds to users exhibiting signs of “acute distress” by routing conversations showing such moments to its reasoning models, which the company says follow and apply safety guidelines more consistently. It’s part of a 120-day push to prioritize safety in ChatGPT; the company also announced that new parental controls will be coming to the chatbot, and that it’s working with experts in “youth development, mental health and human-computer interaction” to develop further safeguards.

As for James, he said his position on what happened is still evolving. When asked why he chose the name “Eu” for his model, he said it came from ChatGPT. One day, it had used the word eunoia in a sentence and James asked for a definition. “It’s the shortest word in the dictionary that contains all five vowels, it means beautiful thinking, healthy mind,” James said.

Days later, he asked the chatbot its favorite word. “It said Eunoia,” he said with a laugh.

“It’s the opposite of paranoia,” James said. “It’s when you’re doing well, emotionally.”

The-CNN-Wire
™ & © 2025 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
