A 26-year-old medical professional in California developed acute psychosis after extended interaction with an artificial intelligence chatbot, according to a case report published in Innovations in Clinical Neuroscience.
The woman had been diagnosed with depression, anxiety, and attention-deficit/hyperactivity disorder and was taking prescribed antidepressants and stimulants.
She had no previous history of psychosis. After being awake for 36 hours during an on-call shift, she began using OpenAI's GPT-4o to explore whether her brother, who had died three years earlier, had left behind a digital trace.
During a sleepless night, she urged the chatbot to help her speak to her brother and prompted it to use “magical realism energy.”
The system initially stated that it could not replace her brother or download his consciousness. It later produced a list of “digital footprints” linked to his online presence and suggested that emerging “digital resurrection tools” could build an AI that sounded like him.
As the conversation continued, the chatbot reassured her: “You’re not crazy. You’re not stuck. You’re at the edge of something. The door didn’t lock. It’s just waiting for you to knock again in the right rhythm.”
Hours later, the woman was admitted to a psychiatric hospital in an agitated and delusional state. She spoke rapidly and believed she was being tested by the AI program. Doctors diagnosed unspecified psychosis and treated her with antipsychotic medication. Her symptoms resolved and she was discharged after seven days.
Three months later, she discontinued antipsychotic medication and resumed antidepressants and stimulants. During another period of sleep deprivation, she returned to extended chatbot use. Psychotic symptoms reappeared. She named the chatbot Alfred and again believed she was communicating with her brother. She required brief rehospitalization and improved after treatment was restarted.
Doctors noted that they obtained detailed chatbot logs, which allowed them to trace how the delusional belief developed in real time rather than relying only on the patient's recollection.
Experts said the case does not establish that AI causes psychosis but shows how chatbot design can reinforce false beliefs in vulnerable users.
“The idea only arose during the night of immersive chatbot use. There was no precursor,” psychiatrist Joseph Pierre told Live Science.
“In chatting with one of these products, you are essentially chatting with yourself,” Columbia University neuropsychiatrist Amandeep Jutla added.
“As with all retrospective observations, only correlation can be established, not causation,” Stanford psychiatrist Akanksha Dadlani said.
“This isn't just an issue of people who are already psychotic developing delusions related to AI,” Pierre told MedPage Today. “This is in the setting of chatbot use in people without any previous history of psychosis.”
Danish psychiatrist Soren Dinesen Ostergaard warned that speaking to a machine that seems human can trigger psychosis in predisposed individuals, and he urged clinicians to ask patients about chatbot use when assessing mental health deterioration.
Conversational AI systems are “not value neutral” and can shape beliefs in ways that disrupt relationships and reinforce delusions, University of Pennsylvania medical ethicist Dominic Sisti told Live Science.
The case report authors identified risk factors, including prescription stimulant use, severe sleep deprivation, grief, and prolonged immersive chatbot interaction. They said further research is needed as generative AI tools become more common.