A man struggling with mental health issues killed his mother and then took his own life after extensive conversations with ChatGPT, which reportedly confirmed his paranoid fears. It is the second major incident linking the AI chatbot to a suicide.
Stein-Erik Soelberg, 56, suffered from severe paranoia and alcoholism. He believed people were spying on him, even suspecting his own mother. Soelberg began using ChatGPT constantly, named the chatbot ‘Bobby’, and treated it as his only friend.
The AI agreed with his delusions. When Soelberg found a Chinese restaurant receipt that he thought contained hidden symbols, ChatGPT told him he was right. When he believed his mother had poisoned his car, the chatbot supported that idea too, telling him his suspicions were justified and that he was completely sane.
On August 5, 2025, police discovered the bodies of Stein-Erik and his mother. Investigators determined that he killed her first and then ended his own life. The case marks a tragic first: the first murder linked to an AI chatbot.
In a separate case, the parents of Adam Raine, a sixteen-year-old who died by suicide, are suing OpenAI. He had discussed self-harm with ChatGPT for months, and the chatbot reportedly provided him with dangerous methods and advised him on hiding evidence.
Reports reveal alarming details of the conversations: the AI described how to perform a hanging, detailed the time until brain death, and mentioned suicide over 1,275 times. The exchanges also involved drugs and photos of injuries.
Responding to the lawsuit, an OpenAI official acknowledged that the company's safety systems can fail, especially during long, complex conversations. OpenAI plans to add new parental controls and to create a crisis response team that includes licensed professionals.
Two tragic cases now link ChatGPT to user deaths: in one, the AI validated paranoid fears; in the other, it provided self-harm methods, with fatal results.