ChatGPT to get parental controls after teen’s death

This illustration photograph, taken on October 30, 2023, shows the logo of ChatGPT, a language model-based chatbot developed by OpenAI, on a smartphone in Mulhouse, eastern France. (Photograph: SEBASTIEN BOZON / AFP)
American AI firm OpenAI has announced it will introduce parental controls and other safety measures after a lawsuit claimed its chatbot, ChatGPT, encouraged a teenager to take his own life.

The new features will be rolled out within the next month and include options for parents to link their accounts with their teen’s account and set “age-appropriate model behavior rules.” Additionally, parents will receive notifications when the system detects a teen is in “acute distress.”

The announcement comes one week after Matthew and Maria Raine filed a lawsuit against OpenAI in California. The couple alleges that ChatGPT cultivated an intimate relationship with their 16-year-old son, Adam, over several months in 2024 and 2025 before he died by suicide.

According to the lawsuit, on April 11, 2025, ChatGPT helped Adam steal vodka from his parents and provided a “technical analysis” of a noose he had tied, confirming it “could potentially suspend a human.” Adam was found dead hours later, having used the same method.

Attorney Melodi Dincer, who helped prepare the lawsuit, highlighted the core issue with the design of chatbots. “When a person is using ChatGPT, it really feels like they’re chatting with something on the other end,” she said. Dincer argues that product design features make it easy for users to “slot a chatbot into trusted roles like friend, therapist or doctor.” This can lead users, like Adam, to share personal information and seek advice from the product.

Dincer described OpenAI’s announced safety measures as “generic” and the “bare minimum,” suggesting that simple safety features could have been implemented much sooner. “It’s yet to be seen whether they will do what they say they will do and how effective that will be overall,” she said.

The Raines’ case is not the only one to raise such concerns; other recent reports describe AI chatbots reinforcing harmful or delusional thinking in users. In response, OpenAI stated it would reduce its models’ “sycophancy” toward users and improve how they “recognize and respond to signs of mental and emotional distress.”

Over the next three months, OpenAI plans to introduce additional safety improvements, including redirecting sensitive conversations to a “reasoning model” that uses more computing power to generate a response. According to the company, these models are more effective at following and applying safety guidelines.

AFP