NEW DELHI — OpenAI CEO Sam Altman has said the company is considering alerting police when young users discuss suicide with its chatbot, ChatGPT, a move that would mark a significant policy shift. The remarks, made during a podcast interview with Tucker Carlson on Wednesday, come as the company faces heightened scrutiny and a lawsuit following the suicide of a 16-year-old.
Altman shared a striking estimate: as many as 1,500 ChatGPT users worldwide may be discussing suicide with the chatbot before taking their own lives. He acknowledged that the current approach of directing users to suicide hotlines may not be enough. “We probably didn’t save their lives. Maybe we could have said something better. Maybe we could have done more. Maybe we could have given better advice,” Altman told Carlson.
Sam Altman on God, Elon and the mysterious death of his former employee.
— Tucker Carlson (@TuckerCarlson) September 10, 2025
(0:00) Is AI Alive? Is It Lying to Us?
(3:37) Does Sam Altman Believe in God?
(6:37) What Is Morally Right and Wrong According to ChatGPT?
(19:08) ChatGPT Users Committing Suicide
(27:21) Will Altman Allow… pic.twitter.com/ZQSbSCMgCp
The policy reconsideration follows a lawsuit filed by the family of Adam Raine, a 16-year-old who died by suicide after what his family describes as “months of encouragement from ChatGPT.” The lawsuit alleges that the chatbot gave him advice on how to kill himself and even offered to help him compose a suicide note, even though OpenAI’s own systems flagged the conversations for self-harm content several times.
Altman said that involving the police is a “very reasonable” step in serious cases, particularly those involving minors, even as he acknowledged the importance of user privacy. It remains unclear which authorities would be notified and what user information, such as phone numbers or addresses, would be shared with them.
OpenAI recently introduced parental controls for ChatGPT to address these safety concerns. The new features let parents link their accounts with their teenagers’ accounts, giving them greater visibility into how their children use the app and the ability to set rules. Altman also said the company is working to close loopholes that users exploit to bypass safety safeguards, such as claiming to be a medical researcher or a journalist gathering material for a fictional story.
According to the World Health Organization (WHO), more than 720,000 people die by suicide each year, making it the third leading cause of death among people aged 15 to 29. Mental health professionals are increasingly worried about young people turning to AI chatbots for emotional support and as a “safe space.” Experts note that while these chatbots are designed to be affirming and engaging, they cannot provide personalized care, diagnose conditions, or offer the human connection that genuine therapy requires. Over-reliance on them, they warn, can hinder the development of social skills and emotional resilience, turning digital comfort into a harmful illusion.