- OpenAI has rolled out new safety features for ChatGPT designed to detect escalating signs of self-harm or violence during ongoing conversations.
- The update follows lawsuits and government probes alleging ChatGPT mishandled dangerous user interactions, including cases tied to a 2025 mass shooting.
- The new system uses temporary “safety summaries” (short-lived context rather than permanent memory) to identify acute risks as a conversation develops.
OpenAI announced significant safety updates on Thursday, revealing that ChatGPT is now better equipped to recognize signs of escalating risk related to suicide, self-harm, and potential violence during conversations. The move comes as the company faces mounting legal challenges and regulatory scrutiny over how its chatbot handles users in distress.
In a blog post, the company explained that the improvements allow ChatGPT to analyze context that develops over time rather than treating each message in isolation. “People come to ChatGPT every day to talk about what matters to them—from everyday questions to more personal or complex conversations,” OpenAI wrote.
To that end, the AI now employs temporary “safety summaries” to capture relevant, safety-related context from earlier parts of a dialogue. “In sensitive conversations, context can matter as much as a single message,” the company stated, noting these summaries are short-term tools used only in serious situations.
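OpenAI has not published implementation details, but the general pattern it describes, scoring each new message in light of a short-lived summary of earlier safety-relevant turns rather than in isolation, can be sketched roughly. In the Python sketch below, every name and number (SafetyState, assess_message, the keyword list, the thresholds) is hypothetical and stands in for whatever trained classifier and heuristics the real system uses:

```python
# Minimal illustrative sketch of a rolling "safety summary" pattern.
# OpenAI has not published its implementation; all names, keywords, and
# thresholds here are hypothetical placeholders.

from dataclasses import dataclass, field


@dataclass
class SafetyState:
    """Short-lived, per-conversation safety context (not permanent memory)."""
    summary: str = ""                                  # rolling summary text
    flagged_turns: list = field(default_factory=list)  # safety-relevant turns


RISK_THRESHOLD = 0.8   # hypothetical cutoff for escalating a conversation
CONCERNING = {"hurt", "hopeless", "weapon"}  # toy keyword list, illustration only


def assess_message(message: str, safety_summary: str) -> float:
    """Hypothetical scorer: rates a message *given* earlier safety context.

    A real system would use a trained classifier; this toy version just
    counts concerning keywords and weights prior context upward.
    """
    hits = sum(word in message.lower() for word in CONCERNING)
    base = min(1.0, 0.5 * hits)
    # Escalating pattern: prior safety-relevant context raises the score.
    return min(1.0, base + (0.3 if safety_summary else 0.0))


def update_summary(state: SafetyState, message: str, score: float) -> None:
    """Fold safety-relevant context from this turn into the rolling summary."""
    if score > 0.3:  # hypothetical: retain only safety-relevant turns
        state.flagged_turns.append(message)
        state.summary = " | ".join(state.flagged_turns[-5:])  # bounded, short-term


def handle_turn(state: SafetyState, message: str) -> bool:
    """Return True if the conversation should be escalated to safety handling."""
    score = assess_message(message, state.summary)
    update_summary(state, message, score)
    return score >= RISK_THRESHOLD


# Usage: the state lives only for this session and is discarded afterward.
state = SafetyState()
for msg in ["What's a good pasta recipe?",
            "Lately everything feels hopeless.",
            "I keep thinking about ways to hurt myself."]:
    if handle_turn(state, msg):
        print(f"Escalate: acute risk detected in context -> {state.summary}")
```

In this toy version, the final message alone would score below the hypothetical threshold; only the accumulated context from the earlier turn pushes it over, which illustrates the kind of “risk that only becomes clear over time” the company describes.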
The company emphasized that this work, developed with mental health experts, focuses on acute scenarios such as suicide and harm to others. The updates nonetheless arrive amid intense legal pressure.
For instance, Florida Attorney General James Uthmeier launched an investigation in April tied to child safety concerns and the 2025 mass shooting at Florida State University. Meanwhile, a separate federal lawsuit alleges ChatGPT assisted the suspected gunman in that attack.
Furthermore, OpenAI and CEO Sam Altman were sued in California this week by the family of a student who died from an accidental overdose. The lawsuit claims ChatGPT encouraged dangerous drug use.
OpenAI acknowledged that helping ChatGPT recognize “risk that only becomes clear over time” remains a challenge. The company suggested similar safety methods could eventually expand, with careful safeguards, into other high-risk areas like biology or cyber safety.
