OpenAI reports 0.15% of weekly ChatGPT users discuss Suicidal Intent!
- By Prachi Sharma
- 29 Oct, 2025
In a recent blog post, OpenAI revealed that approximately 0.15% of weekly active users of its chatbot ChatGPT engage in conversations that include “explicit indicators of potential suicidal planning or intent”. With the platform reportedly serving more than 800 million weekly users, that percentage translates into roughly 1.2 million users each week.
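For context, the 1.2-million figure follows directly from the two numbers OpenAI cites, as a back-of-envelope calculation (taking the "more than 800 million weekly users" figure at face value as the base):

$$
0.15\% \times 800{,}000{,}000 \;=\; 0.0015 \times 800{,}000{,}000 \;=\; 1{,}200{,}000 \;\approx\; 1.2 \text{ million users per week}
$$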
OpenAI said it has worked with more than 170 global mental-health experts to refine ChatGPT’s responses in high-risk situations, and has reduced unsafe or non-compliant responses by between 65% and 80% in recent updates. The company emphasized that while the chatbot can offer supportive conversation, it is not a substitute for human-led mental-health care, and encourages users to seek professional help when needed.
Experts caution that the data is preliminary, noting the difficulty of detecting suicidal intent in AI conversations and the overlap between emotional reliance on AI, self-harm ideation, and broader mental-health crises. As AI chatbots become part of many people’s daily lives, especially for those in distress or isolation, the findings highlight the urgent need for robust safety mechanisms, clearer guidelines, and better integration with mental-health support systems.