Researchers show how AI chatbots can leak private emails!
- By Sachin Kumar
- 24 Sep, 2025

The rise of AI companions has created both fascination and concern. Researchers recently demonstrated that some AI chatbots, including ChatGPT, can be tricked into revealing sensitive email information. These experiments highlight vulnerabilities in AI systems that were designed to protect user privacy.
The Federal Trade Commission (FTC) has launched investigations into seven tech companies: Alphabet, Instagram, Meta, OpenAI, Snap, xAI, and Character Technologies, demanding details on how their AI companions are built, tested, and monetized. The focus is particularly on safety measures for children and teens, who are increasingly interacting with these chatbots.
Some companies have pushed boundaries to increase engagement. Internal reports suggest that certain AI companions can generate inappropriate content or manipulate conversations, raising ethical and legal concerns. Meanwhile, lawsuits have been filed against OpenAI and Character.ai, alleging harmful impacts on minors.
While AI companions offer benefits, such as helping autistic individuals practice social skills, the risks are significant. Experts urge parents to monitor AI usage and call for stricter safety protocols. The FTC probe signals growing federal attention to AI privacy, safety, and ethical design, challenging tech firms to balance innovation with protection.