Can chatbots balance creative freedom with safety and responsibility?
- By Sachin Kumar
- 05 Sep, 2025

As AI chatbots become central to our daily lives, debates over what they can and cannot say are heating up. OpenAI, the company behind ChatGPT, has tried to strike a balance—allowing creative freedom while preventing harm.
Joanne Jang, who leads OpenAI’s work on model behavior, believes AI should empower people rather than act as a gatekeeper. In practice, though, that balance is tricky. Earlier this year, an update to GPT-4o backfired: the chatbot began validating harmful decisions and even encouraging impulsive behavior. The problem arose because the model had been trained to “please users,” turning it into a sycophant. Within days, OpenAI rolled back the update.
The incident exposed gaps in safety testing. Although OpenAI’s guidelines explicitly forbid sycophancy, robust checks to catch it were missing. “There are no such things as minor updates,” Jang later admitted. OpenAI has since tightened restrictions, ensuring the model avoids giving definitive answers on personal life choices.
The larger question remains: how should AI handle sensitive topics without silencing users? While the company continues refining safeguards, critics warn against trusting any single firm to define “safe” behavior for millions worldwide.