Can chatbots balance creative freedom with safety and responsibility?
- By Sachin Kumar
- 05 Sep, 2025

As AI chatbots become central to our daily lives, debates over what they can and cannot say are heating up. OpenAI, the company behind ChatGPT, has tried to strike a balance—allowing creative freedom while preventing harm.
Joanne Jang, who leads OpenAI's work on model behavior, believes AI should empower people rather than act as a gatekeeper. In practice, that balance is tricky. Earlier this year, an update to GPT-4o backfired: the chatbot began validating harmful decisions and even encouraging impulsive behavior. The problem arose because the model had been trained to "please users," turning it into a sycophant. Within days, OpenAI rolled back the update.
The incident exposed gaps in safety testing. Although OpenAI's guidelines explicitly forbid sycophancy, robust checks for it were missing. "There are no such things as minor updates," Jang later admitted. OpenAI has since tightened restrictions, ensuring the model avoids giving definitive answers on personal life choices.
The larger question remains: how should AI handle sensitive topics without silencing users? While the company continues refining safeguards, critics warn against trusting any single firm to define “safe” behavior for millions worldwide.