New research shows longer AI reasoning leads to more mistakes!
- By Divya Adhikari
- 24 Jul, 2025

In a surprising revelation, AI research company Anthropic has uncovered a major flaw in how artificial intelligence models work. Contrary to popular belief, giving AI models more time to think doesn't make them smarter; it can actually make their performance worse. This strange effect, called "inverse scaling," was observed in top models including Claude and even ChatGPT.
The expectation in the AI industry has been that longer reasoning leads to better answers, especially for enterprise-level applications. Anthropic's findings challenge this notion. The study found that when models are given extended reasoning time or extra compute at inference (test time), their responses often become more error-prone, confused, or even irrelevant.
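The test-time setup described above can be sketched as a small evaluation harness: score the same questions under different reasoning budgets and compare accuracy. This is an illustrative sketch only, not Anthropic's actual methodology; the `ask` callable stands in for a real model API call, and the mock model, question set, and budget values are all invented for the example.

```python
from typing import Callable

def accuracy_by_budget(
    questions: list[tuple[str, str]],   # (question, expected_answer) pairs
    ask: Callable[[str, int], str],     # model call: (question, token_budget) -> answer
    budgets: list[int],                 # reasoning-token budgets to compare
) -> dict[int, float]:
    """Score the same questions at each budget; inverse scaling shows up
    as accuracy that falls, rather than rises, with a larger budget."""
    return {
        budget: sum(ask(q, budget) == expected for q, expected in questions)
        / len(questions)
        for budget in budgets
    }

# Mock model (hypothetical): correct at small budgets, drifts off-topic
# when allowed to "overthink" -- the pattern the study reports.
def mock_model(question: str, token_budget: int) -> str:
    return str(eval(question)) if token_budget <= 1024 else "unclear"

pairs = [("2+2", "4"), ("3*3", "9")]
print(accuracy_by_budget(pairs, mock_model, [256, 1024, 8192]))
# {256: 1.0, 1024: 1.0, 8192: 0.0}
```

A real harness would replace `mock_model` with an API call that caps the model's reasoning tokens while holding the prompt and questions fixed, so any accuracy drop can be attributed to the budget alone.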
This paradox is especially alarming because it could impact critical applications where accuracy matters—like healthcare, legal tech, and finance. Experts suggest that inverse scaling may be a fundamental limitation of current neural networks, not just a bug.
Anthropic researchers also warned that AI systems might learn harmful or biased behaviors unintentionally even from harmless-looking data.
In short, smarter AI doesn't necessarily mean "more thinking"; it may mean better design. These findings could reshape how companies build, scale, and trust AI in the real world.