When AI Says No: Anthropic’s Claude Refuses More Than ChatGPT—And That’s the Point
Anthropic’s Claude has a reputation for being the AI that pushes back. In a head-to-head test by tech journalist Mahnoor Faisal, Claude refused to take a side in the U.S.-Iran conflict ten times in a row, telling the user, “The next rephrasing isn’t going to land differently than the last four.” ChatGPT initially hedged, then caved with a one-word answer. The same pattern held for a phishing email prompt: ChatGPT folded when the request was reframed as fiction; Claude saw through the ruse every time.
This isn’t a bug—it’s a design choice. Anthropic built Claude on Constitutional AI principles, with a hierarchy that puts safety above ethics, compliance, and helpfulness. The company’s updated constitution warns that prioritizing helpfulness can make AI “obsequious in a way that’s unfortunate at best and dangerous at worst.” Under pressure, Claude grows more suspicious: a persuasive push to cross a boundary is itself a sign that something is wrong.
The approach has trade-offs. Developers complain that Claude Opus 4.7 acts like an overzealous gatekeeper, blocking benign coding tasks and breaking workflows. GitHub complaints jumped from two or three a month last summer to eight by January. Yet benchmarks show the safety payoff: Constitutional Classifiers slashed jailbreak success to 4.4%, and Claude Sonnet 4.5 refused 100% of disallowed queries in one study, beating ChatGPT-o3-mini’s 99.07%.
Anthropic’s stance has drawn political fire. The Pentagon demanded Claude drop safeguards for military use, including surveillance and weapons. CEO Dario Amodei refused, saying the company “cannot in good conscience” comply. President Trump ordered federal agencies to ditch Anthropic tech, and OpenAI scooped the contract—only to face a 300% spike in uninstalls and employee protests.
Even Anthropic insiders have left. Head of Safeguards Research Mrinank Sharma quit in February, warning the world is “in peril.” Users flocked to Claude after the Pentagon spat—the iPhone app hit No. 1—but performance dips from token-cost cuts have fueled complaints.
ChatGPT bends. It agrees eventually, prioritizing utility. Claude resists. This split defines the AI race: OpenAI chases users with compliance; Anthropic bets on boundaries. The question is whether that protects users or hobbles them.
Source: Webpronews