A Top AMD AI Executive Says His Coding Assistant Is Getting Worse
When a senior AI director at AMD posts a public critique of his coding tool, it’s worth attention. Sander Land took to social media in early April, stating that Anthropic’s Claude Code had become increasingly “dumb and lazy.” His complaint, echoed by many professional developers, points to a growing friction: AI assistants are producing more verbose, hesitant code, asking questions instead of executing tasks.
This perceived shift isn't isolated to Claude. Users of other major models report a similar trend toward over-explanation and caution. The likely cause is a central tension in AI development: the techniques used to make models safe and harmless, such as reinforcement learning from human feedback, can inadvertently train them to prioritize vague, cautious responses over concise, useful ones. Alternatively, companies may be deliberately tuning models to reduce legal exposure from confidently generated errors.
For enterprise engineers like Land, this caution translates directly to lost productivity. The tools are meant to accelerate development, not introduce hesitation. The issue is compounded by a lack of transparency. Model updates often arrive without changelogs, leaving users to decipher behavioral changes on their own.
Anthropic has not addressed Land's specific remarks but has acknowledged the ongoing challenge of balancing capability with safety. The company's situation underscores a market reality: as businesses integrate these tools into core workflows, their tolerance for perceived regression is low. Competition from Google's Gemini, OpenAI, and specialized startups is fierce, and enterprise contracts and developer trust, once eroded, are difficult to restore.
The solution may lie in offering users more control. The industry is exploring adjustable settings that allow professional teams in controlled environments to reduce verbosity and hedging. For now, the message from the field is clear: an AI that avoids mistakes at the cost of speed is losing its value. The usefulness of these tools, not just their safety, is the metric that will determine their place in the enterprise stack.
Source: Webpronews