AI for Business

As AI Agents Gain Power, Safety Disclosures Lag Behind

The rise of autonomous AI agents is one of the defining tech stories of 2026. Systems from companies like OpenClaw and Moltbook, along with OpenAI's advanced agent features, demonstrate a clear shift: AI is moving from offering suggestions to taking independent action. These agents can write code, manage workflows, and execute complex, multi-step tasks with minimal human oversight.

Yet a new study from the MIT AI Agent Index, which examined 67 deployed systems, reveals a concerning disparity. While developers enthusiastically promote their agents' capabilities, they are far less forthcoming about safety. The research found that roughly 70% of these agents provide technical documentation, but only 19% disclose a formal safety policy. Fewer than one in ten report the results of external safety evaluations.

This gap matters because of what these agents do. Unlike a chatbot, whose errors are confined to a conversation, an AI agent with access to files, email, or financial systems can cause real, cascading damage if something goes wrong. The autonomy that makes these agents useful also heightens the risk.

The study notes a consistent pattern: demos and capability benchmarks are shared publicly, while details on safety testing, internal risk procedures, and third-party audits are often kept private. As these digital actors integrate into sensitive areas like software engineering and data management, this lopsided transparency becomes a pressing issue. The technology is advancing rapidly, but the public view of its safeguards remains unclear.

Source: CNET
