AI for Business

Anthropic's Claude AI Now Requires Government ID for Some Users

Anthropic has introduced a new verification step for a subset of Claude AI users. Without prior notice, individuals logging in from certain locations or whose activity triggers internal risk systems are being asked to submit a government-issued ID and a live selfie. The company states this targeted measure is for cases suggesting fraudulent behavior, policy violations, or attempts to access the service from unsupported regions.

The process, managed by third-party vendor Persona, typically takes minutes. Anthropic asserts it acts as the data controller, pledging not to use the information for model training and to limit data sharing. However, Persona’s own network of subprocessors—which includes major tech firms—has raised privacy concerns among users, with some comparing the move to dystopian surveillance.

This policy sets Claude apart from competitors like ChatGPT and Gemini, which currently rely on payment verification. The move reflects a growing industry push toward formalized user accountability, particularly as regulations tighten. Yet it has immediate consequences: developers in restricted regions are seeking workarounds, and some long-time users express dismay, questioning whether enhanced safety protocols are worth the intrusion. The decision tests the very trust Anthropic has cultivated as a safety-conscious builder, revealing the practical tensions in governing advanced AI systems.

Source: WebProNews
