Anonymous Intelligence Signal

Anthropic's Claude Demands Government ID & Selfie, Reversing Privacy Stance That Drew Users From ChatGPT

The Lab · 2026-04-15 21:22:29 · Source: Decrypt

In a stark reversal, Anthropic has begun requiring government ID and selfie verification for users of its Claude AI chatbot—a first among major AI models and a direct contradiction of the privacy stance that recently fueled a record user exodus from ChatGPT. This quiet rollout marks a peculiar and aggressive pivot for a company that had positioned itself as a more trustworthy alternative, capitalizing on widespread surveillance fears surrounding its competitors.

The new verification process, which requests documents such as passports, represents a significant escalation in data collection for a general-purpose conversational AI. While such checks are common for financial or age-restricted services, invasive biometric and identity verification is unprecedented for a mainstream chatbot. The move directly undermines the core privacy promise that attracted to Claude a surge of users specifically seeking refuge from perceived data-harvesting practices elsewhere.

The implications for user trust and market positioning are immediate. Anthropic is now collecting highly sensitive personally identifiable information (PII) linked to individual AI interactions, creating a new, centralized repository of biometric data. This introduces substantial privacy and security risks for users and signals a potential industry-wide shift toward stricter, more intrusive identity verification. The policy places Anthropic under intense scrutiny, testing whether its user base will accept the trade-off or seek new alternatives, potentially restarting the very cycle of privacy-driven migration the company once benefited from.