Anthropic's 'Mythos' AI Model Leaked, Now Restricted to Vetted Tech and Government Partners
Anthropic has launched its new Claude Mythos Preview AI model under a veil of heightened secrecy and restricted access, a direct response to a significant internal data leak. The cybersecurity-focused AI is now available only to a handpicked consortium of vetted organizations, locking out the broader market immediately after its details were exposed online. This controlled rollout underscores the sensitive nature of the model and the operational damage caused by the prior security failure.
The select customer list reads like a who's who of technology and security powerhouses: Amazon, Apple, Microsoft, Broadcom, Cisco, and CrowdStrike. Furthermore, Anthropic confirmed it is in active discussions with the U.S. government regarding the model's use, signaling its potential application in national security and critical infrastructure domains. The launch follows the discovery last month of detailed project descriptions and internal documents for Mythos in a publicly accessible data cache, a leak originating from the San Francisco-based startup itself.
This sequence of events, a leak followed by a tightly gated release, places intense scrutiny on Anthropic's internal security protocols while highlighting the strategic value and perceived risks of advanced AI in cybersecurity. The model's confinement to an elite circle of corporate and government entities creates a two-tiered access landscape, raising questions about market fairness, the concentration of defensive capabilities, and the potential for these tools to create new asymmetries in digital security. The incident serves as a stark case study in how proprietary AI advancements are becoming assets too critical for standard commercial disclosure.