Anthropic's Mythos AI Model Accessed by Unauthorized Users
Anthropic's highly restricted Mythos AI model has been accessed by a small group of unauthorized users, according to a Bloomberg report. The incident involves one of the company's most powerful and potentially dangerous AI systems, which was designed specifically for cybersecurity applications.
The Security Incident
The unauthorized access occurred on April 7th, 2026 — the same day Anthropic publicly announced it was releasing Mythos to a limited set of companies for testing. Members of a private online forum gained entry to the system using a combination of tactics, including credentials from a third-party contractor who worked with Anthropic and publicly available internet research tools.
The contractor, who spoke to Bloomberg anonymously, said the group used information from a recent Mercor data breach to make "an educated guess" about where Mythos was hosted online. Because Anthropic's other AI models follow a consistent format, the group was able to deduce where the restricted system could be accessed.
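To illustrate the kind of "educated guess" described above: if a vendor's published model identifiers follow a predictable pattern, an unreleased model's identifier can often be generated mechanically from its codename. The sketch below is purely hypothetical; the identifier pattern, example IDs, and date are invented for illustration and are not Anthropic's actual naming scheme or API.

```python
# Hypothetical sketch of naming-convention guessing. All identifiers
# below are invented examples, not real model IDs or endpoints.

# Pattern observed from a vendor's *published* model identifiers,
# e.g. "<vendor>-<name>-<tier>-<YYYYMMDD>" (assumed for this sketch).
KNOWN_IDS = [
    "vendor-alpha-preview-20250101",
    "vendor-beta-preview-20250601",
]

def candidate_ids(codename: str, dates: list[str]) -> list[str]:
    """Generate plausible identifiers for an unreleased model by
    reusing the observed pattern with a leaked or guessed codename."""
    return [f"vendor-{codename}-preview-{d}" for d in dates]

# A guesser would try candidates around a known announcement date.
guesses = candidate_ids("mythos", ["20260407"])
print(guesses)  # ['vendor-mythos-preview-20260407']
```

The point is not any specific string but the general weakness: predictable naming conventions let outsiders enumerate unannounced resources, which is why security guidance often pairs access control with non-guessable identifiers.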
What is Mythos?
Claude Mythos Preview is one of Anthropic's most advanced and sensitive AI systems. When directed by users, the model can identify and exploit security vulnerabilities in every major operating system and web browser. Given these capabilities, Anthropic has intentionally restricted access and currently has no plans for a public release.
Official access to Mythos is limited to select organizations through Project Glasswing, Anthropic's cybersecurity initiative. Current authorized users include major technology companies such as:
- Nvidia
- Amazon Web Services
- Apple
- Microsoft
Government agencies have also expressed interest in accessing the technology for national security purposes.
Extent of the Breach
According to Bloomberg's investigation, the unauthorized group has been using Mythos regularly since gaining access. Members provided the publication with screenshots and a live demonstration as evidence of their access. However, they reportedly avoided using the model for actual cybersecurity testing to minimize the risk of detection by Anthropic.
The group operates within a Discord channel dedicated to discovering information about unreleased AI models. Beyond Mythos, Bloomberg reports that the group has also accessed other unreleased Anthropic AI systems.
Anthropic's Response
An Anthropic spokesperson acknowledged the incident in a statement to Bloomberg:
"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments."
The company emphasized that current evidence suggests the breach is contained to the third-party vendor's environment and has not impacted Anthropic's core systems. However, the investigation remains ongoing.
Broader Implications
This incident highlights the security challenges facing AI companies as they develop increasingly powerful systems. The breach occurred despite Anthropic's careful access controls, demonstrating that even restricted AI models can be vulnerable when third-party contractors and vendors are involved in the deployment chain.
The incident also raises questions about how to balance controlled testing of powerful AI systems against the operational security needed to keep them out of unauthorized hands. As AI capabilities continue to advance, particularly in sensitive domains like cybersecurity, the industry will need more robust frameworks for managing access to potentially dangerous models.
The timing of the breach — on the very day of Mythos's limited release announcement — suggests that threat actors are actively monitoring AI company announcements and moving quickly to exploit any potential access vectors.