Anthropic has launched Project Glasswing, a large-scale cybersecurity initiative that deploys a new, unreleased AI model to identify and fix critical software vulnerabilities across the world's most widely used systems. The announcement, made on the company's official website on April 7, brings together twelve founding organisations, including Amazon Web Services, Apple, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, under a shared mission to strengthen cyber defences ahead of what Anthropic describes as an imminent shift in the threat landscape.
A new model with unprecedented vulnerability-finding capability
The technical backbone of Project Glasswing is Claude Mythos Preview, a general-purpose frontier model that Anthropic has not released to the public. In recent weeks, Anthropic used the model to autonomously identify thousands of zero-day vulnerabilities, meaning security flaws that were previously unknown to the software's own developers, across every major operating system and web browser, as well as other widely deployed software.

Three specific cases illustrate the model's capability. Mythos Preview identified a 27-year-old vulnerability in OpenBSD, an operating system widely used to run firewalls and other critical infrastructure, which would have allowed a remote attacker to crash any machine running it simply by connecting to it. It also uncovered a 16-year-old flaw in FFmpeg, the video encoding and decoding library used by countless pieces of software; the flaw sat in a line of code that automated testing tools had executed five million times without detecting the problem. Finally, the model autonomously discovered and chained together multiple vulnerabilities in the Linux kernel, the software running most of the world's servers, to construct an attack path from ordinary user access to complete machine control. All three flaws have been reported to the relevant software maintainers and patched.
On the CyberGym benchmark, a standard measure of cybersecurity vulnerability reproduction, Mythos Preview scored 83.1%, a substantial margin above Claude Opus 4.6's 66.6%. Broader evaluations place the model at the top of the field across agentic coding tasks, with scores of 93.9% on SWE-bench Verified, 77.8% on SWE-bench Pro, and 82.0% on Terminal-Bench 2.0, all meaningfully above its predecessor.
Why Anthropic launched the initiative now
Anthropic's rationale for Project Glasswing centres on a direct acknowledgement of risk. The company states that AI models have now reached a level of coding capability where they can outperform all but the most skilled human security researchers at finding and exploiting software vulnerabilities. Because AI capabilities are advancing rapidly, Anthropic argues that these tools will not remain exclusive to responsible actors for long, and that offensive exploitation of existing software flaws could become far more frequent and destructive as a result.

The underlying problem is well-established: critical software, the kind that runs banking systems, medical records infrastructure, power grids, and logistics networks, has always contained bugs. Many of those bugs are minor; some are severe security flaws that have gone undetected for years or decades because finding them required rare expertise. AI changes that equation in both directions. The same capability that makes a model dangerous if misused also makes it a powerful tool for defence.
Anthropic's position, as reflected in Project Glasswing, is that defenders must act first and act at scale.
The structure of the initiative
Project Glasswing is structured around access and funding. The twelve founding partners will use Claude Mythos Preview as part of their defensive security work. Beyond them, Anthropic has extended access to more than 40 additional organisations that build or maintain critical software infrastructure, enabling them to scan both proprietary and open-source systems.

Anthropic is committing up to $100 million in model usage credits to support this work during what it calls a research preview phase. After that period, access to Mythos Preview will be available to participating organisations at $25 per million input tokens and $125 per million output tokens, accessible through the Claude API as well as Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry.
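To put the published rates in concrete terms, here is a minimal cost sketch. Only the two per-million-token prices come from the announcement; the workload figures and the helper function are hypothetical, purely for illustration.

```python
# Published research-preview pricing for Claude Mythos Preview:
# $25 per million input tokens, $125 per million output tokens.
INPUT_PRICE_PER_MTOK = 25.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_MTOK = 125.00  # USD per 1M output tokens


def usage_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a run, given its token counts (illustrative helper)."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK


# Hypothetical example: a scan that reads 2M tokens of source code and
# emits 100k tokens of findings costs 2 * $25 + 0.1 * $125 = $62.50.
print(usage_cost(2_000_000, 100_000))  # 62.5
```

At these rates, input volume dominates for code-scanning workloads, since a model reads far more source text than it writes in findings.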
Beyond the usage credits, Anthropic has donated $2.5 million to Alpha-Omega and the Open Source Security Foundation through the Linux Foundation, and a further $1.5 million to the Apache Software Foundation, a combined $4 million in direct funding to open-source security work. Open-source maintainers interested in access to the model can apply through Anthropic's Claude for Open Source programme.
The Linux Foundation's executive director noted that open-source software makes up the vast majority of code in modern systems, including systems used by AI agents to write new software, and that maintainers of these codebases have historically lacked access to the kind of security resources available to large organisations. Project Glasswing, he said, represents a credible path to changing that.
What Anthropic will not do, and what comes next
Anthropic has stated clearly that Claude Mythos Preview will not be made generally available. The company acknowledges that the model's capabilities require safeguards that do not yet exist at sufficient maturity. It plans to develop and test new cybersecurity safeguards with an upcoming Claude Opus model, one that does not carry the same risk profile as Mythos Preview, before considering broader deployment of Mythos-class capabilities. A Cyber Verification Program is planned for security professionals whose legitimate work may be affected by those safeguards.

Within 90 days, Anthropic has committed to publishing a public report on what has been learned, including which vulnerabilities have been fixed and which improvements can be disclosed. The company has also indicated it is in ongoing discussions with US government officials about Mythos Preview's offensive and defensive capabilities, framing these as a national security matter.
Longer term, Anthropic has suggested that an independent, third-party body bringing together both private and public sector organisations may be the appropriate institutional home for continued large-scale cybersecurity work of this kind. The company is calling on other AI developers to join in setting industry standards, and has outlined areas where security practices may need to evolve, including vulnerability disclosure, software update processes, secure software development lifecycles, patching automation, and standards for regulated industries.
Project Glasswing takes its name from the glasswing butterfly, Greta oto, whose transparent wings allow it to hide in plain sight and evade harm, a double metaphor, Anthropic says, for the hidden nature of software vulnerabilities and for the transparency the company is advocating in its approach to addressing them.


