
Project Glasswing & Claude Mythos: What CISOs Need to Know Right Now

Anthropic just released the most capable offensive cybersecurity AI ever built, found thousands of previously unknown zero-day vulnerabilities across every major operating system and browser, and then decided the model was too dangerous to release to the public. That is not a hypothetical scenario. That is what happened on April 7, 2026, and every CISO needs to understand the full weight of what it means.

The model is called Claude Mythos Preview. The initiative built around it is called Project Glasswing. Together, they represent something genuinely different from every AI-in-security announcement that has come before — not because of marketing language, but because of what the model demonstrably did when turned loose on real production software, autonomously, without a human guiding each step.

What Claude Mythos Preview Actually Did

Anthropic used Claude Mythos Preview over several weeks to conduct autonomous vulnerability research across critical software infrastructure. The scope was not a controlled lab environment with sanitized test systems. It was every major operating system. Every major web browser. A wide range of other widely deployed software that underpins global IT.

The result: thousands of zero-day vulnerabilities, many of them critical, that had been invisible to the entire security community until a single AI system found them. Not with human-guided analysis. Not with a security researcher pointing it at a specific code path. Autonomously, starting from a request to find bugs.

One specific finding illustrates why the capability is genuinely in a different class. Mythos Preview fully autonomously identified and then exploited a 17-year-old remote code execution vulnerability in FreeBSD. The vulnerability, triaged as CVE-2026-4747, allows an attacker to gain complete root control of a server running NFS. It survived 17 years of human code review, security audits, and vulnerability scanning without being discovered. Mythos found it on its own. Then it wrote the exploit. Then it ran it. No human involvement after the initial task request.

What makes this even more consequential is the exploit chaining capability. Mythos does not just find isolated bugs. It demonstrates the ability to chain multiple weaknesses together into sophisticated attack paths — finding a vulnerability in a browser, escalating to the kernel, moving laterally to cloud infrastructure. That is not a scanner. That is a threat actor operating at machine speed. And more than half of the attempts to write working privilege escalation exploits succeeded.

The Decision Not to Release It Publicly

Anthropic made a decision that has very few precedents in the AI industry: they built a frontier model, assessed its capabilities, and concluded that making it generally available would create unacceptable offensive risk. Claude Mythos Preview is not on the API. It is not available to developers. It is not accessible through any Anthropic commercial offering. A model this capable, in the hands of nation-state attackers, organized cybercrime, or even unsophisticated actors following a guide, would compress the time between vulnerability discovery and weaponized exploitation to near-zero.

Instead, Anthropic launched Project Glasswing — a defensive cybersecurity coalition structure that gives exclusive access to a curated group of organizations whose primary purpose is defense: hardening software before attackers can exploit the same capabilities. The coalition includes more than 40 companies. The anchor partners are some of the most significant infrastructure owners and security vendors in the world: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.

Anthropic is committing up to $100 million in Mythos usage credits to support these efforts, plus $4 million in direct donations to open-source security organizations. The goal is explicit: find and fix the vulnerabilities that Mythos discovers before adversaries develop their own equivalent capability and start weaponizing the same attack surfaces.

💡 Pro Tip: If your organization has a relationship with any of the Glasswing coalition partners — particularly CrowdStrike, Palo Alto Networks, Microsoft, or Google — start a conversation now about what threat intelligence and zero-day information sharing will look like as Glasswing research produces findings. This is not a normal vendor conversation. This is a strategic briefing that belongs at the CISO level, not at the account manager level.

The Glasswing Paradox: Offense Is Now Your Best Defense

The name “Project Glasswing” comes from the glasswing butterfly — a species with wings so transparent you can see through them, built for survival in environments where visibility would normally be a fatal liability. The name is intentional. The initiative is designed to make your software stack transparent to defenders before it becomes transparent to attackers.

The strategic logic is a paradox, and it is worth sitting with the discomfort of it. The most powerful offensive cybersecurity tool ever built is being used as a defensive instrument. The thing that can break everything is also the only thing capable of finding everything that needs to be fixed before it gets broken. Anthropic has described this as a race: defenders getting the capability first, finding the vulnerabilities, and closing the attack surface before adversaries independently develop or acquire equivalent capability and use it against production systems.

That framing should make every CISO uncomfortable in a specific way. The race is underway whether you are participating in it or not. The Glasswing coalition represents the organizations that are running. If your organization is not in that coalition and is not integrating AI-native vulnerability research into your own development and security operations, you are not in a neutral position. You are behind.

The timeline compression is not theoretical. Mythos autonomously exploited a vulnerability that survived 17 years of human review. Your current vulnerability scanning tools, however mature and well-tuned, are working from known signatures and documented patterns. They would not have found CVE-2026-4747. They would not have found most of what Mythos found in the Glasswing research period. The gap between what your tooling can see and what an AI-native attacker can find is now publicly documented. That gap is your exposure.

What This Changes About Your Attack Surface Assumptions

Every vulnerability management program is built on an assumption that most security teams have never made explicit: that the vulnerabilities you know about are a reasonable proxy for the vulnerabilities that exist. That assumption is no longer supportable.

Glasswing research has demonstrated empirically that entire classes of previously unknown critical vulnerabilities exist in software that has been under continuous human security review for decades. Not edge-case software. Not niche components. Major operating systems. Major browsers. The foundation of enterprise IT infrastructure. If a 17-year-old RCE existed in FreeBSD NFS without detection, what exists in the software stack you are responsible for protecting?

This forces a recalibration of how you think about residual risk. Your patch management program manages known vulnerabilities. Your threat intelligence program tracks known adversary techniques. Your vulnerability scanning tools identify known vulnerability signatures. None of those capabilities would have found what Mythos found. The “unknown unknowns” category of your risk register is not a theoretical bucket for things you might be missing someday. It is now a documented category with evidence that it contains critical exploitable vulnerabilities in widely deployed software.

What changes operationally as a result of this:

  • Software supply chain security becomes even more critical. You cannot patch what you do not know exists. Your ability to respond rapidly when Glasswing research (or adversary research) surfaces a new zero-day in your stack depends entirely on how well you understand what software you are actually running and where. Organizations with mature software composition analysis and application inventories will be able to act on new zero-day intelligence within hours. Organizations without them will be trying to figure out their exposure while the clock runs.
  • Your DevSecOps pipeline needs AI-native code analysis, not just rule-based scanning. If attackers have access to AI systems that can find novel vulnerabilities, your development security process must include AI-native analysis as a defensive layer. SAST tools built on pattern matching are necessary but no longer sufficient. AI-assisted code review that can reason about logic flaws and attack chains is becoming a baseline expectation, not a leading-edge capability.
  • Threat intelligence sharing timelines just got shorter. When Glasswing coalition research identifies a zero-day, there will be a window between internal disclosure to affected vendors and public knowledge. How quickly your organization learns about it and can assess your exposure depends on your information sharing relationships. ISAC membership, peer CISO relationships, and vendor security briefing programs are not optional network activities. They are the channels through which you get early warning.
  • Incident response playbooks need to account for AI-assisted exploitation. An attacker using AI-native exploit development does not operate on the same timeline as a human attacker working from a known CVE. Mean time to exploitation compresses dramatically. Your detection and containment windows shrink correspondingly. IR playbooks built around a multi-day exploitation timeline need to be reviewed against a timeline where initial access, privilege escalation, and lateral movement can happen in sequence within hours.
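To make that compression concrete, here is a minimal sketch comparing a hypothetical AI-assisted attack chain against a hypothetical mean time to detect. Every number below is a placeholder assumption to be replaced with your own telemetry, not a measured figure:

```python
# Hypothetical stage durations for an AI-assisted attack chain, in hours.
# These are illustrative placeholders, not measured attacker timings.
compressed_chain = [
    ("initial_access", 0.5),
    ("privilege_escalation", 1.0),
    ("lateral_movement", 2.0),
    ("data_staging", 1.5),
]

# Hypothetical mean time to detect for a SOC, in hours.
mttd_hours = 6.0

elapsed = 0.0
finished_before_detection = []
for stage, hours in compressed_chain:
    elapsed += hours
    if elapsed <= mttd_hours:
        # The attacker completes this stage before average detection fires.
        finished_before_detection.append(stage)

print(f"Chain completes in {elapsed} hours; MTTD is {mttd_hours} hours.")
print("Stages finished before detection:", finished_before_detection)
```

With these placeholder numbers, the entire chain completes inside the detection window — which is exactly the gap the tabletop exercise below is meant to surface with your real figures.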
🔑 Key Tip: Run a tabletop exercise specifically scoped to AI-assisted initial access. Take a real critical vulnerability in your environment, compress the exploitation timeline to hours rather than days, and walk through whether your detection and containment controls would catch it. The gaps that exercise reveals are your Glasswing-era priorities.
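The supply chain point above can also be made concrete. Here is a minimal sketch, assuming a CycloneDX-style SBOM in JSON, that checks whether a newly disclosed vulnerable package appears in your inventory — the component names and versions are hypothetical examples, not real findings:

```python
import json

# Hypothetical CycloneDX-style SBOM fragment. In practice you would load
# the JSON exported by your software composition analysis tooling.
SBOM_JSON = """
{
  "components": [
    {"name": "freebsd-nfsd", "version": "13.2"},
    {"name": "openssl", "version": "3.0.8"},
    {"name": "log4j-core", "version": "2.17.1"}
  ]
}
"""

def find_affected(sbom_text, package, bad_versions):
    """Return SBOM components matching a disclosed package/version pair."""
    sbom = json.loads(sbom_text)
    return [
        c for c in sbom.get("components", [])
        if c["name"] == package and c["version"] in bad_versions
    ]

# Example: a disclosure lands naming freebsd-nfsd 13.x as affected.
hits = find_affected(SBOM_JSON, "freebsd-nfsd", {"13.1", "13.2"})
for c in hits:
    print(f"EXPOSED: {c['name']} {c['version']}")
```

Organizations with this inventory in a queryable form can answer "are we exposed?" in minutes; organizations without it are reconstructing the answer while the clock runs.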

The Regulatory Signal in the Glasswing Announcement

The Glasswing announcement was not just a security initiative. It was a signal to the regulatory environment. Reports following the launch indicate that Treasury Secretary Bessent and Fed Chair Powell have been briefed on the implications of AI cyber capabilities at this level — elevating the conversation from an IT security issue to a systemic financial stability concern.

That elevation matters for how you frame conversations with your board and your regulators. AI-powered vulnerability research is no longer a future-state concern that can be deferred to a later planning cycle. It is a present-state operational reality that has been demonstrated publicly, is being used by a coalition that includes your cloud providers, your security vendors, and your financial infrastructure partners, and is being briefed at the highest levels of the U.S. executive branch.

Expect regulatory frameworks to evolve in response to this. The most likely directions: accelerated software security requirements tied to AI-era vulnerability standards, “coordinated defense” mandates that formalize information sharing obligations, and increased scrutiny of how organizations respond to zero-day disclosures in the Glasswing context. Getting ahead of this — building the program, demonstrating the posture, documenting the controls — is easier before the regulatory pressure arrives than after.

Getting Organizational Support to Act on This

The challenge with Glasswing and Mythos as a board and executive communication topic is the same challenge you face with most foundational security issues: the headline sounds alarming, the implications are diffuse, and the natural response from a non-technical audience is to wait for more specifics before committing to action. You need to reframe this from “an AI found some bugs” to “the assumptions our security program is built on have materially changed, and we need resources to respond.”

The board-level frame is competitive risk, not just security risk. Your competitors are going to be patching Glasswing-surfaced vulnerabilities. Your cloud providers are Glasswing partners. Your EDR vendors are Glasswing partners. The infrastructure your organization depends on is being hardened by this initiative. The question for your board is not whether this is real — it clearly is. The question is whether your security program is positioned to respond at the speed this new environment requires, and what it would take to get there.

For the CEO and CFO, the message is insurance math. If Mythos-class AI can find thousands of critical zero-days in weeks, and adversarial actors are developing equivalent capability, the expected time to exploitation of unknown vulnerabilities in your software stack has compressed significantly. Your current security investment was sized for a different threat model. What does the gap cost to close, and what does a breach cost if you do not close it?

For IT and engineering leadership, the message is partnership. The Glasswing response requires AI-native code review in your DevSecOps pipeline, software composition analysis that can respond to rapid zero-day disclosure, and detection engineering built for AI-speed exploitation. These are not security department purchases. They are engineering investments that require their participation. Security cannot implement them alone, and framing it as a security department mandate will slow everything down.

For the full board, the most effective frame I have used in situations like this is a short direct question: “Our security program was designed for a threat model where attackers move at human speed. AI-native attack capability means adversaries can now compress the entire attack chain — reconnaissance, exploitation, privilege escalation — from weeks to hours. Do we want to authorize the investments needed to respond to that shift, or do we want to be explicit that we are accepting the residual risk?” Boards do not like being asked to explicitly accept risk they have been briefed on. That question usually ends the resourcing conversation in your favor.

💡 Pro Tip: Bring the Glasswing announcement directly into your next board risk briefing as a named event. Do not abstract it into “the evolving threat landscape.” Name it specifically: Anthropic’s Project Glasswing demonstrated that AI can autonomously find and exploit zero-day vulnerabilities at scale, and the model is too dangerous to release publicly. That specificity signals that you are tracking real developments, not recycling generic threat language. It also establishes a named reference point for the investments you are requesting.

Key Points

  • Claude Mythos Preview autonomously found and exploited thousands of zero-day vulnerabilities across every major OS and browser — including a 17-year-old RCE in FreeBSD NFS (CVE-2026-4747) that survived decades of human review. This is not a demonstration environment result. It is production software.
  • Anthropic determined Mythos was too dangerous to release publicly. Instead, it is available exclusively to the Project Glasswing coalition — more than 40 organizations focused on defensive use: finding and patching vulnerabilities before adversaries can exploit the same capability.
  • Your current vulnerability management program has a documented blind spot. Traditional scanners work from known signatures. Mythos-class AI finds novel vulnerability classes through autonomous reasoning. The gap between what you can scan for and what an AI-native attacker can find is now publicly proven.
  • AI-native exploit chaining compresses the attack timeline. More than half of autonomous privilege escalation exploit attempts succeeded. IR playbooks built around multi-day exploitation windows need to be re-evaluated against an hours-long timeline.
  • The Glasswing race is already running. Defenders are patching what Mythos finds now. Adversaries are developing equivalent capability on their own timeline. Organizational inertia is not a neutral position — it is a choice to fall further behind.
  • This is a board-level conversation, not a security team conversation. Treasury and the Fed have been briefed. Regulatory frameworks will evolve. The framing that works is competitive risk and insurance math, not vulnerability counts.

Pro Tips

  • Run a Glasswing impact assessment against your software inventory. Pull your full software composition list — OS versions, browsers, open-source dependencies, infrastructure software — and evaluate which components are most likely targets for AI-native vulnerability research. The FreeBSD NFS finding is a signal that legacy infrastructure components, particularly those with complex network-facing code paths and long deployment histories, are high-priority targets. Know where yours are before the next disclosure arrives.
  • Establish a zero-day response SLA before you need it. Glasswing research is going to surface new zero-days on an accelerating cadence. Your organization needs a defined process: who gets notified when a zero-day in your stack is disclosed, what the expected assessment window is, what the escalation path looks like if the affected software cannot be patched quickly, and who has authority to authorize compensating controls on an emergency basis. Define it now, not after the first critical finding lands in your inbox at 11pm.
  • Add AI-native code analysis to your SDLC before your competitors do. The AI code analysis tools that operate similarly to Mythos — reasoning about logic flaws and attack chains rather than matching against known vulnerability patterns — are available commercially today. Getting them integrated into your development pipeline now gives you a head start on both vulnerability reduction and regulatory preparedness. It also gives you a credible answer when your board asks what you are doing to respond to Glasswing-era threats.
  • Brief your peers before the regulators do. In every major security event I have seen, the CISOs who frame the regulatory conversation for their organizations get better outcomes than the CISOs who receive the regulatory conversation. Glasswing is a briefable event right now. Your peer CISOs are receiving this information from various sources at various levels of accuracy. A well-framed direct briefing — here is what happened, here is what it means for our industry, here is what we are doing about it — demonstrates leadership and protects against the more alarming framings that will inevitably circulate.
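The zero-day response SLA described above works best when it is encoded somewhere checkable rather than living in a document. A minimal sketch follows — the tier names and hour values are illustrative assumptions for the example, not a recommended standard:

```python
from datetime import datetime, timedelta

# Illustrative severity tiers mapped to maximum hours for each response
# phase. These values are assumptions for the sketch, not a standard.
SLA = {
    "critical": {"notify": 1, "assess": 4, "mitigate": 24},
    "high":     {"notify": 4, "assess": 24, "mitigate": 72},
    "medium":   {"notify": 24, "assess": 72, "mitigate": 168},
}

def deadlines(disclosed_at, severity):
    """Compute the hard deadline for each response phase from disclosure time."""
    return {
        phase: disclosed_at + timedelta(hours=hours)
        for phase, hours in SLA[severity].items()
    }

# Example: a critical disclosure lands at 11pm.
disclosure = datetime(2026, 4, 7, 23, 0)
for phase, due in deadlines(disclosure, "critical").items():
    print(f"{phase}: due by {due:%Y-%m-%d %H:%M}")
```

The point of encoding it is that deadlines become computable the moment a disclosure lands, instead of being negotiated during the incident.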

Pitfalls to Avoid

  • Do not treat this as a single vendor’s news story. Glasswing is an Anthropic initiative, but the finding it produced — that AI can autonomously discover critical zero-days in production software at scale — is a capability threshold that other AI labs are approaching independently. The strategic implication is not tied to Anthropic specifically. AI-native offensive capability is arriving from multiple directions simultaneously. Treating it as one company’s announcement means you will be caught flat-footed when the next lab demonstrates the same capability, or when evidence of adversarial AI-assisted exploitation surfaces publicly.
  • Do not wait for a Glasswing-sourced zero-day in your stack to start building response capability. Your software composition analysis, your zero-day response SLA, your AI-native code review integration — none of these require an active threat to build. Building them before the critical finding arrives means you respond in hours rather than days when it does. The organizations that are struggling after a major zero-day disclosure are almost always the ones that knew they should have built the response infrastructure and deferred it.
  • Do not confuse the Glasswing coalition’s access with your organization’s protection. The fact that your cloud provider or EDR vendor is in the Glasswing coalition does not mean the vulnerabilities they find in their own software automatically protect you against vulnerabilities in your custom code, your other software dependencies, or your own infrastructure. Glasswing findings get patched in the software the coalition is hardening. Your organization’s overall attack surface requires your own AI-native security investment.
  • Do not let the “too dangerous to release” framing create a false sense of safety. The fact that Anthropic is not releasing Mythos publicly does not mean adversaries cannot build equivalent capability. It means there is a window — of uncertain duration — during which defenders have access to AI-native vulnerability research and attackers have less mature equivalents. That window is the strategic opportunity Project Glasswing is designed to capture. Treating the restricted release as a permanent containment measure rather than a temporary head start will leave you unprepared for what comes next.

Final Thought

There is a version of this story that the security industry will tell itself in five years — the one where AI-native offensive capability arrived, defenders had a window to respond, and the organizations that acted on it built security programs that could operate at machine speed while the ones that waited ended up managing breaches that no human analyst could have prevented. Project Glasswing is the marker in the timeline that defines when that window opened. The coalition running it is not doing you a favor by hardening the software you depend on. They are running a race that your organization’s security posture either keeps pace with or falls behind. What you do with the window they are buying you is the decision that will define your program’s relevance in the next era of this industry.


If this framing on Glasswing and Mythos changed how you are thinking about your program’s priorities, I want to hear what specifically landed in the comments below. If you have peers who are still treating AI-native vulnerability research as a future-state concern, send this to them now — this is the briefing that changes that conversation. And if you are building out the response capability I described here and want to compare notes, reach out directly. Subscribe to InfoSec Made Easy for the analysis that matters to working security leaders, without the noise.
