What Anthropic Announced on April 7
Anthropic made an unusual announcement on April 7, 2026. Rather than releasing a new model to the general public, the company introduced Claude Mythos Preview to a carefully selected group of technology companies. The announcement was covered by TechCrunch, CNN, and CNBC within hours of going live.
Think of it this way. Most AI model releases are like a new car model arriving at the dealership - open to anyone who wants to buy one. This announcement was more like a car manufacturer inviting a small group of professional test drivers to evaluate a vehicle on a closed track, because the performance is so far beyond what is safe to hand to everyday drivers all at once.
Mythos is, by Anthropic's own description, a general-purpose language model that performs strongly across tasks. But its standout capability is in computer security. According to Anthropic's own researchers at red.anthropic.com, in the weeks leading up to the announcement, Mythos identified thousands of zero-day vulnerabilities. Many of these were critical flaws. One of them had been sitting undetected in OpenBSD code for 27 years.
That level of performance is genuinely unprecedented. And it is why Anthropic is being careful about how this model reaches the world.
What Is Project Glasswing?
Alongside the Mythos announcement, Anthropic launched what it calls Project Glasswing. This is the framework governing who can access Mythos Preview and under what conditions.
The partners in Project Glasswing as of April 7 are: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and Nvidia. Each of these companies is using Mythos Preview specifically for defensive cybersecurity work - finding vulnerabilities before attackers do, and sharing what they learn with the broader industry.
Anthropic has committed up to $100 million in usage credits for these partners to conduct this work. Partners who need capacity beyond that threshold will pay for the additional usage. The company says all participants are expected to share findings that benefit the wider security community, not just their own organizations.
The name Glasswing is a reference to a butterfly with transparent wings - a nod to the transparency and visibility goals of the initiative. Anthropic wants the AI security research happening under this program to be visible and accountable, not hidden inside corporate walls.
Why Is Anthropic Not Releasing This to Everyone?
This is the question most business owners will have, and it deserves a direct answer.
Anthropic is concerned that Mythos could accelerate cyberattacks if it were made widely available. A model that can find vulnerabilities at the speed and scale Mythos demonstrated is a double-edged tool. Defenders can use it to patch systems before attackers find the holes. Attackers can use it to find those holes faster than defenders can respond.
Anthropic's view, as expressed in the company's own safety research notes, is that the benefits of using this model for defense outweigh the risks of restricted access - but only if access is genuinely restricted to parties with credible defensive purposes and strong security practices of their own. Handing it to millions of general users would remove that safeguard entirely.
The company says it plans to eventually deploy Mythos-class capabilities at scale, but only after new safeguards are developed and tested. The Project Glasswing rollout is part of that research process. The partners are not just using the model - they are helping Anthropic understand how to make it safer for broader use.
According to CNBC's coverage, some researchers who reviewed early information about the model described it as a potential "reckoning" for the cybersecurity industry. That is not hyperbole for the sake of headlines. It reflects the genuine gap between what Mythos can do and what current security practices are designed to handle.
How Does This Fit Into the Bigger Picture for AI in 2026?
To understand why this matters for a business owner, it helps to step back and look at the pattern forming in AI development this year.
In the last six weeks alone: OpenClaw became the most-starred software project in GitHub history. NemoClaw launched with NVIDIA's backing to bring enterprise security to AI agents. Anthropic cut off third-party tools like OpenClaw from flat-rate subscriptions because AI usage patterns outpaced what consumer pricing was designed to handle. And now Mythos - a model so capable it needs a controlled release program.
What these events share is a theme of capability outpacing infrastructure. AI tools are getting dramatically more powerful faster than the surrounding ecosystem of pricing, security, and governance can keep up. The companies that were early adopters of AI tools six months ago are suddenly dealing with pricing changes, security questions, and access rules they did not anticipate.
This is not a reason to pull back from AI. It is a reason to build your AI foundation more deliberately. The businesses navigating this moment best are the ones who have treated AI as a business decision from the start - not a free experiment.
What Does Mythos Mean for OpenClaw Users?
If you run OpenClaw at your business, Mythos has two implications worth understanding.
The first is about the future of the Claude models you can connect to OpenClaw. OpenClaw supports multiple AI providers, and Claude's model family is one of the most popular choices for users who want strong reasoning and writing capabilities. Mythos is not yet available via the standard API, but the pattern at Anthropic is clear: the company builds powerful models, tests them carefully, and then rolls capabilities into its API product lines. Features from today's restricted model tend to become available in API-accessible form within a year or two of the preview.
That means the capabilities Mythos is demonstrating now - extraordinary security analysis, multi-step reasoning, complex code understanding - are likely to be available to API users of Claude in future model releases. If your business uses OpenClaw for research, analysis, or workflow automation, these capabilities will strengthen what your agent can do.
The second implication is about the security environment your OpenClaw deployment lives in. Mythos-class vulnerability discovery means the tools available to attackers are also becoming dramatically more powerful. The same techniques that help Google find holes in its own systems could, in less responsible hands, be turned toward finding holes in your systems. If you are running OpenClaw with exposed interfaces, default credentials, or outdated configurations, the window of time before those weaknesses are found and exploited is narrowing.
If you have not yet read through our guide on what OpenClaw is and how it works, now is a good time to understand exactly what your deployment is doing on your network and what access it has to your business systems.
What Does Mythos Mean for NemoClaw Users?
The Mythos announcement is, in a sense, the clearest possible validation of why NemoClaw exists.
NemoClaw is NVIDIA's security layer for OpenClaw, built to contain what an AI agent can access and do, enforce network rules, and maintain audit trails of agent activity. It was designed with the assumption that AI capabilities would keep accelerating and that the security surface of an AI agent deployment needed to be locked down before that acceleration created problems.
The arrival of Mythos-class capability confirms that assumption was correct. When a single AI model can identify critical vulnerabilities across every major operating system in a matter of weeks, any business running AI agents without proper isolation and access controls is taking on material risk.
What is especially relevant here is that two of the Project Glasswing partners - Cisco and Nvidia - are also deeply involved in NemoClaw's ecosystem. Cisco builds enterprise network security infrastructure. Nvidia built NemoClaw itself. Both companies are using Mythos Preview specifically to harden defenses. That is the same posture NemoClaw is designed to enable for businesses that are not Fortune 500 enterprises with their own security teams.
If you are evaluating whether NemoClaw makes sense for your business, the NemoClaw for business guide walks through the practical deployment questions in plain English. The core argument for it just got considerably stronger.
The Cybersecurity Angle: What Business Owners Actually Need to Worry About
The coverage of Mythos has focused heavily on the dramatic details: a 27-year-old vulnerability discovered by an AI, zero-days found in every major browser and OS. Those details are genuinely striking. But what do they mean for the average business owner running a small or mid-sized operation?
The most direct answer is this: the sophistication gap between professional attackers and typical small business security practices has been growing for years. Tools like Mythos widen that gap faster on the offensive side. But the same tools, deployed defensively by the large companies that make the software you depend on, should also close many of the vulnerabilities that currently exist.
The net effect on your business risk in the near term is modest. The Project Glasswing partners are using Mythos to find and fix the holes in Windows, macOS, major browsers, and widely used infrastructure software. That is genuinely good news. Many vulnerabilities that might have gone undiscovered for years will now be patched faster.
The medium-term risk is different. As AI-based vulnerability discovery becomes more accessible, organized criminal groups and nation-state actors who already run sophisticated operations will eventually get access to similar capabilities. The responsible restricted release Anthropic is running now will not stay restricted indefinitely - either Mythos-class capabilities will be released more broadly by Anthropic, or competitors will develop comparable tools on different timelines.
What this means practically for your business is not "panic." It means: take the standard security practices that have always been recommended seriously enough to actually implement them. Keep software updated. Use strong, unique credentials for every system your AI agents have access to. Know what your AI tools can reach on your network. Review that access regularly.
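A periodic access review does not need to be elaborate. The sketch below is a minimal, hypothetical example of what such a review might check. The configuration fields (`connected_systems`, `credential`, `scope`, `exposed_to_internet`) are illustrative placeholders, not an actual OpenClaw schema; adapt the names to whatever your deployment really uses.

```python
# Hypothetical periodic access review for an AI agent deployment.
# The config field names below are illustrative placeholders, not a real
# OpenClaw schema -- adapt them to whatever your deployment actually uses.

WEAK_CREDENTIALS = {"", "admin", "password", "changeme", "default"}

def review_agent_access(config: dict) -> list[str]:
    """Return a list of findings worth fixing before an attacker finds them."""
    findings = []
    for system in config.get("connected_systems", []):
        name = system.get("name", "<unnamed>")
        # Default or trivially guessable credentials are the first thing
        # automated vulnerability discovery will surface.
        if system.get("credential", "").lower() in WEAK_CREDENTIALS:
            findings.append(f"{name}: default or weak credential")
        # Agents rarely need full access to a connected system.
        if system.get("scope") == "full":
            findings.append(f"{name}: agent has full access; narrow it if possible")
        # Internet-exposed interfaces widen the attack surface dramatically.
        if system.get("exposed_to_internet", False):
            findings.append(f"{name}: interface exposed to the internet")
    return findings
```

Run something like this on a schedule and treat a non-empty findings list as a task for this week, not for someday.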
As TechCrunch reported on Project Glasswing, Anthropic specifically highlighted that the kinds of vulnerabilities Mythos found were often the result of code that had been written once and never revisited. Old integrations. Legacy authentication. Software that "worked fine" and was therefore never looked at again. That profile matches exactly the kind of technical debt that accumulates in small and medium business environments.
A Word on What "Preview" Means
Anthropic is being precise with its language. This is a Mythos Preview, not a Mythos release. The distinction matters because it shapes expectations about timing.
A preview in this context means the model's capabilities have been demonstrated and verified, but the surrounding safety work - the guardrails, the deployment controls, the monitoring systems - is still being developed. Anthropic is using the Project Glasswing deployment to accelerate that safety work in partnership with organizations that have the expertise to help design it.
What Anthropic says it will not do is make Mythos-level capability generally available before those safeguards are in place. That is a meaningful commitment, and it is consistent with the company's earlier handling of similar decisions. When Anthropic's previous models showed safety concerns, the company delayed or limited availability. That track record of following through is why the AI safety community takes the Mythos caution seriously rather than treating it as marketing.
For business owners, "preview with controlled access" means: you will not be able to connect OpenClaw to Mythos via the API anytime soon. What you can expect is that lessons from this preview will shape the next generation of Claude models that are API-accessible, and those models will likely be considerably more capable than what is available today.
What the Anthropic Trajectory Means for Your AI Tool Choices
Let's be direct about the broader commercial picture here, because it matters for how you plan your AI investments.
Anthropic has had a difficult few weeks with the OpenClaw user community. Cutting off subscription-based access to OpenClaw was disruptive and frustrating for many users. The Mythos announcement, coming just three days later, is a reminder of something important: Anthropic is a research-first company building genuinely advanced AI, and its commercial decisions, even the unpopular ones, reflect the resource demands of that research.
The subscription cutoff was not a slight against OpenClaw users. It was a resource allocation decision by a company that is simultaneously operating massive AI infrastructure and developing models like Mythos that require enormous compute to train and run. The $100 million Anthropic is committing to Project Glasswing alone gives a sense of the scale they are operating at.
This is context you need if you are making decisions about which AI models to connect to your OpenClaw deployment. Anthropic is not a company in decline or retreat. It is a company managing extraordinary capability development carefully. That is exactly the kind of organization you want building the AI your business depends on.
The practical takeaway for your tool choices: Claude models available through the API will continue to improve as the company's research progresses. Running OpenClaw or NemoClaw connected to Claude via a proper API key - rather than a consumer subscription - puts you on the correct commercial relationship with Anthropic for the long term. You pay for what you use, you get access to improvements as they roll out, and you are not exposed to the kind of sudden access changes that affected subscription users last week.
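In practice, "a proper API key" means the key lives in an environment variable and your tooling talks to Anthropic's Messages API under usage-based billing. The sketch below shows that pattern with only Python's standard library. The endpoint and headers follow Anthropic's public API documentation, but the model name is a placeholder assumption; substitute a current model from Anthropic's model list before use.

```python
# Minimal sketch of the API-key pattern with Python's standard library.
# The endpoint and headers follow Anthropic's public Messages API; the
# model name is a placeholder -- substitute a current model before use.
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str, model: str = "claude-model-placeholder"):
    # The key comes from the environment, never from source code.
    api_key = os.environ.get("ANTHROPIC_API_KEY")
    if not api_key:
        raise RuntimeError("Set ANTHROPIC_API_KEY in your environment.")
    payload = {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )

# Sending the request (urllib.request.urlopen(build_request(...))) is billed
# per token -- the usage-based relationship described above.
```

Keeping the key in the environment rather than a consumer login is exactly what insulates you from subscription-side access changes.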
How to Think About AI Model Tiers as a Business Owner
The Mythos situation is a good moment to think clearly about how AI model tiers work, because this is something that directly shapes what you should connect to OpenClaw for different tasks.
AI companies develop models across a range from fast-and-cheap to slow-and-powerful. The fast, cheap models are good for routine tasks: summarizing a document, reformatting data, answering simple questions from a knowledge base. The powerful models are better for complex reasoning: analyzing a legal clause, comparing multiple options with nuanced tradeoffs, writing from scratch in a specific voice.
Mythos sits at the far end of the powerful spectrum - so far, in fact, that it is not yet being offered commercially. Below it, the current Claude API lineup includes models suited for serious business reasoning tasks, and lighter models designed for high-volume automation where speed and cost matter more than depth.
When you configure OpenClaw for your business, you do not have to use the same model for everything. A well-configured OpenClaw setup uses lighter, faster models for routine background tasks and reserves the more capable models for the tasks that genuinely benefit from that power. That is how you get strong AI performance at manageable cost - and it is also how you stay prepared for the next generation of capable models when they become available.
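As a rough illustration of that routing pattern, here is a minimal sketch. The task categories and model names are placeholder assumptions, not an actual OpenClaw configuration; the point is the shape of the mapping from task type to model tier.

```python
# Sketch of routing tasks to model tiers. Task categories and model names
# are placeholder assumptions, not an actual OpenClaw configuration.

FAST_MODEL = "claude-fast-placeholder"        # cheap, good for routine work
POWERFUL_MODEL = "claude-strong-placeholder"  # deep reasoning, higher cost

ROUTINE_TASKS = {"summarize", "reformat", "faq_answer"}

def pick_model(task_type: str) -> str:
    """Map a task type to a model tier."""
    if task_type in ROUTINE_TASKS:
        return FAST_MODEL
    # Unknown or complex tasks default to the stronger tier: paying slightly
    # more is safer than underpowering a task that turns out to be complex.
    return POWERFUL_MODEL
```

Even a lookup this simple can cut costs substantially when most of your agent's volume is routine background work.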
If you are new to OpenClaw and want to understand how this fits together from the beginning, the start here guide walks through the fundamentals in plain English before you get into configuration decisions.
Is Mythos a Sign That AI Is Becoming Too Powerful to Use Safely?
This is a question that comes up every time there is a major AI capability announcement, and it is worth taking seriously rather than brushing aside.
The honest answer is: it depends on your definition of "safe." AI systems that can find vulnerabilities at scale are powerful tools that can be used for defense or offense. The same was true of search engines when they made it trivially easy to find information that was previously hard to access. The same was true of encryption tools, cloud computing, and networked systems generally.
The way Anthropic is handling Mythos - controlled access, defense-focused partners, transparent safety research, a commitment not to broadly deploy until safeguards are ready - is the responsible version of this situation. It does not eliminate risk. It manages the transition deliberately rather than racing to maximize distribution at the expense of safety.
For business owners, the answer is more specific: AI is not too powerful to use safely if you deploy it with proper controls. The companies that are going to have the hardest time with AI - whether from security incidents, unexpected tool behavior, or policy changes - are the ones that deployed AI as a free experiment without treating it as infrastructure. The ones building on a proper foundation, with API access, defined permissions, clear audit trails, and security layers like NemoClaw where needed, are in a much stronger position.
The gap between "AI as a casual tool" and "AI as managed business infrastructure" is closing fast. Announcements like Mythos are part of what is forcing that transition. That is not bad news for businesses that are ready for it.
What to Watch For in the Coming Weeks
The Mythos story is not finished. Here are the developments worth watching for anyone running AI tools at their business.
Vulnerability patches: As Project Glasswing partners use Mythos to find issues in major software, expect a wave of security updates from Microsoft, Apple, Google, and others over the coming months. When those updates come out, treat them as high priority. Some of the vulnerabilities being patched may be the ones Mythos found, and Anthropic's research describes many of those as critical.
API availability signals: Watch Anthropic's model release notes for signals about when Mythos-class capabilities will start appearing in the API lineup. Typically, capability previews at this level translate to API features within 12 to 18 months, which would put a release sometime in 2027.
Competitor responses: OpenAI, Google DeepMind, and other major AI labs will have watched this announcement closely. Expect similar capability disclosures and safety frameworks from those organizations in the coming months. The race to demonstrate responsible handling of powerful models is becoming as competitive as the race to build them.
NemoClaw updates: NVIDIA is one of the Project Glasswing partners. That means the same company building NemoClaw's security architecture is also working directly with Mythos. Expect future NemoClaw updates to reflect what NVIDIA learns from that work. The security architecture in NemoClaw may get meaningfully stronger as a result.
The Bottom Line
The April 7 Mythos announcement is a meaningful moment in AI development, and it deserves more attention from business owners than most AI news stories. Here is why.
Every generation of AI capability raises the stakes for getting your AI deployment right. A modest AI assistant that helps you draft emails has modest consequences if it is misconfigured. An AI system capable of finding zero-day vulnerabilities in every major operating system has much larger consequences. The model your OpenClaw deployment connects to is not that powerful - but it is building toward it, one release at a time.
The businesses that will navigate this decade of AI capability growth best are the ones treating AI as infrastructure from the beginning: proper API access, defined permissions, security layers where appropriate, and attention to what the companies they depend on are building and deciding. Not every announcement demands action, but every announcement should inform your understanding of where this is going.
Mythos is not something you can use today. But it tells you exactly where the frontier is, and it tells you that the frontier is moving faster than most people anticipated.
If you are evaluating your current AI agent setup in light of these developments, the NemoClaw security explained page covers the specific controls that enterprise OpenClaw deployments use to manage the kind of capability and access risks that announcements like Mythos make more urgent.