Three things happened in the last 48 hours that tell you where the AI agent market is actually heading. None of them are incremental. Taken together, they paint a picture of an industry that is moving much faster than most business owners realize, in directions that directly affect the tools you are evaluating, the vendors you trust, and the cybersecurity posture you need to think about right now.

Let's go through each one.


Story 1: A Self-Improving AI Agent Just Knocked OpenClaw Out of the #1 Spot

As of May 10, 2026, Hermes Agent, built by Nous Research, has overtaken OpenClaw to hold the number one position on OpenRouter's global daily app and agent rankings. Hermes is currently generating 224 billion daily tokens on OpenRouter compared to OpenClaw's 186 billion. That is not a small margin. Hermes is running at roughly 20 percent higher usage volume than the previous leader, and that lead arrived just over three months after launch.

If you are not tracking AI agent tools yet, that number probably does not mean much. Here is why it should.

OpenRouter is the closest thing the AI industry has to a neutral usage scoreboard. It routes requests across hundreds of AI applications and agents, which means its rankings reflect real-world adoption rather than press releases or GitHub stars. When something moves to the top of OpenRouter's daily rankings, it is because real users are choosing it for real work, over and over again.

What is Hermes Agent doing that knocked OpenClaw off the top? The short answer: it learns.

The "Do, Learn, Improve" Architecture

Most AI agents today follow a simple loop. You give the agent a task. It executes the task. You move on. Every conversation starts fresh. The agent that just finished writing your weekly report has no memory of having done it before and no ability to get better at it over time.

Hermes Agent takes a fundamentally different approach. After it completes a task, it enters what its team calls a reflective phase. During this phase, the agent analyzes its own performance, identifies what worked, and autonomously generates what it calls skill files. These skill files are reusable packages of task logic that the agent stores and retrieves the next time a similar task comes up. The longer you run Hermes, the better it gets at your specific workflows.

The memory architecture underneath this is built in three layers. The first is a persistent snapshot of both user and agent identity, so the agent knows who it is working with and what context matters. The second is a SQLite full-text search database of every past session, giving it the ability to recall relevant history across weeks or months. The third is the procedural skill layer, where those auto-generated task templates live.
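The second layer maps naturally onto SQLite's built-in FTS5 full-text extension. As a rough sketch of how such a session memory could work (the class, schema, and method names below are assumptions for illustration; the source only says Hermes keeps a full-text-searchable SQLite database of past sessions):

```python
import sqlite3

class SessionMemory:
    """Hypothetical sketch of a full-text session-recall layer on SQLite FTS5."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        # FTS5 virtual table: each past session becomes a searchable document.
        self.db.execute(
            "CREATE VIRTUAL TABLE IF NOT EXISTS sessions "
            "USING fts5(started_at, transcript)"
        )

    def record(self, started_at: str, transcript: str) -> None:
        self.db.execute(
            "INSERT INTO sessions VALUES (?, ?)", (started_at, transcript)
        )
        self.db.commit()

    def recall(self, query: str, limit: int = 3) -> list[str]:
        # MATCH runs the full-text query; FTS5 ranks results with bm25,
        # so the most relevant past sessions come back first.
        rows = self.db.execute(
            "SELECT transcript FROM sessions WHERE sessions MATCH ? "
            "ORDER BY rank LIMIT ?",
            (query, limit),
        )
        return [r[0] for r in rows]

memory = SessionMemory()
memory.record("2026-05-01", "Drafted the weekly sales report for the Berlin team")
memory.record("2026-05-08", "Fixed the invoice template formatting")
hits = memory.recall("weekly report")  # surfaces the May 1 session
```

The design choice that matters is that recall is keyword-driven rather than session-scoped: a conversation weeks later can surface the relevant transcript without the user re-explaining context.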

Think of it this way. A traditional AI assistant is like hiring a new temp every morning and spending 20 minutes explaining your business before they can start. Hermes is closer to an employee who remembers everything, writes down what works, and gets faster every week.

How Does This Compare to OpenClaw?

OpenClaw is built around a different philosophy. Its strength is breadth. It connects to over 50 messaging platforms simultaneously, including Telegram, Discord, Slack, WhatsApp, Signal, and many others, through a central routing layer. The value proposition is that your AI can operate everywhere your business communicates, from a single setup.

OpenClaw also had a leadership transition earlier this year. Its founder, Peter Steinberger, joined OpenAI in February 2026, and OpenClaw moved to an independent open-source foundation with OpenAI as a sponsor. That transition appears to have slowed its development cadence somewhat, giving Hermes room to gain ground.

Hermes, meanwhile, has shipped a confirmed major release roughly every two weeks since its February 2026 launch. Its most recent release, version 0.13.0, called "Tenacity" and shipped May 7, introduced a durable multi-agent task board with zombie detection and automatic hallucination recovery, a goal-locking command that keeps the agent on target across long conversations, and Google Chat as its 20th supported messaging platform. That is a release cadence few open-source projects maintain.

What This Means If You Are Evaluating AI Agent Tools Right Now

The honest answer is that neither tool is perfect for every situation, and the right choice depends on what you actually need.

If your primary concern is connecting AI to every communication channel your business uses and you want broad platform support out of the box, OpenClaw still has the edge on raw channel coverage. If your primary concern is having an agent that gets smarter about your specific business over time and improves its own performance without requiring constant manual instruction, Hermes is building something compelling.

The bigger takeaway for business owners is this: the self-improving agent category is real. We covered this possibility when we wrote about the first generation of agent tools, and now it has a real product with real adoption numbers. If you have been waiting to see whether agents would actually get better with use rather than staying static, the answer is arriving in May 2026.

One security note worth adding: OpenClaw's rapid expansion has come with documented security costs. A CVE disclosure in April 2026 detailed vulnerabilities linked to its broad integration surface. Hermes operates with a narrower integration footprint, which tends to mean a smaller attack surface. Neither is necessarily safer in all contexts, but if security is your deciding factor, that tradeoff is worth understanding before you choose.


Story 2: OpenAI Shares Its Cyber AI With Europe While Anthropic Holds Back

On Monday morning, OpenAI announced it would grant the European Union access to GPT-5.5-Cyber, a variation of its latest frontier model tuned specifically for cybersecurity applications. European businesses, governments, cyber authorities, and EU institutions including the EU AI Office will be able to access the model through a program OpenAI is calling its EU Cyber Action Plan.

At the same time, Anthropic is still not granting the EU access to Mythos, its own cybersecurity-oriented model, released about a month ago. EU Commission spokesperson Thomas Regnier confirmed as much on Monday, noting that discussions with Anthropic are at a "different stage" than the agreement reached with OpenAI. The Commission has held four or five meetings with Anthropic, but nothing has been finalized.

George Osborne, OpenAI's Head of OpenAI for Countries, framed the company's decision in clear terms: "AI labs like ours should not be the sole arbiters of cyber safety as resilience depends on trusted partners working together." The EU Commission called OpenAI's move a sign of "transparency and intent."

Why Business Owners Should Pay Attention to This Story

This story is not just about European politics. It is about a growing divide in how the major AI labs view their relationship with governments, regulators, and oversight bodies, and that divide will shape which tools you can actually trust for sensitive business work in the years ahead.

OpenAI's move to open GPT-5.5-Cyber to EU review is a bet on transparency as a competitive advantage. The logic is straightforward: if regulators can see what your model does and verify its safety claims, they are less likely to restrict access to it. That benefits European customers and creates goodwill with regulators who have significant power over AI deployment in the EU market.

Anthropic's approach with Mythos has been different. The company built the model, deployed it to paying customers, and has since been managing concerns about its potential misuse in cyberattacks. A CNBC report from April noted that Mythos' release prompted "a wave of fears around cyberattacks on critical software." That context explains why the EU wants to review it, and why the friction with Anthropic is notable.

For a business owner, the practical implication is this: when you choose an AI vendor for any work that touches sensitive data, customer information, or business-critical systems, you are not just choosing a model. You are choosing a compliance posture. OpenAI is currently making a clearer argument that its tools can meet regulatory scrutiny. That is a material consideration if your business operates in regulated industries, serves European customers, or is preparing for eventual AI regulation in the United States.

The cybersecurity AI category specifically is one where vendor credibility matters more than almost anywhere else. A model that can analyze threats, identify vulnerabilities, and suggest patches can also be misused to do the same things offensively. The labs that are willing to subject their most powerful cyber tools to independent review are making a different kind of promise than those that are not.


Story 3: Anthropic Just Made Two of the Biggest Infrastructure Bets in AI History

Alongside the news about Mythos and the EU, Anthropic also disclosed two significant infrastructure deals this month that deserve separate attention.

First, Anthropic has reached an agreement with xAI to use its Colossus 1 data center to expand model capacity. Colossus 1 is the supercomputer that Elon Musk's AI company built in Memphis, Tennessee, housing over 100,000 NVIDIA H100 GPUs. Having Anthropic as a compute customer at Colossus is an unusual development, given that xAI and Anthropic are competing AI labs. But as compute becomes the scarce resource that determines what is possible at the frontier, strategic partnerships across nominal competitors are becoming more common.

Second, Anthropic has signed a $1.8 billion cloud computing deal with Akamai Technologies. Akamai is one of the largest content delivery and cloud infrastructure providers in the world, operating in over 130 countries. This partnership is specifically aimed at expanding Claude's availability and reliability at a global scale.

What This Tells You About the AI Compute Race

Taken together, these two deals signal something important about where the AI industry is in 2026. The frontier AI labs are no longer competing primarily on model quality alone. They are competing on infrastructure, reliability, and global reach.

OpenAI, Google, Meta, and Anthropic are all spending billions of dollars building out the compute and distribution layers that turn a smart model into a reliable business service. The question is no longer whether the model is impressive in a demo. The question is whether it can handle your business's actual workload at the scale, speed, and uptime your operations require.

The Akamai deal in particular is interesting from a business perspective. Akamai's specialty is not building AI. Its specialty is delivering content and services reliably at global scale, handling latency, distributing traffic intelligently, and keeping things online when demand spikes. That is exactly the infrastructure problem that AI companies struggle with when they go from "impressive product" to "enterprise service."

If you have experienced Claude going down during a critical work session, or if you have been concerned about AI service reliability for production workflows, this kind of deal is what the labs are doing to address that. It will not solve the problem immediately, but it signals that Anthropic is taking the infrastructure question seriously rather than treating it as secondary to model research.


The Thread Running Through All Three Stories

If you zoom out from the individual headlines, there is a single theme connecting all three of these developments: the AI agent and AI service market is maturing rapidly, and the standards for what counts as a serious, trustworthy product are rising fast.

Hermes Agent is raising the bar for what an agent should do over time. A tool that learns and improves is genuinely different from a tool that stays static, and the market is responding to that. Businesses that adopt agents with self-improvement capabilities are going to compound their productivity advantage over time in a way that businesses using static tools will not.

The OpenAI and Anthropic story around GPT-5.5-Cyber and Mythos is raising the bar for what accountability looks like in AI deployment. The EU is not going away, and neither is AI regulation in general. Labs that cooperate with oversight and submit their models to review are going to have an easier path to enterprise adoption than those that do not. If you are building a business that depends on AI tools for anything sensitive, understanding which vendors have demonstrated willingness to be accountable to external review is a real factor to track.

And the Anthropic infrastructure deals are raising the bar for reliability. Billion-dollar compute and distribution agreements are not the moves of a lab that plans to stay in the research playground. They are the moves of a company preparing to deliver services that businesses can actually depend on.

What You Should Do Right Now

Three concrete actions based on this week's news.

If you are evaluating AI agent tools: Add "does this agent learn and improve over time" to your evaluation criteria. Hermes Agent's rise shows there is a real market for self-improving agents. Before you commit to any agent platform for the next year, understand whether it has a memory architecture that compounds value or one that starts fresh with every session.

If you operate in a regulated industry or serve customers in Europe: Start tracking which AI vendors have demonstrated willingness to submit their tools to external review. OpenAI's decision to open GPT-5.5-Cyber to EU scrutiny is a preview of how regulatory compliance will work going forward. The vendors who build transparency into their product strategy now will be easier to work with when compliance becomes mandatory rather than voluntary.

If reliability has been a sticking point in your AI adoption: Watch Anthropic's infrastructure investments closely over the next six months. The Akamai deal and the Colossus arrangement are designed to solve exactly the uptime and scale problems that have made some businesses hesitant to put Claude into production workflows. If those investments translate into measurably better reliability, it changes the calculus on whether Claude is ready for critical business use.

The Bottom Line

The AI agent market in May 2026 is not the same market it was six months ago. A new category leader has emerged based on a fundamentally different architecture. The major AI labs are staking out positions on regulatory cooperation that will matter to enterprise buyers. And the compute arms race is driving infrastructure investments at a scale that signals this is no longer a speculative industry.

None of this means you need to rebuild your business around AI tomorrow. What it does mean is that the tools available to you are getting genuinely better, the vendors are getting more serious about reliability and accountability, and the window for figuring out where AI fits in your operations is narrower than it was a year ago.

The businesses that are learning how these tools work now, while the market is still sorting itself out, are going to be in a very different position than those who are still watching from the sidelines when the dust settles.


Sources: MarkTechPost (OpenClaw vs Hermes Agent analysis, May 10, 2026), CNBC (OpenAI EU Cyber Model, May 11, 2026), Benzinga (OpenAI EU Cyber Action Plan, May 11, 2026), Wikipedia/Anthropic (Colossus and Akamai deals, May 2026).