The web just got a second audience

Alasdair Allan
18 February 2026

Buried in a quiet announcement on a developer mailing list at the beginning of last week was a point of inflection. WebMCP lets websites declare what they can do for AI agents, in the same way they currently serve pages to humans, and last week Google shipped an early preview. It was hidden behind a flag, and for a few days almost nobody noticed. But I think it might be the moment the web stopped being a human-only medium.

These points of inflection are often quiet.

Back in the early nineties I was working on compression for video calling, attempting to squeeze what would now be regarded as small black and white images down copper cables. The work was intended for video phones, actual landline hardware that would have a screen as well as a number pad. I even got it working, sort of, but it never went anywhere. Video calling had been the next big thing for a decade at that point, and had just never caught on.

But the arrival of the iPhone in 2007 changed everything, and when Apple announced FaceTime three years later, video calling finally took off, quietly, without much fanfare. I remember thinking it wouldn’t work, because it never had before. The technology hadn’t fundamentally changed. What changed was the friction. A button on a device everyone already owned, connected to a network everyone was already on. That was enough. Suddenly people were using it.

The same thing happened with contactless payments. We all quietly stopped carrying our wallets and started using our phones instead. Nobody announced it. There was no launch event. We all just figured it out, individually, and one day it was done.

I think WebMCP is that kind of moment for the web.

The technical details matter here, because the mechanism is the story. Today, when an agent needs to interact with a website, it does so the way a person might navigate a foreign city blindfolded. It takes screenshots, parses the DOM, simulates mouse clicks. Tools like Playwright and Selenium let agents pretend to be humans filling in forms. It works, barely.

WebMCP replaces this with something conceptually simple and architecturally radical. A website registers tools with natural-language descriptions that an agent can discover and call directly. No scraping. No screenshots. No pretending to be human. The site declares what it offers, and the agent calls it.
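The proposal is still in flux and no stable API has shipped, so treat this as a sketch rather than the real surface: the `navigator.modelContext.registerTool` name and the exact descriptor shape below are my assumptions about how registration might look, but the idea, pairing an ordinary JavaScript function with a natural-language description and an argument schema, is the core of it.

```typescript
// Illustrative sketch only: `navigator.modelContext` and the descriptor
// shape are placeholders, since the WebMCP API surface is still being
// specified.

// The business logic is an ordinary function the site already has; in a
// real site this would call the same backend the human UI uses.
async function searchFlights(args: { from: string; to: string; date: string }) {
  return {
    results: [{ flight: "XX123", from: args.from, to: args.to, date: args.date }],
  };
}

// The tool descriptor pairs that function with a natural-language
// description and a JSON Schema for its arguments, so an agent can
// discover and call it without scraping the page.
const flightSearchTool = {
  name: "search-flights",
  description: "Search for available flights between two airports on a given date.",
  inputSchema: {
    type: "object",
    properties: {
      from: { type: "string", description: "IATA code of the departure airport" },
      to: { type: "string", description: "IATA code of the arrival airport" },
      date: { type: "string", description: "Departure date, YYYY-MM-DD" },
    },
    required: ["from", "to", "date"],
  },
  execute: searchFlights,
};

// Registration is guarded, since no shipping browser exposes this
// outside a flag yet.
const nav = (globalThis as any).navigator;
if (nav?.modelContext?.registerTool) {
  nav.modelContext.registerTool(flightSearchTool);
}
```

The important part is that `execute` is just the site’s existing logic; the description and schema are the only agent-facing additions.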

This is the difference between declaration and discovery. Today, agents have to figure out what a site can do by looking at it. WebMCP lets the site tell them. That’s the gap between “technically possible” and “actually happens.” It’s the FaceTime button.

We could have gone down a path where we had sites for humans and applications for agents. Instead we’ve ended up with everything mixed together. This dual-use model, where sites serve both humans and agents through the same interface, feels inefficient, and it probably is. But it’s almost certainly a transition. By mixing the two audiences together, I think we’ve guaranteed a future where the purposes of websites separate again. Because someone is already charging rent for that second audience.

You can imagine a future where humans look at the web for information, but agents act on the web. A future where the idea of going to a site and manually booking a flight has become as outdated as writing a cheque with a pen. On paper, like a savage.

Because it’s not just the preview of WebMCP that’s signalling that things are changing. Within days of Google’s announcement, Cloudflare shipped something that tells the same story from a completely different angle. Their new “Markdown for Agents” feature converts standard HTML pages into clean markdown when agents request them, stripping out scripts, layout code, and formatting overhead.
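I haven’t dug into the exact mechanics Cloudflare ships, but the underlying idea is plain HTTP content negotiation: the same URL serves HTML to a browser and markdown to an agent, keyed off the request. A minimal sketch of that pattern, where the `Accept` header switch is my assumption about the mechanism rather than Cloudflare’s actual implementation:

```typescript
// Illustrative sketch of edge-side content negotiation: serve markdown
// to agents and HTML to browsers from the same URL. The Accept-header
// check is an assumption, not Cloudflare's implementation.

type Page = { title: string; body: string };

function wantsMarkdown(acceptHeader: string | null): boolean {
  // An agent that prefers markdown can say so in its Accept header.
  return acceptHeader?.includes("text/markdown") ?? false;
}

function render(page: Page, acceptHeader: string | null): { contentType: string; body: string } {
  if (wantsMarkdown(acceptHeader)) {
    // Strip the layout entirely: just the content, as markdown.
    return { contentType: "text/markdown", body: `# ${page.title}\n\n${page.body}\n` };
  }
  // Humans get the full HTML page, scripts and styling included.
  return {
    contentType: "text/html",
    body:
      `<!doctype html><html><head><title>${page.title}</title></head>` +
      `<body><h1>${page.title}</h1><p>${page.body}</p></body></html>`,
  };
}
```

The point of the pattern is that nothing about the content changes, only the overhead: the markdown branch is what’s left once the scripts, layout, and formatting are gone.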

WebMCP is the active side: websites declaring what they can do for agents. Cloudflare’s move is the passive side: websites optimising how they present themselves to agents. One is about capability, the other is about content. But they’re both the same fundamental acknowledgement: the web’s audience has forked, and the infrastructure is being rebuilt to serve both parts.

The Cloudflare move is arguably more telling, because it’s not a specification proposal or a developer trial. It’s a CDN company that handles a significant portion of global web traffic, making a commercial decision to treat agents as a first-class consumer right now, at the network edge. That’s not aspirational. That’s operational. And when both the browser makers and the CDN companies start treating agents as a primary audience, that’s a point of inflection.

There’s a historical echo here that I find hard to ignore. Twenty-five years ago, Tim Berners-Lee published his vision for the Semantic Web in Scientific American. It was a web where software agents would navigate machine-readable pages, automating tasks and exchanging structured data. It was a beautiful idea. It required RDF, OWL, and SPARQL, and the barrier to entry was so high that almost nobody bothered. The chicken-and-egg problem was lethal: not enough structured data to make agents useful, not enough agents to incentivise the markup.

WebMCP solves this problem through a fundamentally different mechanism. Instead of requiring universal ontologies, it leans on language models that can handle ambiguity and natural language descriptions. Instead of demanding new markup languages, it wraps existing JavaScript business logic. The barrier to entry is orders of magnitude lower this time around.

The semantic web’s dream of machine-readable services is finally achievable. Not through perfect structure, but through good-enough structure interpreted by capable models. The machines got smarter, so the markup can be simpler.

For the kinds of companies Negroni typically works with, vertically integrated businesses that own both hardware and the software that drives that hardware, WebMCP and the coming second audience create an opportunity.

A company that owns the full stack, from sensor to dashboard to service, can declare its entire capability surface as agent-accessible tools. An industrial monitoring company doesn’t just expose data through an API, it exposes actions: “adjust this threshold,” “schedule this maintenance,” “order this replacement part.” The integration is end-to-end because the company owns every layer. A pure-software company can’t do this without hardware partnerships. A pure-hardware company can’t do this without building the web layer. Vertical integration becomes an architectural advantage in the agent era, not just a business-model preference. The question becomes: which companies will own the capabilities that agents call, and which will just be plumbing?
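To make “capability surface” concrete, here’s what such a tool registry might look like for a hypothetical industrial monitoring company. Every name, parameter, and schema below is invented for illustration; the point is that each entry is an action an agent can take, not just a data feed it can read.

```typescript
// Hypothetical capability surface for a vertically integrated monitoring
// company: each entry is an action, not just a data feed. All names and
// parameters are invented for illustration.

type ToolDescriptor = {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
};

const capabilitySurface: ToolDescriptor[] = [
  {
    name: "adjust-threshold",
    description: "Change the alert threshold for a named sensor.",
    inputSchema: {
      type: "object",
      properties: {
        sensorId: { type: "string" },
        threshold: { type: "number", description: "New threshold, in the sensor's native units" },
      },
      required: ["sensorId", "threshold"],
    },
  },
  {
    name: "schedule-maintenance",
    description: "Book a maintenance visit for a piece of monitored equipment.",
    inputSchema: {
      type: "object",
      properties: {
        assetId: { type: "string" },
        date: { type: "string", description: "Requested date, YYYY-MM-DD" },
      },
      required: ["assetId", "date"],
    },
  },
  {
    name: "order-replacement-part",
    description: "Order a replacement part for a failed component.",
    inputSchema: {
      type: "object",
      properties: {
        partNumber: { type: "string" },
        quantity: { type: "integer" },
      },
      required: ["partNumber", "quantity"],
    },
  },
];
```

Only a company that owns every layer can back all three of those descriptors end to end: the sensor data, the scheduling system, and the parts inventory.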

But there are things that give me pause.

First, security. The WebMCP spec documentation explicitly acknowledges that prompt injection is unsolved. Cross-site agent attacks, where a compromised agent carries tainted instructions from a malicious site to a legitimate site’s tools, represent an entirely new threat surface. The Chrome team is working on it, but “working on it” is not the same as “solved”. Adoption will stall until this is credibly addressed.

Next, incentives. Websites that currently control the full user journey have strong reasons not to expose structured interfaces to agents. If an agent can call your search tool directly, it can also skip your up-sell page, your promotional banner, and your carefully designed conversion funnel. That’s not speculation, that’s already happened.

Finally, there’s the browser coverage question. Mozilla and Apple have given no signal on WebMCP adoption. This is currently a Chromium-ecosystem initiative, which means significant reach but not universality. A standard that only works in two-thirds of browsers is a standard with an asterisk. We saw this before with Bluetooth LE, and what I used to call the “fifty-percent problem”, before Apple finally gave in and shipped an iPhone with support for BLE outside of the MFi Program.

But despite this, I do keep thinking about the FaceTime comparison. Because the lesson has nothing to do with video calling: it’s about recognition. I spent years working on the underlying technology, watched it fail repeatedly, and still didn’t recognise the moment it crossed the threshold. The capability was the same. The friction was different. And the difference between “technically possible” and “everyone uses it” turns out to be entirely about friction.

WebMCP reduces the friction of agent-web interaction by roughly an order of magnitude. Cloudflare’s Markdown for Agents reduces the friction of agent content consumption by more than three quarters. Both shipped this month, both were quiet announcements. Neither made the front page.

I’ve been wrong about the timing of these things before. However, the infrastructure for agent-mediated web interaction is being built into two of the largest content delivery systems on Earth, the dominant browser and one of the dominant CDNs, right now, while most companies are still debating whether to build their first MCP server.

The lesson from every previous platform shift is the same: the companies that build for the new interface layer early capture disproportionate value. But the inflection points you don’t notice are the ones that matter most, and like every inflection point I’ve lived through so far, whether I’m right this time around will only be obvious in retrospect.
