Web3 was coined back in 2014 to describe a decentralized, blockchain-powered web. After 11 years, its impact on the average web user remains virtually nonexistent. Compared to the recent impact of AI and LLMs, it's painfully clear that the Web3 moniker was wasted on crypto.
I'm now convinced the real Web 3.0 is emerging in the AI era, with Model Context Protocol (MCP) as its driving force.
For decades, we've built UIs around buttons, forms, and clicks to frame our intents within well-defined, structured schemas. Think about the last time you used Jira. Now, with LLMs, our intent can be derived and acted on without a strict definition. This transition represents a more profound change than cryptocurrency ever delivered, so it's fitting we re-assign "Web 3.0" to this sea change.
I believe MCP will emerge as the backbone of this new era. Just as REST APIs enabled Web 2.0 by standardizing how applications communicate with each other, MCP aims to standardize an API between LLMs and applications. MCP allows LLMs to turn natural language into actions without custom integrations across all your applications and services.
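Concretely, MCP messages are JSON-RPC 2.0, so "turning natural language into actions" comes down to the LLM emitting a structured `tools/call` request against a tool the server has advertised. Here's a minimal sketch of what such a request looks like on the wire; the tool name and arguments are illustrative, not from any real Jira MCP server:

```python
import json

# A hypothetical MCP "tools/call" request: the LLM has translated the user's
# intent ("file a bug for the login crash") into a structured JSON-RPC 2.0
# call. The tool and argument names are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {
            "project": "WEB",
            "summary": "Login page throws 500 on empty password",
            "issue_type": "Bug",
        },
    },
}

# Serialize for transport, then decode as the server would.
wire_message = json.dumps(request)
decoded = json.loads(wire_message)
print(decoded["method"], decoded["params"]["name"])
```

The key point is that the schema is generic: any MCP-aware client can produce this message for any MCP server, which is exactly the one-interface-fits-all property REST gave Web 2.0.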
The implications are enormous. Imagine a world where you can interact with all your apps within a ChatGPT-like interface, including Jira, Slack, Gmail, GitHub, etc. No more endless point-and-click. You simply express your intent, and the underlying technology handles the details while asking you for input and permission when required. This is the promise of MCP, and it's already gaining momentum with major players like OpenAI, Anthropic, and Google, all of whom are quickly building support for it into their models.
Before diving into why MCP matters, let's quickly revisit the web's previous eras:
The crypto community promised to bring about a new era of the web that was decentralized, permissionless, and centered around ownership. The original vision of Web 3.0 was ambitious — replace traditional web architecture with blockchain-based alternatives, shifting control from corporations to individuals.
They attempted to reinvent fundamental internet infrastructure: DNS would be replaced by ENS (Ethereum Name Service), cloud storage by decentralized alternatives like Filecoin, and even traditional finance by DeFi protocols. However, almost none of these blockchain replacements have found true mainstream adoption.
Despite billions in funding and endless hype cycles, the "ownership web" failed to materialize meaningfully. Average internet users haven't flocked to decentralized social media platforms, aren't storing their data on distributed networks, and certainly aren't managing their cryptographic keys. The idea that Web 3.0 would revolutionize ownership never seemed to take hold beyond a relatively small community of enthusiasts.
Ironically, AI is now redefining what ownership even means. With generative models trained on vast datasets that can produce code, images, text, and music mimicking existing styles, the very concept of digital ownership is being fundamentally challenged. Who owns an image created by an AI that learned from millions of human-created works? The boundaries are blurring in ways blockchain advocates never anticipated.
So, it's possible that crypto may still have a role to play in the AI-powered Web 3.0 era. Use cases like verifying AI outputs to combat deepfakes or tracking provenance via ownership logs come to mind. But even in the most optimistic scenario, crypto would simply be a minor implementation detail in a Web 3.0 wave dominated by AI-driven transformation.
Remember when this was the cutting edge of UI?
I'm increasingly finding that tools like Jira feel like Windows 95 after spending so much time using AI. They require multiple clicks, constant switching between keyboard and mouse, and numerous steps to accomplish something as simple as creating a TODO item. Clicking through forms, navigating project hierarchies, and wrestling with complex UIs are all likely to become artifacts of the Web 2.0 era.
In the not-too-distant future, we'll laugh at how inefficient this process was. Today we spend minutes manually filling in database fields in Jira rather than simply describing the tasks we want created. Similarly, we'll marvel at how many SaaS companies—essentially glorified programmable spreadsheets with point-and-click interfaces—were collectively worth trillions of dollars.
It follows that we'll use natural language to express intent, and applications like Jira, Salesforce, and other SaaS tools will handle the mundane button-clicking for us. These apps will transform into systems of record with resources and tools that both LLMs and humans can seamlessly interact with.
SaaS companies seem aware of this shift and are scrambling to add ChatGPT-like interfaces to their applications. But this assumes that point-and-click browsers will remain the default way to interact with the web. I don't believe that's true anymore.
My own behavior has shifted dramatically. I rarely use Google directly these days—instead, I go straight to ChatGPT, Perplexity, or similar AI tools. Traditional searching feels slow and clumsy compared to simply asking a question and receiving a detailed, contextual answer. The SEO gaming, over-fluffed articles, and incessant ads have become unbearable.
I'm not necessarily claiming that ChatGPT.com literally becomes the new browser. But something ChatGPT-like will: a chat-first interface with widgets and tools embedded within it—similar to how Claude can run web apps directly in its UI or open a document interface on demand.
REST was transformative for Web 2.0. It provided a standard, well-supported interface for application interaction. Today, virtually every application offers a developer section with well-maintained SDKs. Before REST, I remember having to write custom HTML scrapers and directly hit websites to build similar functionality.
The current approach to LLM-application interaction feels similarly custom and fragmented. Either LLM companies work on bespoke 1-1 integrations with partners, or developers build custom Python integrations for their own LLM apps or agents. It follows that something like REST would be transformative for LLMs, providing a generic interface that anyone could program against.
What makes MCP even more powerful than REST is that it delivers direct value to non-developer end users. Unlike Web 2.0, where a frontend had to wrap the REST API into a usable interface, in this new paradigm, you simply point your LLM chat app at the endpoint, and the interface already exists in text form.
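This "interface in text form" is literal: an MCP server answers a `tools/list` request with names, descriptions, and JSON Schemas that both an LLM and a human can read directly, with no frontend in between. A sketch of such a response (the tool shown is hypothetical):

```python
# A hypothetical MCP "tools/list" response: the server's entire interface is
# self-describing text and JSON Schema. Tool name and schema are illustrative.
response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "tools": [
            {
                "name": "search_issues",
                "description": "Full-text search over project issues.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# An LLM client consumes this directly; a human can read the same text.
summaries = [
    f"{tool['name']}: {tool['description']}"
    for tool in response["result"]["tools"]
]
print("\n".join(summaries))
```

Compare this with Web 2.0, where the same information lived behind a REST endpoint that only became usable to end users after someone built a frontend for it.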
For most people, the term "MCP" came to them by way of some LinkedIn AI influencer who's never built an MCP server beyond a trivial tutorial. Then came a mad dash to build an MCP API for every app in existence and throw it on HackerNews for quick exposure and to signal membership in the AI avant-garde. Every single one of these servers, including all the "official" ones, runs on stdio and requires you to manually copy-paste an access token in plaintext.
Many articles have been written about the security risks inherent in how MCP is currently being used. Put succinctly, most people are curling and exec'ing any random script they find online with "MCP" on it, then pasting their access tokens into it in plaintext.
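The pattern looks like this in a typical client configuration (the shape follows Claude Desktop's `claude_desktop_config.json`; the server package and env var are representative examples, and the token value is a placeholder):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<token pasted here in plaintext>"
      }
    }
  }
}
```

Note what this implies: the client downloads and executes arbitrary code on your machine, and a long-lived credential sits unencrypted in a local config file.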
MCP has an HTTP-based transport, but it's quite rough. The auth specification is currently being re-written since the last finalized spec (partially due to our last blog post about what MCP could do better). And none of the Python SDKs support any of the new functionality in the new spec.
One of the creators of MCP has reiterated their commitment to the stdio-based transport. We heartily disagree with this direction. The promise of MCP is the same as the promise of REST: it lets companies release general APIs on the web that both humans and LLMs can interact with. It is not to have users download random server code, exec it locally, and interact with it over stdio.
Companies need to begin hosting API endpoints on their own verified URLs so people can interact with them safely, without running random third-party code. MCP clients should make it easy for even non-technical users to interact with these endpoints, similar to how a browser lets a non-technical user go to a website.
As MCP evolves from a theoretical standard to production-ready infrastructure for the web, companies will naturally want to make their internal data accessible in this new paradigm. Thus far, enterprise data has only been connected to AI via custom integrations and manual pipelines. This is exactly the challenge Featureform was built to solve.
Featureform's MCP integration extends our semantic layer's capabilities beyond serving features for predictive models to enabling multi-step analytical workflows driven by AI. The integration connects your organization's data directly to AI applications through a secure, governed interface.
Our MCP integration will enable:
For example, when a user asks an AI assistant to build a financial plan, the agent uses Featureform to discover and access relevant user features like credit scores, income history, spending patterns, and regional economic indicators—all while respecting access controls and data governance policies.
The semantic layer that previously managed feature definitions and transformations now becomes a knowledge graph that AI systems can navigate to find exactly the data they need, when they need it. This eliminates the traditional bottlenecks where data access required custom pipelines or manual queries for each new use case.
This update aligns with the industry shift from isolated predictive models to integrated AI systems. Featureform's MCP interface reduces the technical barriers between data assets and the teams who need them.
This expansion of our vision is precisely why we've invested in making MCP enterprise-ready. While exploring MCP's potential, we quickly identified critical gaps between its promise and production reality. The security, scalability, and authentication challenges were significant barriers to adoption in real-world business environments.
To address these challenges, we're releasing MCPEngine, an open-source (MIT-licensed) project designed to bring enterprise-grade capabilities to the MCP ecosystem. MCPEngine consists of two key components:
MCP-Gateway — Since most LLM clients are currently hardcoded to work only with stdio, we built a lightweight Go-based proxy that bridges the gap between MCP's stdio and HTTP transports. When Claude or another LLM hits a protected resource, the gateway returns a standard HTTP 401 Unauthorized and initiates a familiar OAuth flow. The LLM simply displays a login link, and the gateway handles the rest transparently without requiring changes to Claude, ChatGPT, or other clients.
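The challenge half of that flow is plain HTTP: a protected endpoint answers unauthenticated requests with a 401 and a `WWW-Authenticate: Bearer` header (per RFC 6750), which is the signal a gateway or client uses to kick off the OAuth login. A minimal stdlib sketch of that behavior; the realm and authorization URL are placeholders, not MCPEngine's actual values:

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProtectedMCPHandler(BaseHTTPRequestHandler):
    """Toy protected resource: 401 + Bearer challenge when no token is sent."""

    def do_GET(self):
        if self.headers.get("Authorization", "").startswith("Bearer "):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"tool results would go here")
        else:
            # RFC 6750-style challenge; URL below is a placeholder.
            self.send_response(401)
            self.send_header(
                "WWW-Authenticate",
                'Bearer realm="mcp", authorization_uri='
                '"https://auth.example.com/authorize"',
            )
            self.end_headers()

    def log_message(self, *args):  # keep demo output quiet
        pass

# Run the server on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), ProtectedMCPHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

status, challenge = None, None
try:
    urllib.request.urlopen(url)  # no Authorization header -> expect 401
except urllib.error.HTTPError as err:
    status = err.code
    challenge = err.headers["WWW-Authenticate"]
finally:
    server.shutdown()

print(status, challenge)
```

Because the challenge is a standard HTTP mechanism, a gateway can intercept it and surface a login link to the user without the LLM client knowing anything about OAuth.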
MCPEngine Core — This fully-featured, spec-compatible MCP server framework implements native authentication and authorization. It allows developers to expose secure MCP endpoints without resorting to hacks or workarounds, making it possible to build production applications that leverage MCP's capabilities.
MCPEngine is currently the only Python MCP implementation that fully supports the emerging auth specification, with plans to add production essentials like OpenTelemetry integration, rate limiting, and audit logging.
AI will usher in the real Web 3.0 age, where we move from point-and-click SaaS to AI-powered intent engines. Apps like Jira become systems of record with proper MCP APIs that let users interact with them either manually via LLM chat apps or via agentic workflows.
But for this vision to become a reality, MCP needs to evolve from a stdio-first, local-only protocol to a robust, HTTP-native standard with proper authentication and security. The current implementation, while promising, has significant limitations that make it unsuitable for enterprise use cases.
That's why we've created MCPEngine—to bridge the gap between MCP's promise and its current limitations. We believe that MCP has the potential to transform how we interact with applications on the web, and with MCPEngine it becomes possible to build secure, production-grade MCP services. It will always remain open-source under MIT, and its sole purpose is to facilitate the uptake of MCP by more companies.
The real Web 3.0 isn't built on blockchains. It's built on language models, intent understanding, and protocols like MCP that allow seamless interaction between humans, LLMs, and the applications we use every day.
See what a virtual feature store means for your organization.