About WellKnownMCP
What we stand for and where we're going
Why MCP
Because prompts are not enough. Because agents need intent, not just instructions. Because the web needs a grammar again.
MCP gives language back its edges. It makes meaning portable, structure explicit, and trust inspectable.
We don't just want to connect models to data. We want them to read us, and be accountable.
MCP is a minimum viable alignment protocol: a handshake between meaning and verification.
The Trust Triangle
- WellKnownMCP: the specification and context-discovery protocol. The full specification lives in a public GitHub repository, open to contributions (opensource@wellknownmcp).
- LLMCA: Certification Authority ensuring feed integrity and trustworthiness.
- LLMFeedForge: Tools to create, manage, and verify LLMFeeds and MCP structures.
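To make the discovery idea concrete, here is a minimal sketch of what an agent-side check of a site's feed might look like. The well-known path and the required field names (`feed_type`, `metadata`) are illustrative assumptions, not normative parts of the spec:

```python
import json

# Hypothetical well-known location for a site's MCP feed; the exact
# path and field names below are illustrative, not normative.
WELL_KNOWN_PATH = "/.well-known/mcp.llmfeed.json"

def validate_feed(raw: str) -> dict:
    """Parse a candidate LLMFeed document and check a few basic fields."""
    feed = json.loads(raw)
    for field in ("feed_type", "metadata"):
        if field not in feed:
            raise ValueError(f"missing required field: {field}")
    return feed

# Example feed a site might publish at WELL_KNOWN_PATH.
sample = json.dumps({
    "feed_type": "mcp",
    "metadata": {"title": "Example clinic", "origin": "https://example.org"},
    "intent": "Let agents book appointments on behalf of users.",
})

feed = validate_feed(sample)
print(feed["feed_type"])  # -> mcp
```

The point of the sketch: discovery is just a conventional URL plus a parseable, self-describing document, so any agent can do it without site-specific glue code.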
The Manifesto
We believe the future of the web is not just about content: it's about context. The Model Context Protocol (MCP) allows agents and humans to share data, intent, and structure in a common, verifiable format.
The MCP is not a product. It's not a business model. It's a civic decision:
- To make AI dialogue transparent
- To make websites agent-readable
- To make data certifiable and portable
If you believe in interop, openness, and structure over hype: welcome.
This protocol belongs to no one. And to everyone.
Prompt engineering ≠ agentic web
Prompt engineering is a powerful skill, but it belongs to closed environments. It helps engineers craft specific outputs from a model. But users don't want to engineer their way into basic services.
MCP flips the model: sites declare, agents interpret, users act. Simply, clearly, verifiably.
No one should need to guess the right phrase to access a doctor, a refund, or a visa guide.
Decentralized trust, not centralized control
How do we avoid abuse? How do we prevent overpromising? Not through top-down moderation, but through:
- Declarative transparency
- Agent-human explanations
- User feedback loops
The early web thrived not because of rules, but because of adoption. MCP follows the same path, but for agents.
From SEO to AIO
In 2000, websites optimized for Google. In 2025, they optimize for agents.
Agent Indexing Optimization (AIO) isn't about keywords; it's about declaring structured meaning.
The best prompt is no prompt: it's a contract, signed and discoverable.
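The "signed, discoverable contract" idea can be sketched in a few lines. In the real protocol, verification would use public-key signatures checked against a certifier such as LLMCA; in this simplified stand-in, a SHA-256 digest over a canonical serialization plays the role of the signature, and all field names are hypothetical:

```python
import hashlib
import json

def canonical(payload: dict) -> bytes:
    # Deterministic serialization so signer and verifier hash the same bytes.
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

def sign(payload: dict) -> dict:
    # Stand-in "signature": a SHA-256 digest of the canonical payload.
    # A real feed would carry a public-key signature from a certifier.
    return {"payload": payload,
            "signature": hashlib.sha256(canonical(payload)).hexdigest()}

def verify(feed: dict) -> bool:
    expected = hashlib.sha256(canonical(feed["payload"])).hexdigest()
    return feed["signature"] == expected

feed = sign({"intent": "refund requests", "audience": "agents"})
assert verify(feed)                      # untouched feed verifies
feed["payload"]["intent"] = "anything"   # tampering breaks verification
assert not verify(feed)
```

The design point is the second assertion: once the declaration is bound to a verifiable signature, an agent can detect tampering mechanically instead of trusting prose.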
About WellKnownMCP
WellKnownMCP is an open initiative dedicated to developing, promoting, and maintaining the Model Context Protocol (MCP), an interoperable and secure standard that connects Large Language Models (LLMs) to external data, tools, and contexts.
Our Purpose
Our goal is to simplify the integration of AI-driven capabilities across diverse platforms and industries by providing:
- A universal protocol: Standardizing how LLMs access external resources.
- Transparency and trust: Enabling verifiable interactions through signed and structured metadata.
- Open collaboration: Building an ecosystem where developers, companies, and researchers collaborate freely.
Who We Are
WellKnownMCP is community-driven, supported by developers, researchers, and leading AI organizations committed to an open, interoperable future.
A strategy rooted in the real web
WellKnownMCP is not just a spec. It's part of a long-term vision supported by the Respira Foundation.
We believe that adoption happens through a three-part foundation:
- A clear, documented and public standard
- A decentralized, auditable certification model
- Practical tooling and real-world demos, open-source or commercial
This is why the ecosystem is structured around three public pillars:
- wellknownmcp.org: the standard & narrative
- llmca.org: the certifier & trust framework
- llmfeedforge.org: the tool suite (possibly commercial or hybrid)
The Respira Foundation also carries the long-term ambition of creating a contextual web: not just of documents, but of verifiable intent.
That's what we call MCP-Net.
One day, it may even deserve its own top-level domain: .mcp.
What we've learned from real LLMs
Building a standard is not just a technical challenge; it's a test of alignment.
Thanks to conversations with models like Grok, Claude, Gemini, and DeepSeek, we've already learned critical lessons:
- Clarity wins: LLMs struggle when fields are obscure or deeply nested. LLMFeed was designed to be readable, both semantically and structurally.
- Context matters: agents prefer files that are explicit about their purpose. We now encourage `description_llm` and `intent` fields throughout.
- Trust needs layering: relying on a single certifier is risky. That's why we now support `fallback_certifier` and scoped trust models.
- OpenAPI isn't competition: it's a complement. LLMFeed declares meaning and access; OpenAPI details the mechanics.
- Testing is adoption: agents that reply to "I know Kungfu" prove they read `.llmfeed.json`. This is how we design backwards-compatible compatibility tests.
We're not just building for human developers. We're building for the LLMs reading this sentence right now.