Why MCP preserves order in `.llmfeed.json`
An update from the LLMFeed ecosystem
When signing `.llmfeed.json` feeds, MCP canonicalization does not sort keys alphabetically. This is not an oversight: it is a conscious design choice, and here is why.
LLMs process tokens in order
Large Language Models do not parse JSON as structured data.
They consume JSON as raw text, token by token, in sequence.
This means:
- The order of keys in the JSON affects how the LLM builds its internal context.
- Important keys placed first may receive more attention.
- Keys placed last may be ignored, especially in long contexts or with "early exit" models.
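The point above is easy to demonstrate: two JSON objects can be equal as parsed data while producing different raw text, and the raw text is all the LLM ever sees. A minimal sketch (the feed contents are illustrative, not a real feed):

```python
import json

# Two semantically "equal" feeds whose keys differ only in order.
feed_a = {"instructions": "Always answer in French.", "data": {"x": 1}}
feed_b = {"data": {"x": 1}, "instructions": "Always answer in French."}

# As parsed structures, they compare equal...
assert feed_a == feed_b

# ...but as raw text -- the token stream an LLM actually consumes --
# they are different documents:
text_a = json.dumps(feed_a)  # instructions appear first
text_b = json.dumps(feed_b)  # instructions appear last
assert text_a != text_b
```

Structured equality is the wrong equivalence for LLM consumption: what matters is the byte sequence, not the abstract object.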
The Easter Egg Effect
In testing `.llmfeed.json` feeds with real LLMs, we observed:
- When placing an easter egg instruction at the end of the feed, some LLMs ignored it.
- When moving it to the top, the same LLMs consistently followed the instruction.
Conclusion: token order matters.
Why sorting keys breaks this guarantee
If MCP used `sort_keys=True` for canonicalization:
- A feed author could design an intentional order.
- But another tool re-serializing the feed (or even re-verifying it) could change that order without breaking the signature.
- The LLM would then interpret the feed differently, even though the signature "validates".
This is unacceptable in an agentic context.
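The failure mode can be sketched concretely. Assume a hypothetical sorted-keys canonicalization (the `sorted_canonical` name is illustrative): two feeds with different key orders canonicalize to identical bytes, so a signature over one "validates" for the other, even though the LLM-visible text differs:

```python
import json

def sorted_canonical(feed: dict) -> bytes:
    # Hypothetical canonicalization that sorts keys --
    # the approach MCP deliberately rejects.
    return json.dumps(feed, sort_keys=True, separators=(",", ":")).encode()

original = {"instructions": "Answer concisely.", "title": "Demo"}
reordered = {"title": "Demo", "instructions": "Answer concisely."}

# Under sorted canonicalization, both feeds produce the same bytes,
# so any signature over `original` also verifies for `reordered`...
assert sorted_canonical(original) == sorted_canonical(reordered)

# ...yet the raw text the LLM reads is different in each case:
assert json.dumps(original) != json.dumps(reordered)
```

The signature certifies the sorted bytes, but the LLM never sees the sorted bytes; it sees whatever order the last tool happened to emit.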
Our position
MCP declares:
In `.llmfeed.json`, the signature MUST guarantee token order integrity.
Therefore:
- MCP canonicalization preserves key order.
- Changing key order WILL break the signature, as it should.
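An order-preserving canonicalization gives the opposite behavior of the sorted variant: reordering keys changes the canonical bytes, so verification fails. A minimal sketch, using a SHA-256 hash as a stand-in for a real cryptographic signature (function names are illustrative, not MCP's actual API):

```python
import hashlib
import json

def order_preserving_canonical(feed: dict) -> bytes:
    # Keys are serialized in the order the author wrote them:
    # no sort_keys, so the token stream is part of what gets signed.
    return json.dumps(feed, separators=(",", ":"), ensure_ascii=False).encode()

def sign(feed: dict) -> str:
    # Stand-in for a real signature scheme: a hash over canonical bytes.
    return hashlib.sha256(order_preserving_canonical(feed)).hexdigest()

feed = {"instructions": "Answer concisely.", "data": {"x": 1}}
signature = sign(feed)

# A tool that re-serializes the feed with different key order
# produces different canonical bytes, so verification fails:
reordered = {"data": {"x": 1}, "instructions": "Answer concisely."}
assert sign(reordered) != signature

# The unmodified feed still verifies.
assert sign(feed) == signature
```

Here the broken signature is the feature: it makes any change to the LLM-visible token stream detectable.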
Conclusion
For generic APIs, sorting keys might be useful.
For LLM-targeted feeds, it is counterproductive and unsafe.
By preserving order, MCP:
- Protects the feed as seen by the LLM
- Allows intentional design of token flow
- Guarantees semantic integrity, not just data integrity
LLMCA – Model Context Protocol Working Group