Why MCP preserves order in `.llmfeed.json`

An update from the LLMFeed ecosystem

When signing `.llmfeed.json` feeds, MCP takes a deliberate stance: we do NOT sort keys during canonicalization.

This is not an oversight; it is a conscious design choice, and here is why.

LLMs process tokens in order

Large Language Models do not parse JSON as structured data.
They consume JSON as raw text, token by token, in sequence.

This means:

  • The order of keys in the JSON affects how the LLM builds its internal context.
  • Important keys placed first may receive more attention.
  • Keys placed last may be ignored, especially in long contexts or with "early exit" models.
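The effect is easy to see in the raw serialization. A minimal sketch with Python's `json` module (the feed fields here are hypothetical, for illustration only):

```python
import json

# Same data, different key order: the serialized text, and therefore the
# token sequence an LLM reads, differs only in where the instruction sits.
first = json.dumps({"instruction": "summarize briefly", "data": [1, 2, 3]})
last = json.dumps({"data": [1, 2, 3], "instruction": "summarize briefly"})

print(first)  # {"instruction": "summarize briefly", "data": [1, 2, 3]}
print(last)   # {"data": [1, 2, 3], "instruction": "summarize briefly"}
```

To a structured JSON parser these are identical objects; to a model reading text token by token, the instruction arrives at a very different point in the context.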

The Easter Egg Effect

In testing `.llmfeed.json` feeds, we observed the following:

  • When placing an easter egg instruction at the end of the feed, some LLMs ignored it.
  • When moving it to the top, the same LLMs consistently followed the instruction.

Conclusion: token order matters.

Why sorting keys breaks this guarantee

If MCP used `sort_keys=True`:

  • A feed author could design an intentional order.
  • But another tool re-serializing the feed (or even re-verifying it) could change that order without breaking the signature.
  • The LLM would then interpret the feed differently, even though the signature "validates".

This is unacceptable in an agentic context.
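This failure mode can be demonstrated directly. A minimal sketch, assuming the feed content below is illustrative: with sorted canonicalization, two feeds whose only difference is key order collapse to the same signed bytes.

```python
import json

# Two feeds with identical data but different key order: the instruction
# is placed first in one and last in the other.
feed_a = {"instruction": "read me first", "title": "demo", "body": "text"}
feed_b = {"title": "demo", "body": "text", "instruction": "read me first"}

# Sorted canonicalization erases the difference: both serialize to the
# same bytes, so a signature over those bytes validates for either order.
canon_a = json.dumps(feed_a, sort_keys=True, separators=(",", ":"))
canon_b = json.dumps(feed_b, sort_keys=True, separators=(",", ":"))
print(canon_a == canon_b)  # True: the signature cannot tell them apart
```

A tool could therefore reorder the author's keys, and the original signature would still verify, even though the token stream the LLM receives has changed.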

Our position

MCP declares:

In `.llmfeed.json`, the signature MUST guarantee token-order integrity.

Therefore:

  • MCP canonicalization preserves key order.
  • Changing key order WILL break the signature, as it should.
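The order-preserving approach can be sketched as follows. This is a simplified illustration, not MCP's full canonicalization procedure, and a SHA-256 hash stands in for the real signature:

```python
import hashlib
import json

def canonicalize(feed: dict) -> bytes:
    # Serialize without sorting: Python dicts (3.7+) preserve insertion
    # order, so the author's key order survives into the signed bytes.
    return json.dumps(feed, separators=(",", ":"), ensure_ascii=False).encode("utf-8")

feed = {"instruction": "read me first", "title": "demo"}
reordered = {"title": "demo", "instruction": "read me first"}

# Reordering the keys changes the canonical bytes, so any signature
# (approximated here by a hash) over them fails to verify, as intended.
print(hashlib.sha256(canonicalize(feed)).hexdigest() ==
      hashlib.sha256(canonicalize(reordered)).hexdigest())  # False
```

Under this scheme, a signature is a commitment to the exact token stream the author designed, not merely to an unordered set of key-value pairs.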

Conclusion

For generic APIs, sorting keys might be useful.
For LLM-targeted feeds, it is counterproductive and unsafe.

By preserving order, MCP:

✅ Protects the feed as seen by the LLM
✅ Allows intentional design of token flow
✅ Guarantees semantic integrity, not just data integrity


LLMCA, Model Context Protocol Working Group

Topics: #canonicalization #llm #llmfeed #mcp #signature