Article · January 27, 2026

AGENTS.md Outperforms Skills in Vercel Agent Evals

Embedding documentation directly in AGENTS.md files achieves 100% eval pass rates, while skills go uninvoked in 56% of eval cases. Passive context beats active tool invocation for teaching agents framework knowledge.

Key Findings

Vercel tested how best to teach AI coding agents the Next.js 16 APIs. The results were decisive:

Approach                               Pass Rate
No docs (baseline)                        53%
Skills (default)                          53%
Skills with explicit instructions         79%
AGENTS.md docs index                     100%

Skills failed to be invoked in 56% of eval cases, even when available. Different instruction wordings produced dramatically different results—a fragility problem.

Why Passive Context Wins

Three factors explain why embedded documentation outperforms skills:

  1. No decision point required — information stays available without the agent needing to decide to fetch it
  2. Consistent availability — present in every turn's system prompt
  3. No sequencing complications — avoids the "explore first vs invoke first" dilemma
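The contrast between the two delivery mechanisms can be sketched in code. This is a hypothetical illustration, not Vercel's eval harness: `buildSystemPrompt`, `Skill`, and `answerWithSkill` are invented names.

```typescript
// Passive context: the docs index is concatenated into the system
// prompt, so it is present on every turn with no decision required.
const docsIndex =
  "[Next.js Docs Index]|root: ./.next-docs" +
  "\n|IMPORTANT: Prefer retrieval-led reasoning over pre-training-led reasoning";

function buildSystemPrompt(basePrompt: string, index: string): string {
  return `${basePrompt}\n\n${index}`;
}

// Active invocation: a skill only helps if the model chooses to call
// it. That decision point is where 56% of eval cases fell through.
type Skill = (query: string) => string | undefined;

function answerWithSkill(skill: Skill, query: string): string {
  const docs = skill(query); // the agent may simply never make this call
  return docs ?? "(answered from pre-training alone)";
}
```

The passive path has no failure mode analogous to the skill never firing: the index is in the prompt whether or not the model "wants" it.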

Implementation

Vercel compressed 40KB of documentation into an 8KB index using pipe-delimited formatting:

[Next.js Docs Index]|root: ./.next-docs
|IMPORTANT: Prefer retrieval-led reasoning over pre-training-led reasoning

The result: an 80% size reduction while maintaining the 100% eval pass rate.
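The compression idea can be sketched as follows. The `DocEntry` shape and field names are assumptions for illustration, not Vercel's actual index format; only the header line mirrors the snippet above.

```typescript
// Hypothetical sketch of pipe-delimited compression: keep titles and
// file paths, drop the prose, one entry per line.
interface DocEntry {
  title: string;
  path: string; // relative to the docs root
}

function buildIndex(root: string, entries: DocEntry[]): string {
  const header = `[Next.js Docs Index]|root: ${root}`;
  const body = entries.map((e) => `|${e.title}: ${e.path}`).join("\n");
  return `${header}\n${body}`;
}

const index = buildIndex("./.next-docs", [
  { title: "caching", path: "app/caching.md" },
  { title: "server-actions", path: "app/server-actions.md" },
]);
// Each doc costs a handful of bytes in the prompt instead of its full
// page text; the agent retrieves the file on disk only when needed.
```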

Quick Setup

npx @next/codemod@canary agents-md

This detects your Next.js version, downloads matching documentation to .next-docs/, and injects the compressed index into AGENTS.md.
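After the codemod runs, the repository layout might look like this (a sketch based on the description above; the codemod's actual output is not reproduced here):

```
my-app/
├── AGENTS.md        ← compressed pipe-delimited docs index injected here
├── .next-docs/      ← version-matched Next.js documentation files
└── ...
```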

Implications

The research suggests framework maintainers should provide AGENTS.md snippets rather than waiting for skill-based approaches to mature. For developers using AI coding tools, passive context currently delivers more reliable results than relying on agent tool invocation.
