The Highest Point of Leverage in Claude Code
by ray-amjad
Your CLAUDE.md file has a finite instruction budget — stop adding to it and start removing. Every model upgrade should trigger an audit of what to delete, not what to add.
by onur-uzunismail
The most effective coding agent isn't the one with the most features — it's the one that wastes the fewest tokens. Pi's radical minimalism exposes how bloated our baseline tools really are.
by alexander-opalic
Claude Code's agent teams upgrade the subagent workflow from star topology to mesh — agents can now message each other, coordinate through shared task lists, and collaborate in real time. Three real sessions show the patterns in action.
by playwright
Playwright CLI saves 4x tokens over MCP by writing browser data to files instead of piping it into the LLM context, making it the better choice for coding agents that can read from disk.
by ray-amjad
Claude Code's new agent teams combine a shared task list with an inter-agent messaging system—two simple primitives that unlock parallel collaboration, persistent teammates, and real-time coordination between sub-agents.
by claire-vo, john-lindquist
Pre-loaded context via mermaid diagrams and automated stop hooks are the two highest-leverage investments senior engineers can make to get reliable output from AI coding tools.
by boris-cherny
Productivity with Claude Code comes from parallel worktrees, plan-first workflows, self-evolving CLAUDE.md files, and skills—multiply your throughput by running more concurrent sessions.
by u-other-tune-947
Custom subagents with explicit model assignments and parallel execution now work in VS Code Insiders—conserving 95% of the context window by delegating tasks to specialized agents.
by microsoft
Teaching AI about your codebase through context engineering makes Copilot suggestions more accurate — this hands-on workshop demonstrates the approach using .NET 10 and Blazor.
by jude-gao
Embedding documentation directly in AGENTS.md files achieves 100% eval pass rates while skills fail 56% of the time—passive context beats active tool invocation for teaching agents framework knowledge.
by agentic-ai-foundation
AGENTS.md provides AI coding agents with a predictable location for project-specific guidance, keeping READMEs human-focused while giving agents the detailed context they need.
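A minimal AGENTS.md is just markdown the agent reads from a predictable location; the project details below are illustrative, not from any real repository:

```markdown
# AGENTS.md

## Setup
- Install dependencies with `pnpm install`

## Testing
- Run `pnpm test` before committing; all tests must pass

## Conventions
- TypeScript strict mode; avoid `any`
- Commit messages follow Conventional Commits
```

The README stays human-focused; the agent-facing build, test, and style rules live here instead.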
by kieran-klaassen, trevan
Claude Code's native task system converges with Beads and compound engineering patterns, but the real opportunity lies in contextual intelligence—letting machines query for context rather than forcing artifacts into markdown files.
by langchain
The /remember command lets agents reflect on conversations and extract reusable knowledge—preferences, workflows, and skills—into persistent storage for future sessions.
by microsoft
Prompt files (.prompt.md) enable reusable, standardized AI workflows that load on demand—unlike custom instructions which apply globally.
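A prompt file is a markdown file (e.g. under `.github/prompts/`) with optional front matter; the task and paths below are illustrative sketches, not a definitive template:

```markdown
---
mode: 'agent'
description: 'Scaffold a REST endpoint following project conventions'
---
Create a new REST endpoint. Follow the routing and validation patterns
used in `src/api/`, add input validation, and write a unit test alongside it.
```

Unlike global custom instructions, this workflow loads only when the prompt is explicitly invoked.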
by alexander-opalic
Workshop covering the transformation from LLM to Agent, context engineering, AGENTS.md, subagents, and skills in VS Code Copilot.
by ray-amjad
Claude Code's new task system—inspired by Beads—persists tasks to disk with dependency tracking, enabling multi-session orchestration and sub-agent parallelism without context window bloat.
by mattpocockuk, dex-horthy
Live conversation exploring practical approaches to AI-assisted coding, context engineering, and building reliable agents in complex codebases.
by onmax
Portable AI skills bring Nuxt, Vue, and NuxtHub expertise to coding assistants—skills activate based on file context, making agents domain-aware without manual prompting.
by indydevdan
Thread-based engineering provides a mental framework for measuring improvement with AI agents—scale by running more threads, longer threads, thicker threads, and fewer human checkpoints.
The Ralph Wiggum technique: a bash loop that runs AI agents autonomously, resetting context each iteration to stay in the 'smart zone'.
by geoffrey-huntley
Software development now costs $10.42/hour when running Ralph loops—the key is deterministically malloc'ing the array, starting with a screwdriver before grabbing the jackhammer.
by visual-studio-code
Agent Skills are portable instruction folders that load on demand, transforming AI agents from general assistants into domain-specific experts through scripts, examples, and specialized workflows.
by humanlayer-team
Systematic context management—through frequent intentional compaction and a Research-Plan-Implement workflow—enables productive AI-assisted development in complex production codebases.
by langchain
Context engineering—filling the context window with the right information at each step—determines agent performance more than model choice or complex frameworks.
by ryan-carson
Ralph solves the context window limitation by breaking work into independent iterations—each Amp session gets a fresh context, implements one story, and commits before the next iteration begins.
by dex-horthy
The Ralph Wiggum Technique evolved from a novel idea to a mainstream development methodology in under a year—early adopters learned that poor specs doom loops and iteration beats over-planning.
by geoffrey-huntley, dex-horthy
The official Anthropic Ralph plugin differs fundamentally from the original technique: outer orchestrators with full context resets produce deterministic outcomes, while inner-loop plugins with auto-compaction lose critical context.
by alexander-opalic
Ralph is a bash loop that feeds prompts to AI coding agents repeatedly—the key is one goal per context window, deliberate context allocation, and robust feedback loops.
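The loop these summaries describe can be sketched in a few lines. Everything here is a placeholder: `PROMPT.md` is a hypothetical prompt file, and `echo` stands in for a real agent CLI (e.g. `claude -p`) so the sketch runs without one:

```python
import subprocess

PROMPT_FILE = "PROMPT.md"   # hypothetical prompt file holding the backlog
AGENT_CMD = ["echo"]        # stand-in for a real agent CLI, e.g. ["claude", "-p"]

def ralph_loop(iterations: int) -> list[str]:
    """Run the agent repeatedly; each iteration is a fresh process,
    i.e. a fresh context window with exactly one goal."""
    outputs = []
    for _ in range(iterations):
        prompt = f"Read {PROMPT_FILE}, pick ONE unfinished story, implement it, commit."
        # A fresh process per iteration means a full context reset between stories.
        result = subprocess.run(AGENT_CMD + [prompt], capture_output=True, text=True)
        outputs.append(result.stdout.strip())
    return outputs
```

The design choice is the reset itself: state persists in the repo (commits, the prompt file), never in the agent's context.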
by dex-horthy
A practical framework for getting AI coding agents to work reliably in brownfield codebases through context engineering, intentional compaction, and the Research-Plan-Implement workflow.
by microsoft
A systematic approach to providing AI agents with targeted project information through custom instructions, planning agents, and structured workflows to improve code generation quality.
Techniques for providing AI agents and LLMs with optimized context
by xiwei-xu
Context engineering—not model fine-tuning—should be the central challenge for generative AI systems, solved through a Unix-inspired file system abstraction that treats all context components uniformly.
by anthropic
Context is a finite resource in LLM agents; treating tokens as precious budget rather than limitless capacity enables reliable long-horizon task completion.