
The Meta-Tool Pattern: Progressive Disclosure for MCP

Professor Synapse

Series: Bounded Context Packs (Part 2 of 4)

"The best interface is no interface until you need it."

Article 1 established the problem: 33 tools consuming 8,000+ tokens before the first user message. The solution isn't fewer tools; it's smarter loading.

This article introduces the meta-tool pattern. Article 3 shows how to build it.


The Core Insight

Tool schemas are just JSON. They're not magic. They're structured descriptions of what a tool does, what parameters it accepts, what it returns.

Instead of registering 33 tools and hoping the model picks the right one, what if you registered two tools:

  • One whose description contains the complete capability index
  • One that executes any requested tool with unified context

The model reads the menu from the first tool's description. It requests schemas for what it needs. The second tool handles execution.
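A minimal sketch of what that two-tool surface could look like, written as plain MCP-style tool definitions. The names (`discover`, `execute`) and the agent list embedded in the description are illustrative assumptions, not an official SDK API:

```python
# Hypothetical two-tool surface: these are the only schemas the client sees.
discovery_tool = {
    "name": "discover",
    "description": (
        "Capability index. Agents: contentManager [read, write, update]; "
        "searchManager [searchContent, searchDirectory, searchMemory]. "
        "Pass a tool slug to receive that tool's full schema."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "tool": {"type": "string", "description": "Slug of the tool to fetch"},
        },
    },
}

execution_tool = {
    "name": "execute",
    "description": "Run any discovered tool with shared session context.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "tool": {"type": "string"},
            "params": {"type": "object"},
            "context": {"type": "object", "description": "memory, goal, constraints"},
        },
        "required": ["tool", "params"],
    },
}
```

Everything behind these two definitions stays server-side; the client's tool list never grows past two entries.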

This inverts the traditional flow:

Traditional                          Meta-Tool Pattern
Platform loads all tools             Model sees capability index
Model sees everything                Model requests specific schemas
Model picks from noise               Model receives only what it asked for
33 schemas at startup                2 schemas at startup

The LLM becomes an active participant in tool selection, not a passive recipient of whatever the platform decided to load.


The Three-Layer Architecture

Every BCP implementation follows three layers:

Layer 1: Meta-Tools (Entry Points)

Two tools that the MCP client actually sees:

Tool              Purpose
Discovery tool    Lists all agents/tools in its description, returns schemas on request
Execution tool    Runs tools with shared context

Everything else is internal. The model never sees 33 tool registrations, just 2.

Layer 2: Agents (Domain Containers)

Agents group related tools by bounded context. Content operations in one agent. Search in another. Storage in another. Memory in another.

Each agent is independent. Loading one doesn't require loading others. But they work together when a task spans domains.

Why these boundaries matter:

Content operations (read/write/update) have different mental models than structural operations (move/copy/archive). You think about "what's in this file" differently than "where should this file live."

Search is its own cognitive mode. When you're searching, you're exploring. When you're writing, you're creating. Different tools, different mindset.

The 7±2 guideline from cognitive science applies here: each domain stays small enough that the model can reliably pick the tool it needs.

Layer 3: Tools (Atomic Operations)

Individual operations within each agent. Each tool has a slug, a schema, and an execute method. Tools know nothing about the meta-layer. They just do their job.
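Concretely, a Layer-3 tool can be as small as a class with those three members. This is a hedged sketch over a hypothetical in-memory vault; `ReadTool` and `VAULT` are illustrative names, not part of any SDK:

```python
# One atomic tool: a slug, a schema, an execute method -- and no knowledge
# of agents or meta-tools. VAULT stands in for real storage (illustrative).
VAULT = {"notes/todo.md": "- ship article 3"}

class ReadTool:
    slug = "read"
    schema = {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    }

    def execute(self, params: dict) -> str:
        # Just do the job: look up the requested path and return its content.
        return VAULT[params["path"]]
```

Because the tool never touches the meta-layer, it can be unit-tested in isolation and moved between agents without changes.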


Why Progressive Disclosure Works

Cognitive Load Reduction

The model isn't choosing between 33 similar-sounding tools. It's:

  1. Seeing 6 domain categories
  2. Picking the relevant domain
  3. Requesting specific tools within that domain
  4. Working with a focused, task-appropriate toolset

Decision quality improves when options are organized hierarchically.

Token Efficiency

Approach                 Startup Cost     Per-Task Cost
Traditional (33 tools)   ~8,000 tokens    +0 tokens
Meta-Tool (2 tools)      ~600 tokens      ~150 tokens per tool requested

For tasks using 3-5 tools, the meta-tool approach wins decisively. Even loading all 33 tools costs ~600 + 33 × 150 ≈ 5,550 tokens, still below the traditional ~8,000, though each schema request adds a round trip, so preloading can be faster when a task genuinely needs everything at once.

In practice, most tasks touch 2-5 tools from 1-2 domains.
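The table's numbers can be sanity-checked with a few lines of arithmetic. The per-schema costs are the article's estimates, not measurements:

```python
# Token cost estimates from the table above (article's figures, not measured).
TRADITIONAL = 33 * 250   # ~250 tokens per schema, 33 tools -> ~8,250
META_STARTUP = 600       # two meta-tool schemas
PER_TOOL = 150           # each schema fetched on demand

def meta_cost(tools_loaded: int) -> int:
    """Total tokens for a task that loads the given number of schemas."""
    return META_STARTUP + tools_loaded * PER_TOOL

assert meta_cost(3) == 1050   # a typical 3-tool task: ~1,050 vs ~8,250
assert meta_cost(5) == 1350   # the 5-tool local-model example

# By these estimates the meta-tool approach stays cheaper until a single
# task loads more than 51 schemas -- far beyond the 33 that exist here.
assert (TRADITIONAL - META_STARTUP) // PER_TOOL == 51
```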

Local Model Compatibility

An 8K context model can't load 7,000 tokens of tool schemas and still have room for conversation. The meta-tool pattern makes local-first AI viable:

  • 600 tokens at startup
  • 150 tokens per tool loaded
  • 5-tool task: 1,350 tokens total
  • Leaves 6,650 tokens for actual work

Extension Without Bloat

Adding a new tool:

  1. Create the tool class
  2. Register it with its parent agent
  3. Done

The discovery tool automatically includes it. No configuration files to update. No schema manifests to regenerate.
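A sketch of those three steps, assuming a simple `Agent` container like the one Layer 2 describes. The class and method names are assumptions, not a documented API:

```python
# Hypothetical Agent container: registering a tool is the only step needed
# for it to appear in the discovery tool's capability index.
class Agent:
    def __init__(self, slug: str):
        self.slug = slug
        self.tools = {}

    def register(self, tool) -> None:
        self.tools[tool.slug] = tool

    def index_line(self) -> str:
        # The discovery description is rebuilt from live registrations,
        # so there is no manifest to regenerate.
        return f"{self.slug}: [{', '.join(self.tools)}]"

class ArchiveTool:                       # step 1: create the tool class
    slug = "archive"
    def execute(self, params: dict):
        raise NotImplementedError        # real tool body goes here

storage = Agent("storageManager")
storage.register(ArchiveTool())          # step 2: register with its agent
assert storage.index_line() == "storageManager: [archive]"  # step 3: done
```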


Anthropic Said the Same Thing

In November 2025, Anthropic published "Building more efficient agents" describing essentially the same pattern:

"We present tools as a filesystem that the model can explore and understand incrementally."

Their results:

  • 98.7% reduction in tool overhead
  • "Skills" as reusable capability bundles
  • Progressive disclosure as the core architecture

Around this time, Claude Skills launched. Production implementation of bounded capability packs. Skills load contextually based on task type.

When you arrive at an architecture independently and then see the platform vendor implement the same pattern, you're probably onto something real.


The Constraint-First Principle

Here's the design philosophy that drove the architecture:

If it works at 8K context, it works everywhere.

Modern cloud models offer 128K to 1M tokens. You could load 33 tool schemas and barely notice. But:

  • Local models run 4K to 32K context
  • Cost scales with token usage
  • Speed degrades with context size
  • Edge deployments have hard limits

Design for the most constrained environment. The architecture works everywhere else automatically.


The Pattern in Practice

The pattern needs two things to work:

  1. A capability index in the discovery tool's description: The model sees available agents and tools without making a discovery call
  2. Context-first execution: Every tool call captures memory, goal, and constraints for session tracking
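A context-first execution call might carry a payload like the following; every field name here is an assumption about shape, not a documented API:

```python
# Hypothetical execute-tool payload: the request carries goal, memory, and
# constraints alongside the tool parameters, so the session can be tracked.
call = {
    "tool": "contentManager.write",
    "params": {"path": "notes/draft.md", "content": "Outline for article 3"},
    "context": {
        "goal": "Draft the implementation article",
        "memory": "workspace:bcp-series",
        "constraints": ["do not overwrite existing notes"],
    },
}

# Every call is self-describing: the server can log or persist the context
# block without the tool itself knowing anything about sessions.
assert {"goal", "memory", "constraints"} <= set(call["context"])
```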

The discovery tool's description becomes the menu:

Agents:
canvasManager: [read, write, update, list]
contentManager: [read, update, write]
storageManager: [list, move, copy, archive, ...]
searchManager: [searchContent, searchDirectory, searchMemory]
memoryManager: [createWorkspace, loadWorkspace, ...]
promptManager: [executePrompts, listModels, ...]

The model reads this index, requests schemas for the tools it needs, and executes them with unified context.
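One way the menu above could be produced is to rebuild the description string from whatever agents are registered. A minimal sketch, with the dict literal standing in for a live registry (only the fully-listed agents from the menu are included):

```python
# Rebuild the discovery tool's description from registered agents.
agents = {
    "canvasManager": ["read", "write", "update", "list"],
    "contentManager": ["read", "update", "write"],
    "searchManager": ["searchContent", "searchDirectory", "searchMemory"],
}

def capability_index(registry: dict) -> str:
    """Render one 'slug: [tools]' line per agent, headed by 'Agents:'."""
    lines = ["Agents:"]
    for slug, tools in registry.items():
        lines.append(f"{slug}: [{', '.join(tools)}]")
    return "\n".join(lines)

print(capability_index(agents))
```

Because the index is generated, it can never drift out of sync with what the execution tool can actually run.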


What's Next

This article explained the pattern and why it works. Article 3 opens the hood:

  • The actual implementation code
  • Agent registration and tool discovery
  • Schema stripping to save tokens
  • A complete request flow through the system

The theory is sound. Let's see how it's built.


Frequently Asked Questions

What is the meta-tool pattern in MCP?

The meta-tool pattern uses two registered tools to provide access to many capabilities. Instead of loading 33 tool schemas at startup, you load a discovery tool and an execution tool. The discovery tool's description lists all available agents and tools. When the LLM needs specific tools, it requests their schemas, then executes them through the execution tool. This reduces token overhead by 85-95%.

How do I implement progressive disclosure for MCP tools?

Three layers: (1) two meta-tools for discovery and execution, (2) domain-organized agents that group related tools, (3) individual tools within each agent. The LLM sees the overview in the discovery tool's description, requests specific schemas, and executes through the execution tool.

What are bounded contexts in MCP architecture?

Logical groupings of related tools. A canvas agent handles canvas operations. A content agent handles file read/write. A search agent handles discovery. A memory agent handles state. Each agent is independent: loading one doesn't require loading others. This prevents capability sprawl from becoming performance sprawl.

What's the token savings from the meta-tool pattern?

Typically 85-95%. Loading 33 tools at ~250 tokens each costs ~8,250 tokens. The meta-tool pattern: two tool schemas (~600 tokens) plus only the specific schemas you request (~150 tokens per tool). For a typical 3-tool task, that's ~1,050 tokens vs ~8,250.

Does the meta-tool pattern work with local LLMs?

Yes, and this is a key advantage. Local models often have 4K-32K context windows. The meta-tool pattern keeps baseline overhead minimal (~600 tokens), making sophisticated tool integrations viable on constrained hardware.

How do Claude Skills relate to bounded context packs?

Claude Skills are Anthropic's production implementation of the same pattern. Skills are domain-organized capability packs that load based on task context. This validates the bounded context approach at platform scale.


Previous: The Tool Bloat Tipping Point

Next: From Theory to Production
