Description
What feature would you like to see?
Add MCP Client Sampling support to Codex, following the MCP Specification.
What this enables:
This allows MCP servers to call AI models through the CLI client, transforming simple MCP tools into intelligent agents that can reason and make decisions.
Practical Example - Background Code Style Checker:
While coding, you start an MCP style checker tool that returns immediately, then works silently in the background.
It monitors file saves and uses AI (via sampling) to analyze violations in context - understanding your project's conventions and complexity patterns.
The analysis runs asynchronously without blocking your workflow, using your existing OpenAI subscription and requiring no separate API keys.
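To make the example concrete, the style-checker server would send the client a JSON-RPC request using the MCP `sampling/createMessage` method. The sketch below shows the rough shape of such a request; the message text, model hint, and priority values are illustrative only, not part of any real tool.

```python
import json

# Hypothetical sampling request a background style-checker server might send
# to the client, following the MCP "sampling/createMessage" request shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Check this saved file for style violations.",  # illustrative
                },
            }
        ],
        "modelPreferences": {
            "hints": [{"name": "gpt-4o-mini"}],  # illustrative model hint
            "costPriority": 0.8,                 # cheap background work
            "speedPriority": 0.5,
            "intelligencePriority": 0.3,
        },
        "systemPrompt": "You are a code style reviewer.",
        "maxTokens": 400,
    },
}

# Serializes cleanly as a JSON-RPC message body.
payload = json.dumps(request)
```

The client (here, Codex) would route this to an OpenAI model under the user's existing subscription and return the completion to the server.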
Technical Requirements:
- Declare the `sampling: {}` capability when connecting to MCP servers
- Handle `sampling/createMessage` requests from MCP servers
- Support model preferences (hints, cost/speed/intelligence priorities)
- Forward requests to appropriate OpenAI models
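The requirements above could be wired together roughly as follows. This is a minimal sketch, not Codex's actual implementation: the function names, the hint-matching rule, and the fallback model names are all assumptions for illustration.

```python
def pick_model(prefs):
    """Map MCP modelPreferences to an OpenAI model name (illustrative mapping)."""
    # Honor the first hint that names an OpenAI model family.
    for hint in prefs.get("hints", []):
        name = hint.get("name", "")
        if name.startswith("gpt"):
            return name
    # Otherwise weigh the declared priorities: prefer a cheaper model
    # when cost matters more than intelligence (fallback names assumed).
    if prefs.get("costPriority", 0.0) >= prefs.get("intelligencePriority", 0.0):
        return "gpt-4o-mini"
    return "gpt-4o"


def handle_create_message(params):
    """Service a sampling/createMessage request (no real API call; sketch only)."""
    model = pick_model(params.get("modelPreferences", {}))
    # A real client would ask the user to approve the request (human-in-the-loop),
    # then forward params["messages"] to the chosen model and relay the reply.
    return {
        "role": "assistant",
        "model": model,
        "content": {"type": "text", "text": "<model response>"},
        "stopReason": "endTurn",
    }
```

The key design point is that model selection stays on the client side: servers only express preferences, and the user's configuration decides which model actually runs.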
Benefits:
- Users don't need separate API keys for each MCP server
- Users control which model is used (balancing cost vs intelligence)
- Enables context-aware MCP tools that can perform complex reasoning
- Provides feature parity with VS Code (which already supports this)
Additional information
This feature is part of the MCP specification and would unlock a new ecosystem of intelligent MCP tools for Codex users.
For reference, VS Code already supports this feature with GitHub Copilot, allowing MCP servers to leverage the user's existing AI subscription through sampling requests.