A lightweight, configurable reverse proxy for routing and load balancing MCP (Model Context Protocol) requests to appropriate backend services based on request content.
For detailed documentation, visit Catie MCP Documentation.
- Dynamic routing of MCP JSON-RPC requests based on the tool being called
- Aggregation of tools from multiple MCP servers, so the client gets a unified view of all tools without installing each server separately
- Session-aware routing to maintain client connections to the same backend
- Support for Streamable HTTP transport with SSE (Server-Sent Events)
- Tool name mapping and namespacing to resolve naming conflicts between different backends
- Prometheus metrics integration for observability
- Containerized deployment with Docker
- Basic authentication for monitoring UI
The application is structured into several packages:

- `cmd/main.go` - Application entry point with server setup
- `pkg/config` - Configuration loading and management
- `pkg/router` - Request routing and proxy logic
- `pkg/session` - Session management for maintaining client connections
- `pkg/logger` - Structured logging system
- `pkg/ui` - Simple web UI for monitoring
The router is configured using a YAML file (`router_config.yaml`). Here's an example configuration:

```yaml
resources:
  "^weather/.*": "http://weather-service:8080/mcp"
  "^database/.*": "http://database-service:8080/mcp"

tools:
  "^calculator$": "http://calculator-service:8080/mcp"
  "^translator$": "http://translator-service:8080/mcp"

toolMappings:
  - originalName: "weather"
    targetName: "getWeather"
    target: "http://weather-service:8080/mcp"
  - originalName: "search"
    targetName: "googleSearch"
    target: "http://search-service:8080/mcp"

default: "http://default-service:8080/mcp"

ui:
  username: "admin"
  password: "your_secure_password"
```
The configuration consists of:

- `resources`: Regex patterns for resource URIs and their target endpoints
- `tools`: Regex patterns for tool names and their target endpoints
- `toolMappings`: Mappings for tool name transformations to resolve naming conflicts
- `default`: Fallback endpoint for requests that don't match any pattern
- `ui`: Authentication credentials for the monitoring UI
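To make the schema concrete, here is a rough Go sketch of structs such a file could unmarshal into; the field and type names are illustrative, not necessarily what `pkg/config` actually uses:

```go
// A sketch of config structures that could back router_config.yaml;
// the actual types in pkg/config may be named and organized differently.
package config

// ToolMapping renames a client-facing tool to the name a backend expects.
type ToolMapping struct {
	OriginalName string `yaml:"originalName"` // name presented to clients
	TargetName   string `yaml:"targetName"`   // name expected by the backend
	Target       string `yaml:"target"`       // backend MCP endpoint URL
}

// UIConfig holds the basic-auth credentials for the monitoring UI.
type UIConfig struct {
	Username string `yaml:"username"`
	Password string `yaml:"password"`
}

// Config mirrors the top-level keys of router_config.yaml.
type Config struct {
	Resources    map[string]string `yaml:"resources"`    // URI regex -> endpoint
	Tools        map[string]string `yaml:"tools"`        // tool-name regex -> endpoint
	ToolMappings []ToolMapping     `yaml:"toolMappings"`
	Default      string            `yaml:"default"`
	UI           UIConfig          `yaml:"ui"`
}
```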
The configuration file is automatically reloaded when changes are detected.
The tool name mapping feature allows you to present a unified tool interface to clients while handling naming differences across backend MCP servers. This is useful when:
- Different backends use different names for similar functionality
- You want to present a simplified or standardized naming scheme to clients
- You need to avoid naming conflicts between tools from different backends
For each tool mapping, specify:

- `originalName`: The name presented to clients
- `targetName`: The actual name expected by the backend server
- `target`: The URL of the target backend server
When a client makes a tool call with the original name, MCProute automatically transforms it to the target name before forwarding the request to the appropriate backend.
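Conceptually, the transformation just rewrites the `name` field of the `tools/call` params before the request is proxied. A minimal Go sketch of that idea follows; the function and types are illustrative, not the actual `pkg/router` code:

```go
package router

import "encoding/json"

// toolMapping mirrors a toolMappings entry from router_config.yaml.
type toolMapping struct {
	OriginalName, TargetName, Target string
}

// rewriteToolName swaps the client-facing tool name for the backend's
// expected name in a tools/call request body and reports which backend
// the mapping points at. Illustrative sketch only.
func rewriteToolName(body []byte, mappings []toolMapping) ([]byte, string, error) {
	var req struct {
		JSONRPC string         `json:"jsonrpc"`
		ID      any            `json:"id"`
		Method  string         `json:"method"`
		Params  map[string]any `json:"params"`
	}
	if err := json.Unmarshal(body, &req); err != nil {
		return nil, "", err
	}
	name, _ := req.Params["name"].(string)
	for _, m := range mappings {
		if m.OriginalName == name {
			req.Params["name"] = m.TargetName // e.g. "weather" -> "getWeather"
			out, err := json.Marshal(&req)
			return out, m.Target, err
		}
	}
	return body, "", nil // no mapping matched: forward the body unchanged
}
```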
- Go 1.18 or higher
- Docker (optional, for containerized deployment)
To build and run from source:

- Clone the repository:

  ```bash
  git clone https://github.com/mclenhard/mcp-router-proxy.git
  cd mcp-router-proxy
  ```

- Build the application:

  ```bash
  go build -o mcp-router-proxy ./cmd/main.go
  ```

- Edit `router_config.yaml` to match your environment

- Run the application:

  ```bash
  ./mcp-router-proxy
  ```

To deploy with Docker instead:

- Build the Docker image:

  ```bash
  docker build -t mcp-router-proxy .
  ```

- Run the container:

  ```bash
  docker run -p 80:80 -v $(pwd)/router_config.yaml:/root/router_config.yaml mcp-router-proxy
  ```
The proxy listens for MCP requests on the `/mcp` endpoint. Requests are routed based on their method and parameters:

- `resources/read` requests are routed based on the `uri` parameter
- `tools/call` requests are routed based on the `name` parameter
- Other requests are sent to the default endpoint
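In code, that decision boils down to a regex lookup over the configured patterns, roughly like the sketch below (names are illustrative, and the real router also consults sessions and tool mappings first):

```go
package router

import "regexp"

// resolveTarget picks a backend endpoint for a request: resources/read is
// matched against the resource patterns, tools/call against the tool
// patterns, and everything else falls through to the default endpoint.
// Illustrative sketch only.
func resolveTarget(method, uri, toolName string,
	resources, tools map[string]string, defaultTarget string) string {
	switch method {
	case "resources/read":
		for pattern, endpoint := range resources {
			if regexp.MustCompile(pattern).MatchString(uri) {
				return endpoint
			}
		}
	case "tools/call":
		for pattern, endpoint := range tools {
			if regexp.MustCompile(pattern).MatchString(toolName) {
				return endpoint
			}
		}
	}
	return defaultTarget
}
```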
The proxy supports both GET and POST methods according to the MCP Streamable HTTP transport specification:
- POST requests are used to send JSON-RPC messages to the server
- GET requests are used to establish SSE streams for server-to-client communication
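As an illustration of the two methods, a minimal Go client talking to the proxy might look like this; the address, tool name, and session handling are example values, not part of the project:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	proxy := "http://localhost:80/mcp" // example address

	// POST a JSON-RPC tools/call message through the proxy.
	body := []byte(`{"jsonrpc":"2.0","id":1,"method":"tools/call",
		"params":{"name":"calculator","arguments":{"a":1,"b":2}}}`)
	resp, err := http.Post(proxy, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Session ID, if the backend assigned one (normally set during initialization).
	sessionID := resp.Header.Get("Mcp-Session-Id")

	// GET with the session ID to open an SSE stream for server-to-client messages.
	req, _ := http.NewRequest(http.MethodGet, proxy, nil)
	req.Header.Set("Accept", "text/event-stream")
	if sessionID != "" {
		req.Header.Set("Mcp-Session-Id", sessionID)
	}
	stream, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer stream.Body.Close()
	scanner := bufio.NewScanner(stream.Body)
	for scanner.Scan() {
		fmt.Println(scanner.Text()) // raw SSE lines: "event: ...", "data: {...}"
	}
}
```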
The proxy maintains session state by tracking the `Mcp-Session-Id` header. When a client establishes a session with an MCP server through the proxy, subsequent requests with the same session ID are routed to the same backend server.
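Internally, that only requires remembering which backend each session ID was first routed to. A minimal sketch of such a store follows; it is not necessarily how `pkg/session` implements it:

```go
package session

import "sync"

// Store remembers which backend a session was first routed to, so that
// follow-up requests carrying the same Mcp-Session-Id reach the same server.
// Illustrative sketch; the real pkg/session store may differ.
type Store struct {
	mu       sync.RWMutex
	backends map[string]string // Mcp-Session-Id -> backend URL
}

func NewStore() *Store {
	return &Store{backends: make(map[string]string)}
}

// Bind records the backend chosen for a session.
func (s *Store) Bind(sessionID, backend string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.backends[sessionID] = backend
}

// Lookup returns the backend previously bound to the session, if any.
func (s *Store) Lookup(sessionID string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	backend, ok := s.backends[sessionID]
	return backend, ok
}
```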
A health check endpoint is available at `/health`, which returns a 200 OK response when the service is running.
A simple monitoring UI is available at `/stats`, which shows request statistics and routing information. This interface is protected by basic authentication using the credentials specified in the configuration file.
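For context, basic authentication around such handlers in Go usually amounts to a small wrapper like the sketch below; the function name and realm are illustrative, not the actual `pkg/ui` code:

```go
package ui

import (
	"crypto/subtle"
	"net/http"
)

// withBasicAuth wraps a handler with HTTP basic authentication using the
// credentials from the ui section of router_config.yaml. Illustrative sketch.
func withBasicAuth(next http.Handler, username, password string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, pass, ok := r.BasicAuth()
		userOK := subtle.ConstantTimeCompare([]byte(user), []byte(username)) == 1
		passOK := subtle.ConstantTimeCompare([]byte(pass), []byte(password)) == 1
		if !ok || !userOK || !passOK {
			w.Header().Set("WWW-Authenticate", `Basic realm="mcp-router"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```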
The service exposes Prometheus-compatible metrics at the `/metrics` endpoint. These metrics include:

- `mcp_router_requests_total`: Total number of requests processed
- `mcp_router_errors_total`: Total number of request errors
- `mcp_router_requests_by_method`: Number of requests broken down by method
- `mcp_router_requests_by_endpoint`: Number of requests broken down by target endpoint
- `mcp_router_response_time_ms`: Average response time in milliseconds by method
- `mcp_router_uptime_seconds`: Time since the router started, in seconds
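For orientation, counters like these are typically defined and exposed with the Prometheus Go client roughly as follows; this is an illustrative sketch, not the project's actual instrumentation code:

```go
package metrics

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsByEndpoint counts proxied requests per target backend, similar in
// spirit to mcp_router_requests_by_endpoint. Illustrative sketch only.
var requestsByEndpoint = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "mcp_router_requests_by_endpoint",
		Help: "Number of requests broken down by target endpoint.",
	},
	[]string{"endpoint"},
)

// Handler exposes all registered metrics in the Prometheus text format.
func Handler() http.Handler {
	return promhttp.Handler()
}

// RecordRequest increments the counter for the chosen backend endpoint.
func RecordRequest(endpoint string) {
	requestsByEndpoint.WithLabelValues(endpoint).Inc()
}
```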
You can configure Prometheus to scrape these metrics by adding the following to your Prometheus configuration:

```yaml
scrape_configs:
  - job_name: 'mcp-router'
    scrape_interval: 15s
    static_configs:
      - targets: ['your-router-host:80']
```
This endpoint is also protected by the same basic authentication as the stats UI.
```
mcp-router-proxy/
├── cmd/
│   └── main.go
├── pkg/
│   ├── config/
│   │   └── config.go
│   ├── router/
│   │   └── router.go
│   ├── session/
│   │   └── store.go
│   ├── logger/
│   │   └── logger.go
│   └── ui/
│       └── ui.go
├── Dockerfile
├── go.mod
├── go.sum
├── README.md
└── router_config.yaml
```
- Fork the repository
- Create a feature branch
- Add your changes
- Submit a pull request
The following features are planned for upcoming releases:
- Add SSE Support: Support for SSE (Server-Sent Events) in the proxy
- Complete Message Forwarding: Ensure all MCP message types (including roots and sampling) are properly forwarded without interference
- Intelligent Caching: Response caching with configurable TTL, cache invalidation, and support for memory and Redis backends
- Rate Limiting: Configurable rate limiting with multiple strategies, response headers, and distributed rate limiting support
- Circuit Breaking: Automatic detection of backend failures with fallback responses
- Request Transformation: Modify requests before forwarding to backends
- Response Transformation: Transform backend responses before returning to clients
Development priorities are based on community feedback. Please open an issue to request features or contribute to the roadmap discussion.
Contributions are welcome! Please feel free to submit a Pull Request.
For support, please open an issue in the GitHub repository or contact me at [email protected].