This library provides a lightweight wrapper that makes Anthropic Model Context Protocol (MCP) tools compatible with LangChain.js and LangGraph.js.
🔌 Transport Options
- Connect to MCP servers via stdio (local) or Streamable HTTP (remote)
- Streamable HTTP automatically falls back to SSE for compatibility with legacy MCP server implementations
- Support for custom headers in SSE connections for authentication
- Configurable reconnection strategies for both transport types
🔄 Multi-Server Management
- Connect to multiple MCP servers simultaneously
- Auto-organize tools by server or access them as a flattened collection
🧩 Agent Integration
- Compatible with LangChain.js and LangGraph.js
- Optimized for OpenAI, Anthropic, and Google models
- Supports rich content responses including text, images, and embedded resources
🛠️ Development Features
- Uses the `debug` package for debug logging
- Flexible configuration options
- Robust error handling

Install the package from npm:

npm install @langchain/mcp-adapters
The library allows you to connect to one or more MCP servers and load tools from them, without needing to manage your own MCP client instances.
import { MultiServerMCPClient } from "@langchain/mcp-adapters";
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
// Create client and connect to server
const client = new MultiServerMCPClient({
// Global tool configuration options
// Whether to throw on errors if a tool fails to load (optional, default: true)
throwOnLoadError: true,
// Whether to prefix tool names with the server name (optional, default: true)
prefixToolNameWithServerName: true,
// Optional additional prefix for tool names (optional, default: "mcp")
additionalToolNamePrefix: "mcp",
// Use standardized content block format in tool outputs
useStandardContentBlocks: true,
// Server configuration
mcpServers: {
// adds a STDIO connection to a server named "math"
math: {
transport: "stdio",
command: "npx",
args: ["-y", "@modelcontextprotocol/server-math"],
// Restart configuration for stdio transport
restart: {
enabled: true,
maxAttempts: 3,
delayMs: 1000,
},
},
// here's a filesystem server
filesystem: {
transport: "stdio",
command: "npx",
args: ["-y", "@modelcontextprotocol/server-filesystem"],
},
// Streamable HTTP transport example, with auth headers and automatic SSE fallback disabled (defaults to enabled)
weather: {
url: "https://example.com/weather/mcp",
headers: {
Authorization: "Bearer token123",
},
automaticSSEFallback: false,
},
// OAuth 2.0 authentication (recommended for secure servers)
"oauth-protected-server": {
url: "https://protected.example.com/mcp",
authProvider: new MyOAuthProvider({
// Your OAuth provider implementation
redirectUrl: "https://myapp.com/oauth/callback",
clientMetadata: {
redirect_uris: ["https://myapp.com/oauth/callback"],
client_name: "My MCP Client",
scope: "mcp:read mcp:write"
}
}),
// Can still include custom headers for non-auth purposes
headers: {
"User-Agent": "My-MCP-Client/1.0"
}
},
// how to force SSE for old servers known to only support SSE (Streamable HTTP falls back automatically when the server doesn't support it)
github: {
transport: "sse", // also works with "type" field instead of "transport"
url: "https://example.com/mcp",
reconnect: {
enabled: true,
maxAttempts: 5,
delayMs: 2000,
},
},
},
});
const tools = await client.getTools();
// Create an OpenAI model
const model = new ChatOpenAI({
modelName: "gpt-4o",
temperature: 0,
});
// Create the React agent
const agent = createReactAgent({
llm: model,
tools,
});
// Run the agent
try {
const mathResponse = await agent.invoke({
messages: [{ role: "user", content: "what's (3 + 5) x 12?" }],
});
console.log(mathResponse);
} catch (error) {
console.error("Error during agent execution:", error);
// Tools throw ToolException for tool-specific errors
if (error.name === "ToolException") {
console.error("Tool execution failed:", error.message);
}
}
await client.close();
The following example shows how you can manage your own MCP client and use it to get LangChain tools. These tools can be used anywhere LangChain tools are used, including with LangGraph prebuilt agents, as shown below.
The example below requires some prerequisites:
npm install @langchain/mcp-adapters @langchain/langgraph @langchain/core @langchain/openai
export OPENAI_API_KEY=<your_api_key>
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { loadMcpTools } from "@langchain/mcp-adapters";
// Initialize the ChatOpenAI model
const model = new ChatOpenAI({ modelName: "gpt-4" });
// Automatically starts and connects to an MCP reference server
const transport = new StdioClientTransport({
command: "npx",
args: ["-y", "@modelcontextprotocol/server-math"],
});
// Initialize the client
const client = new Client({
name: "math-client",
version: "1.0.0",
});
try {
// Connect to the transport
await client.connect(transport);
// Get tools with custom configuration
const tools = await loadMcpTools("math", client, {
// Whether to throw errors if a tool fails to load (optional, default: true)
throwOnLoadError: true,
// Whether to prefix tool names with the server name (optional, default: false)
prefixToolNameWithServerName: false,
// Optional additional prefix for tool names (optional, default: "mcp")
additionalToolNamePrefix: "mcp",
// Use standardized content block format in tool outputs
useStandardContentBlocks: true,
});
// Create and run the agent
const agent = createReactAgent({ llm: model, tools });
const agentResponse = await agent.invoke({
messages: [{ role: "user", content: "what's (3 + 5) x 12?" }],
});
console.log(agentResponse);
} catch (e) {
console.error(e);
} finally {
// Clean up connection
await client.close();
}
For more detailed examples, see the examples directory.
[!TIP] The `useStandardContentBlocks` option defaults to `false` for backward compatibility; however, we recommend setting it to `true` for new applications, as this will likely become the default in a future release.
When loading MCP tools, either directly through `loadMcpTools` or via `MultiServerMCPClient`, you can configure the following options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `throwOnLoadError` | `boolean` | `true` | Whether to throw an error if a tool fails to load |
| `prefixToolNameWithServerName` | `boolean` | `true` | If true, prefixes all tool names with the server name (e.g., `serverName__toolName`) |
| `additionalToolNamePrefix` | `string` | `"mcp"` | Additional prefix to add to tool names (e.g., `prefix__serverName__toolName`) |
| `useStandardContentBlocks` | `boolean` | `false` | See Tool Output Mapping; set `true` for new applications |
| `outputHandling` | `"content"`, `"artifact"`, or object | `resource` -> `"artifact"`, all others -> `"content"` | See Tool Output Mapping |
[!TIP] This section is important if you are working with multimodal tools, tools that produce embedded resources, or tools that produce large outputs that you may not want to be included in LLM input context. If you are writing a new application that only works with tools that produce simple text or JSON output, we recommend setting `useStandardContentBlocks` to `true` and leaving `outputHandling` undefined (it will use the defaults).
MCP tools return arrays of content blocks. A content block can contain text, an image, audio, or an embedded resource. The right way to map these outputs into LangChain `ToolMessage` objects can differ based on the needs of your application, which is why we introduced the `useStandardContentBlocks` and `outputHandling` configuration options.

The `useStandardContentBlocks` field determines how individual MCP content blocks are transformed into a structure recognized by LangChain ChatModel providers (e.g. `ChatOpenAI`, `ChatAnthropic`, etc.). The `outputHandling` field allows you to specify whether a given type of content should be sent to the LLM, or set aside for some other part of your application to use in a future processing step (e.g. to use a dataframe from a database query in a code execution environment).

In `@langchain/core` version 0.3.48 we introduced a new set of content block types that offer a standardized structure for multimodal inputs. As you might guess from the name, the `useStandardContentBlocks` setting determines whether `@langchain/mcp-adapters` converts tool outputs to this format. For backward compatibility with older versions of `@langchain/mcp-adapters`, it also determines whether tool message artifacts are converted. See the conversion rules below for more info.
[!IMPORTANT]
`ToolMessage.content` and `ToolMessage.artifact` will always be arrays of content block objects as described by the rules below, except in one special case. When the `outputHandling` option routes `text` output to the `ToolMessage.content` field and the only content block produced by a tool call is a `text` block, `ToolMessage.content` will be a `string` containing the text content produced by the tool.
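To make the special case concrete, here is a minimal sketch (the echo tool name and input are hypothetical, and `tools` is assumed to come from `client.getTools()` as in the earlier examples):

```ts
// Hypothetical example: a tool that returns a single text content block.
const echoTool = tools.find((t) => t.name.endsWith("echo"))!;

// Invoking with a ToolCall object makes LangChain return a full ToolMessage.
const toolMessage = await echoTool.invoke({
  type: "tool_call",
  id: "call_1",
  name: echoTool.name,
  args: { input: "hello" },
});

console.log(typeof toolMessage.content); // "string" -- single text block
// Any other block combination would yield an array of content block objects.
```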
When `useStandardContentBlocks` is `true` (recommended for new applications):

- Text: Returned as `StandardTextBlock` objects.
- Images: Returned as base64 `StandardImageBlock` objects.
- Audio: Returned as base64 `StandardAudioBlock` objects.
- Embedded Resources: Returned as `StandardFileBlock` objects, with a `source_type` of `text` or `base64` depending on whether the resource was text or binary. URI resources are fetched eagerly from the server, and the results of the fetch are returned following these same rules. We treat all embedded resource URIs as resolvable by the server, and we do not attempt to fetch external URIs.
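For orientation, here is a sketch of the rough shape of these standardized blocks (field names follow the standard content block types in `@langchain/core` >= 0.3.48; treat the exact fields as indicative rather than exhaustive):

```ts
// Indicative shapes only (a sketch, not the authoritative type definitions):
const textBlock = { type: "text", source_type: "text", text: "Hello" };
const imageBlock = {
  type: "image",
  source_type: "base64",
  mime_type: "image/png",
  data: "<base64-encoded bytes>",
};
const fileBlock = {
  type: "file",
  source_type: "base64", // "text" for text resources
  mime_type: "application/pdf",
  data: "<base64-encoded bytes>",
};
```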
When `useStandardContentBlocks` is `false` (default for backward compatibility):

- Tool outputs routed to `ToolMessage.artifact` (controlled by the `outputHandling` option):
  - Embedded Resources: Embedded resources containing only a URI are fetched eagerly from the server, and the results of the fetch operation are stored in the `artifact` array without transformation. Otherwise, embedded resources are stored in the `artifact` array in their original MCP content block structure without modification.
  - All other content types: Stored in the `artifact` array in their original MCP content block structure without modification.
- Tool outputs routed to the `ToolMessage.content` array (controlled by the `outputHandling` option):
  - Text: Returned as `MessageContentText` objects, unless it is the only content block in the output, in which case it is assigned directly to `ToolMessage.content` as a `string`.
  - Images: Returned as `MessageContentImageUrl` objects with base64 data URLs (`data:image/png;base64,<data>`).
  - Audio: Returned as `StandardAudioBlock` objects.
  - Embedded Resources: Returned as `StandardFileBlock` objects, with a `source_type` of `text` or `base64` depending on whether the resource was text or binary. URI resources are fetched eagerly from the server, and the results of the fetch are returned following these same rules. We treat all embedded resource URIs as resolvable by the server, and we do not attempt to fetch external URIs.
The `outputHandling` option allows you to determine which tool output types are assigned to `ToolMessage.content`, and which are assigned to `ToolMessage.artifact`. Data in `ToolMessage.content` is used as input context when the LLM is invoked, while `ToolMessage.artifact` is not.
By default, `@langchain/mcp-adapters` maps MCP `resource` content blocks to `ToolMessage.artifact`, and maps all other MCP content block types to `ToolMessage.content`. The value of `useStandardContentBlocks` determines how the structure of each content block is transformed during this process.
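Expressed as an `outputHandling` object, the default mapping is equivalent to:

```ts
// Default routing of MCP content block types to ToolMessage fields:
const defaultOutputHandling = {
  text: "content",
  image: "content",
  audio: "content",
  resource: "artifact",
};
```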
[!TIP] Examples where `ToolMessage.artifact` can be useful include cases where you need to send multimodal tool outputs via `HumanMessage` or `SystemMessage` because the LLM provider API doesn't accept multimodal tool outputs, or cases where one tool produces a large output to be indirectly manipulated by some other tool (e.g. a query tool that loads dataframes into a Python code execution environment).
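As a sketch of the first case, suppose images were routed to the artifact via `outputHandling: { image: "artifact" }` with `useStandardContentBlocks: true`; the helper below (hypothetical, not part of this library) re-packages an image artifact as a `HumanMessage`:

```ts
import { HumanMessage, ToolMessage } from "@langchain/core/messages";

// Hypothetical helper: forwards an image artifact to the model in a
// HumanMessage, for providers that reject multimodal ToolMessage content.
function imageArtifactAsHumanMessage(msg: ToolMessage): HumanMessage | undefined {
  const blocks = Array.isArray(msg.artifact) ? msg.artifact : [];
  // With useStandardContentBlocks: true, image blocks look roughly like
  // { type: "image", source_type: "base64", mime_type, data }.
  const image = blocks.find((b) => b.type === "image");
  if (!image) return undefined;
  return new HumanMessage({
    content: [
      { type: "text", text: "Output of the previous tool call:" },
      image,
    ],
  });
}
```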
The `outputHandling` option can be assigned `"content"`, `"artifact"`, or an object that maps MCP content block types to either `"content"` or `"artifact"`.
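For example, to keep one server's outputs out of LLM context entirely, assign the string form (the server name and URL here are placeholders):

```ts
const client = new MultiServerMCPClient({
  useStandardContentBlocks: true,
  mcpServers: {
    "query-server": {
      url: "https://example.com/query/mcp",
      // Route every content type this server produces to ToolMessage.artifact:
      outputHandling: "artifact",
    },
  },
});
```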
When working with `MultiServerMCPClient`, the `outputHandling` field can be assigned on the top-level config object and/or on individual server entries in `mcpServers`. Entries in `mcpServers` override those in the top-level config, and entries in the top-level config override the defaults.
For example, consider the following configuration:
const clientConfig = {
useStandardContentBlocks: true,
outputHandling: {
image: "artifact",
audio: "artifact",
},
mcpServers: {
camera: {
url: "...",
outputHandling: {
image: "content",
},
},
microphone: {
url: "...",
outputHandling: {
audio: "content",
},
},
},
};
When calling tools from the camera
MCP server, the following outputHandling
config will be used:
{
text: "content", // default
image: "content", // default and top-level config overridden by "camera" server config
audio: "artifact", // default overridden by top-level config
resource: "artifact", // default
}
Similarly, when calling tools on the microphone
MCP server, the following outputHandling
config will be used:
{
text: "content", // default
image: "artifact", // default overridden by top-level config
audio: "content", // default and top-level config overridden by "microphone" server config
resource: "artifact", // default
}
MCP tools support timeout configuration through LangChain's standard `RunnableConfig` interface. This allows you to set custom timeouts on a per-tool-call basis:
const client = new MultiServerMCPClient({
mcpServers: {
'data-processor': {
command: 'python',
args: ['data_server.py']
},
},
useStandardContentBlocks: true,
});
const tools = await client.getTools();
const slowTool = tools.find(t => t.name.includes('process_large_dataset'));
// You can use withConfig to set tool-specific timeouts before handing
// the tool off to a LangGraph ToolNode or some other part of your
// application
const slowToolWithTimeout = slowTool.withConfig({ timeout: 300000 }); // 5 min timeout
// This invocation will respect the 5 minute timeout
const result = await slowToolWithTimeout.invoke(
{ dataset: 'huge_file.csv' },
);
// or you can invoke directly without withConfig
const directResult = await slowTool.invoke(
{ dataset: 'huge_file.csv' },
{ timeout: 300000 }
);
// Quick timeout for fast operations
const quickResult = await fastTool.invoke(
{ query: 'simple_lookup' },
{ timeout: 5000 } // 5 seconds
);
// Default timeout (60 seconds from MCP SDK) when no config provided
const normalResult = await tool.invoke({ input: 'normal_processing' });
Timeouts can be configured using the following `RunnableConfig` fields:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `timeout` | `number` | `60000` | Timeout in milliseconds for the tool call |
| `signal` | `AbortSignal` | `undefined` | An `AbortSignal` that, when aborted, cancels the tool call |
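The `signal` field is useful when cancellation is driven by something other than a fixed timeout, such as a user pressing cancel. A brief sketch, reusing `slowTool` from the example above:

```ts
const controller = new AbortController();
// Abort from elsewhere in your application (here: after 10 seconds).
setTimeout(() => controller.abort(), 10_000);

const cancellableResult = await slowTool.invoke(
  { dataset: "huge_file.csv" },
  { signal: controller.signal }
);
```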
For secure MCP servers that require OAuth 2.0 authentication, you can use the `authProvider` option instead of manually managing headers. This provides automatic token refresh, error handling, and standards-compliant OAuth flows.
New in v0.4.6.
import type { OAuthClientProvider } from "@modelcontextprotocol/sdk/client/auth.js";
// Token/metadata types live in the SDK's shared auth module.
import type { OAuthClientMetadata, OAuthTokens } from "@modelcontextprotocol/sdk/shared/auth.js";
class MyOAuthProvider implements OAuthClientProvider {
constructor(private config: {
redirectUrl: string;
clientMetadata: OAuthClientMetadata;
}) {}
get redirectUrl() { return this.config.redirectUrl; }
get clientMetadata() { return this.config.clientMetadata; }
// Implement token storage (localStorage, database, etc.)
tokens(): OAuthTokens | undefined {
const stored = localStorage.getItem("mcp_tokens");
return stored ? JSON.parse(stored) : undefined;
}
async saveTokens(tokens: OAuthTokens): Promise<void> {
localStorage.setItem("mcp_tokens", JSON.stringify(tokens));
}
// Implement other required methods...
// See MCP SDK documentation for complete examples
}
const client = new MultiServerMCPClient({
mcpServers: {
"secure-server": {
url: "https://secure-mcp-server.example.com/mcp",
authProvider: new MyOAuthProvider({
redirectUrl: "https://myapp.com/oauth/callback",
clientMetadata: {
redirect_uris: ["https://myapp.com/oauth/callback"],
client_name: "My MCP Client",
scope: "mcp:read mcp:write"
}
})
},
},
useStandardContentBlocks: true,
});
The `authProvider` automatically handles:
- ✅ Token Refresh: Automatically refreshes expired access tokens using refresh tokens
- ✅ 401 Error Recovery: Automatically retries requests after successful authentication
- ✅ PKCE Security: Uses Proof Key for Code Exchange for enhanced security
- ✅ Standards Compliance: Follows OAuth 2.0 and RFC 6750 specifications
- ✅ Transport Compatibility: Works with both StreamableHTTP and SSE transports
| Aspect | OAuth Provider | Manual Headers |
| --- | --- | --- |
| Token Refresh | ✅ Automatic | ❌ Manual implementation required |
| 401 Handling | ✅ Automatic retry | ❌ Manual error handling required |
| Security | ✅ PKCE, secure flows | |
| Standards | ✅ RFC 6750 compliant | |
| Complexity | ✅ Simple configuration | ❌ Complex implementation |
Recommendation: Use `authProvider` for production OAuth servers, and `headers` only for simple token-based auth or debugging.
Both transport types support automatic reconnection: stdio servers via the `restart` option, and SSE/Streamable HTTP servers via the `reconnect` option:
{
transport: "stdio",
command: "npx",
args: ["-y", "@modelcontextprotocol/server-math"],
restart: {
enabled: true, // Enable automatic restart
maxAttempts: 3, // Maximum restart attempts
delayMs: 1000 // Delay between attempts in ms
}
}
{
transport: "sse",
url: "https://example.com/mcp-server",
headers: { "Authorization": "Bearer token123" },
reconnect: {
enabled: true, // Enable automatic reconnection
maxAttempts: 5, // Maximum reconnection attempts
delayMs: 2000 // Delay between attempts in ms
}
}
The library provides different error types to help with debugging:

- `MCPClientError`: For client connection and initialization issues
- `ToolException`: For errors during tool execution
- `ZodError`: For configuration validation errors (invalid connection settings, etc.)
Example error handling:
try {
const client = new MultiServerMCPClient({
mcpServers: {
math: {
transport: "stdio",
command: "npx",
args: ["-y", "@modelcontextprotocol/server-math"],
},
},
useStandardContentBlocks: true,
});
const tools = await client.getTools();
const result = await tools[0].invoke({ expression: "1 + 2" });
} catch (error) {
if (error.name === "MCPClientError") {
// Handle connection issues
console.error(`Connection error (${error.serverName}):`, error.message);
} else if (error.name === "ToolException") {
// Handle tool execution errors
console.error("Tool execution failed:", error.message);
} else if (error.name === "ZodError") {
// Handle configuration validation errors
console.error("Configuration error:", error.issues);
// Zod errors contain detailed information about what went wrong
error.issues.forEach((issue) => {
console.error(`- Path: ${issue.path.join(".")}, Error: ${issue.message}`);
});
} else {
// Handle other errors
console.error("Unexpected error:", error);
}
}
The library uses Zod for validating configuration. Here are some common validation errors:
- Missing required parameters: For example, omitting `command` for stdio transport or `url` for SSE transport
- Invalid parameter types: For example, providing a number where a string is expected
- Invalid connection configuration: For example, using an invalid URL format for SSE transport
Example Zod error for an invalid SSE URL:
{
"issues": [
{
"code": "invalid_string",
"validation": "url",
"path": ["mcpServers", "weather", "url"],
"message": "Invalid url"
}
],
"name": "ZodError"
}
This package makes use of the `debug` package for debug logging. Logging is disabled by default, and can be enabled by setting the `DEBUG` environment variable as per the instructions in the `debug` package.
To output all debug logs from this package:
DEBUG='@langchain/mcp-adapters:*'
To output debug logs only from the `client` module:
DEBUG='@langchain/mcp-adapters:client'
To output debug logs only from the `tools` module:
DEBUG='@langchain/mcp-adapters:tools'
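If you prefer enabling logging programmatically rather than via the environment, the `debug` package also exposes `debug.enable` (a sketch; call it before the adapter modules emit any logs):

```ts
import debug from "debug";

// Programmatic equivalent of DEBUG='@langchain/mcp-adapters:*'
debug.enable("@langchain/mcp-adapters:*");
```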
MIT
Big thanks to @vrknetha and @knacklabs for the initial implementation!
Contributions are welcome! Please check out our contributing guidelines for more information.