A simple, lightweight library intended to simplify the use of Model Context Protocol (MCP) server tools with LangChain.
Its simplicity and extra features, such as tool schema adjustments for LLM compatibility and tool invocation logging, make it a useful basis for your experiments and customizations. However, it only supports text results of tool calls and does not support MCP features other than Tools.
LangChain's official LangChain.js MCP Adapters library, which supports comprehensive integration with LangChain, has been released. You may want to consider using it unless you have specific needs for this library.
This package is intended to simplify the use of Model Context Protocol (MCP) server tools with LangChain / TypeScript.
Model Context Protocol (MCP) is the de facto industry standard that dramatically expands the scope of LLMs by enabling the integration of external tools and resources, including databases, cloud storage, GitHub, Docker, Slack, and more. Quite a few useful MCP servers are already available; see the MCP Server Listing on the Official Site.
This utility's goal is to make these numerous MCP servers easily accessible from LangChain.
It contains a utility function convertMcpToLangchainTools(). This async function handles parallel initialization of multiple specified MCP servers and converts their available tools into an array of LangChain-compatible tools. It also performs LLM provider-specific schema transformations to prevent schema compatibility issues.
For detailed information on how to use this library, please refer to the following document:
"Supercharging LangChain: Integrating 2000+ MCP with ReAct".
A Python equivalent of this utility is available here.
npm i @h1deya/langchain-mcp-tools
Can be found here
A minimal but complete working usage example can be found in this example in the langchain-mcp-tools-ts-usage repo
The convertMcpToLangchainTools() utility function accepts MCP server configurations that follow the same structure as Claude for Desktop, but only the contents of the mcpServers property, expressed as a JS object, e.g.:
import { convertMcpToLangchainTools, McpServersConfig } from "@h1deya/langchain-mcp-tools";

const mcpServers: McpServersConfig = {
  filesystem: {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
  },
  fetch: {
    command: "uvx",
    args: ["mcp-server-fetch"]
  },
  github: {
    type: "http",
    url: "https://api.githubcopilot.com/mcp/",
    headers: {
      "Authorization": `Bearer ${process.env.GITHUB_PERSONAL_ACCESS_TOKEN}`
    }
  },
};
const { tools, cleanup } = await convertMcpToLangchainTools(
  mcpServers, {
    // Perform provider-specific JSON schema transformations to prevent schema compatibility issues
    llmProvider: "google_gemini"
    // llmProvider: "openai"
    // llmProvider: "anthropic"
  }
);
This utility function initializes all specified MCP servers in parallel, and returns LangChain tools (tools: StructuredTool[]) by gathering available MCP tools from the servers and wrapping them into LangChain tools. When the llmProvider option is specified, it performs LLM provider-specific schema transformations on MCP tool schemas to prevent compatibility issues. Set this option when you encounter schema-related warnings or errors during execution. See below for details.
It also returns an async callback function (cleanup: McpServerCleanupFn) to be invoked to close all MCP server sessions when finished.
The returned tools can be used with LangChain, e.g.:
// import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
const llm = new ChatGoogleGenerativeAI({ model: "gemini-2.5-flash" });
// import { createReactAgent } from "@langchain/langgraph/prebuilt";
const agent = createReactAgent({
  llm,
  tools
});
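The agent can then be invoked as usual. A minimal sketch (the query and file path are illustrative):

// import { HumanMessage } from "@langchain/core/messages";
const result = await agent.invoke({
  messages: [new HumanMessage("Read the file ./README.md and summarize it")]
});
// The last message holds the agent's final answer
console.log(result.messages[result.messages.length - 1].content);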
A minimal but complete working usage example can be found in this example in the langchain-mcp-tools-ts-usage repo
For hands-on experimentation with MCP server integration, try this MCP Client CLI tool built with this library.
See README_DEV.md for details.
This library supports MCP Protocol version 2025-03-26 and maintains backwards compatibility with version 2024-11-05. It follows the official MCP specification for transport selection and backwards compatibility.
The library uses response_format: 'content' (the default), which only supports text strings. While MCP tools can return multiple content types (text, images, etc.), this library currently filters and uses only text content.
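Conceptually, the filtering amounts to something like the following sketch (a hypothetical helper, not the library's actual implementation; the result shape follows the MCP SDK's CallToolResult):

// Hypothetical sketch: reduce an MCP tool result to its text items only
function extractTextContent(
  result: { content: Array<{ type: string; text?: string }> }
): string {
  return result.content
    .filter((item) => item.type === "text")
    .map((item) => item.text ?? "")
    .join("\n");
}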
The library automatically adds the PATH environment variable to stdio server configurations if not explicitly provided, to ensure servers can find required executables.
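Environment variables can also be given explicitly via the env key (which comes from the TypeScript SDK's StdioServerParameters); the PATH entry below is what the library would otherwise add for you:

filesystem: {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "."],
  env: { PATH: process.env.PATH ?? "" }  // added automatically when omitted
},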
stderr Redirection for Local MCP Server

A new key, "stderr", has been introduced to specify a file descriptor to which a local (stdio) MCP server's stderr is redirected. The key name stderr is derived from the TypeScript SDK's StdioServerParameters.
// import * as fs from "fs";
const logPath = `mcp-server-${serverName}.log`;
const logFd = fs.openSync(logPath, "w");  // open a log file for writing
mcpServers[serverName].stderr = logFd;    // redirect the server's stderr into it
A usage example can be found here
The working directory that is used when spawning a local (stdio) MCP server can be specified with the "cwd" key as follows:
"local-server-name": {
command: "...",
args: [...],
cwd: "/working/directory" // the working dir to be use by the server
},
The key name cwd is derived from the TypeScript SDK's StdioServerParameters.
The library selects transports using the following priority order:

1. An explicit transport (or VSCode-style type) field in the server configuration
2. URL protocol auto-detection: http/https URLs try Streamable HTTP first, falling back to SSE; ws/wss URLs use WebSocket
3. The presence of a command key, which selects the stdio transport

This ensures predictable behavior while allowing flexibility for different deployment scenarios.
The mcpServers configuration for Streamable HTTP, SSE, and WebSocket servers is as follows:
// Auto-detection: tries Streamable HTTP first, falls back to SSE on 4xx errors
"auto-detect-server": {
  url: `http://${server_host}:${server_port}/...`
},

// Explicit Streamable HTTP
"streamable-http-server": {
  url: `http://${server_host}:${server_port}/...`,
  transport: "streamable_http"
  // type: "http" // VSCode-style config also works instead of the above
},

// Explicit SSE
"sse-server-name": {
  url: `http://${sse_server_host}:${sse_server_port}/...`,
  transport: "sse" // or `type: "sse"`
},

// WebSocket
"ws-server-name": {
  url: `ws://${ws_server_host}:${ws_server_port}/...`
  // optionally `transport: "ws"` or `type: "ws"`
},
For the convenience of adding authorization headers, the following shorthand expression is supported. This header configuration will be overridden if either streamableHTTPOptions or sseOptions is specified (details below).
github: {
  // To avoid auto protocol fallback, specify the protocol explicitly when using authentication
  type: "http", // or `transport: "http",`
  url: "https://api.githubcopilot.com/mcp/",
  headers: {
    "Authorization": `Bearer ${process.env.GITHUB_PERSONAL_ACCESS_TOKEN}`
  }
},
NOTE: When accessing the GitHub MCP server, GitHub PAT (Personal Access Token) alone is not enough; your GitHub account must have an active Copilot subscription or be assigned a Copilot license through your organization.
Auto-detection behavior (default):

- For HTTP/HTTPS URLs without an explicit transport, the library follows MCP specification recommendations: it first attempts Streamable HTTP, then falls back to SSE on 4xx errors

Explicit transport selection:

- Use transport: "streamable_http" (or the VSCode-style config type: "http") to force Streamable HTTP (no fallback)
- Use transport: "sse" to force SSE transport
- WebSocket URLs (ws:// or wss://) always use WebSocket transport

Streamable HTTP is the modern MCP transport that replaces the older HTTP+SSE transport. According to the official MCP documentation: "SSE as a standalone transport is deprecated as of protocol version 2025-03-26. It has been replaced by Streamable HTTP, which incorporates SSE as an optional streaming mechanism."
The library supports OAuth 2.1 authentication for Streamable HTTP connections:
import { OAuthClientProvider } from '@modelcontextprotocol/sdk/client/auth.js';
// Implement your own OAuth client provider
class MyOAuthProvider implements OAuthClientProvider {
  // Implementation details...
}
const mcpServers = {
  "secure-streamable-server": {
    url: "https://secure-mcp-server.example.com/mcp",
    // To avoid auto protocol fallback, specify the protocol explicitly when using authentication
    transport: "streamable_http", // or `type: "http",`
    streamableHTTPOptions: {
      // Provide an OAuth client provider
      authProvider: new MyOAuthProvider(),

      // Optionally customize HTTP requests
      requestInit: {
        headers: {
          'X-Custom-Header': 'custom-value'
        }
      },

      // Optionally configure reconnection behavior
      reconnectionOptions: {
        maxReconnectAttempts: 5,
        reconnectDelay: 1000
      }
    }
  }
};
Test implementations are provided and can be found here.

For debugging:

- Use logLevel: "debug" to see detailed connection and execution logs
- Use stderr redirection to capture server error output

Different LLM providers have incompatible JSON Schema requirements for function calling:
- OpenAI: requires optional fields to also be nullable (.optional() + .nullable()) for function calling (based on Structured Outputs API requirements, strict enforcement coming in future SDK versions)
- Google Gemini: rejects $defs references and requires strict OpenAPI 3.0 subset compliance
- Anthropic: accepts schemas as-is (no transformations needed)

Note: Google Vertex AI provides OpenAI-compatible endpoints that support nullable fields.
This creates challenges for developers trying to create universal schemas across providers.
Many MCP servers generate schemas that don't satisfy all providers' requirements.
For example, the official Notion MCP server @notionhq/notion-mcp-server (as of Jul 2, 2025) produces:
OpenAI Warnings:
Zod field at `#/definitions/API-get-users/properties/start_cursor` uses `.optional()` without `.nullable()` which is not supported by the API. See: https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#all-fields-must-be-required
... followed by many more
Gemini Errors:
GoogleGenerativeAIFetchError: [GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent: [400 Bad Request] * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[children].items.properties[paragraph].properties[rich_text].items.properties[mention].any_of[0].required: only allowed for OBJECT type
... followed by many more
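To illustrate the kind of adjustment involved, here is a hypothetical before/after for the start_cursor field flagged above, following OpenAI's Structured Outputs rules (every property must be listed as required, with optionality expressed as a null union):

// Before: optional and not nullable -- triggers the OpenAI warning above
const before = {
  type: "object",
  properties: { start_cursor: { type: "string" } },
  required: []  // start_cursor omitted, i.e. optional
};

// After: listed as required, but nullable -- accepted by OpenAI
const after = {
  type: "object",
  properties: { start_cursor: { type: ["string", "null"] } },
  required: ["start_cursor"]
};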
The new option, llmProvider, has been introduced for performing provider-specific JSON schema transformations:
const { tools, cleanup } = await convertMcpToLangchainTools(
  mcpServers, {
    llmProvider: "openai" // Makes optional fields nullable
    // llmProvider: "google_gemini" // Applies Gemini's strict validation rules
    // llmProvider: "anthropic" // No transformations needed
  }
);
Features:

- No transformations are applied unless llmProvider is specified

| Provider | Transformations Applied |
|---|---|
| openai | Makes optional fields nullable, handles union types |
| google_gemini | Filters invalid required fields, fixes anyOf variants, removes unsupported features |
| anthropic | Accepts schemas as-is, but handles them efficiently |
For other providers, try without specifying the option:
const { tools, cleanup } = await convertMcpToLangchainTools(
  mcpServers
);
The returned cleanup function properly handles resource cleanup:
const { tools, cleanup } = await convertMcpToLangchainTools(mcpServers);

try {
  // Use tools with your LLM
} finally {
  // Always cleanup, even if errors occur
  await cleanup();
}
The library provides configurable logging to help debug connection and tool execution issues:
// Configure log level
const { tools, cleanup } = await convertMcpToLangchainTools(
  mcpServers,
  { logLevel: "debug" }
);

// Use custom logger
// import { McpToolsLogger } from "@h1deya/langchain-mcp-tools";
class MyLogger implements McpToolsLogger {
  debug(...args: unknown[]) { console.log("[DEBUG]", ...args); }
  info(...args: unknown[]) { console.log("[INFO]", ...args); }
  warn(...args: unknown[]) { console.warn("[WARN]", ...args); }
  error(...args: unknown[]) { console.error("[ERROR]", ...args); }
}

const { tools, cleanup } = await convertMcpToLangchainTools(
  mcpServers,
  { logger: new MyLogger() }
);
Available log levels: "fatal" | "error" | "warn" | "info" | "debug" | "trace"
See README_DEV.md for more information about development and testing.