# MCP LLMS-TXT Documentation Server

Expose llms.txt to IDEs for development.

## Overview
`llms.txt` is a website index for LLMs, providing background information, guidance, and links to detailed markdown files. IDEs like Cursor and Windsurf or apps like Claude Code/Desktop can use `llms.txt` to retrieve context for tasks. However, these apps use different built-in tools to read and process files like `llms.txt`. The retrieval process can be opaque, and there is not always a way to audit the tool calls or the context returned.

MCP offers a way for developers to have full control over the tools used by these applications. Here, we create an open-source MCP server to provide MCP host applications (e.g., Cursor, Windsurf, Claude Code/Desktop) with (1) a user-defined list of `llms.txt` files and (2) a simple `fetch_docs` tool to read URLs within any of the provided `llms.txt` files. This allows the user to audit each tool call as well as the context returned.
## llms-txt

You can find `llms.txt` files for LangGraph and LangChain here:

| Library | llms.txt |
| --- | --- |
| LangGraph Python | https://langchain-ai.github.io/langgraph/llms.txt |
| LangGraph JS | https://langchain-ai.github.io/langgraphjs/llms.txt |
| LangChain Python | https://python.langchain.com/llms.txt |
| LangChain JS | https://js.langchain.com/llms.txt |
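For reference, an `llms.txt` file is itself a small markdown index: a title, a short summary, and a list of links with descriptions. An illustrative (not verbatim) excerpt:

```markdown
# LangGraph

> LangGraph is a library for building stateful, multi-actor applications with LLMs.

## Docs

- [Quickstart](https://langchain-ai.github.io/langgraph/tutorials/introduction/): Get started with LangGraph
- [Concepts](https://langchain-ai.github.io/langgraph/concepts/): High-level concepts and architecture
```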
## Quickstart

**Install uv**

- Please see the official uv docs for other ways to install `uv`.

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

**Choose an `llms.txt` file to use.**

- For example, here's the LangGraph `llms.txt` file: https://langchain-ai.github.io/langgraph/llms.txt
**Note: Security and Domain Access Control**

For security reasons, mcpdoc implements strict domain access controls:

- **Remote llms.txt files**: When you specify a remote llms.txt URL (e.g., https://langchain-ai.github.io/langgraph/llms.txt), mcpdoc automatically adds only that specific domain (langchain-ai.github.io) to the allowed domains list. This means the tool can only fetch documentation from URLs on that domain.
- **Local llms.txt files**: When using a local file, NO domains are automatically added to the allowed list. You MUST explicitly specify which domains to allow using the `--allowed-domains` parameter.
- **Adding additional domains**: To allow fetching from domains beyond those automatically included:
  - Use `--allowed-domains domain1.com domain2.com` to add specific domains.
  - Use `--allowed-domains '*'` to allow all domains (use with caution).

This security measure prevents unauthorized access to domains not explicitly approved by the user, ensuring that documentation can only be retrieved from trusted sources.
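For example, a local-file invocation might look like the following. This is a sketch, not a verified command: it assumes a local path can be passed via `--urls` in the same `name:path` format used for remote URLs, and `/path/to/llms.txt` is a placeholder.

```bash
# Sketch: serve a local llms.txt file (placeholder path) and explicitly
# allow fetching docs from one trusted domain, as required for local files
uvx --from mcpdoc mcpdoc \
    --urls "LangGraph:/path/to/llms.txt" \
    --allowed-domains langchain-ai.github.io \
    --transport stdio
```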
**(Optional) Test the MCP server locally with your `llms.txt` file(s) of choice:**

```bash
uvx --from mcpdoc mcpdoc \
    --urls "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" "LangChain:https://python.langchain.com/llms.txt" \
    --transport sse \
    --port 8082 \
    --host localhost
```

- This should run at: http://localhost:8082
- Run the MCP inspector and connect to the running server:

```bash
npx @modelcontextprotocol/inspector
```

- Here, you can test the tool calls.
## Connect to Cursor

- Open `Cursor Settings` and the `MCP` tab.
- This will open the `~/.cursor/mcp.json` file.
- Paste the following into the file (we use the `langgraph-docs-mcp` name and link to the LangGraph `llms.txt`).
```json
{
  "mcpServers": {
    "langgraph-docs-mcp": {
      "command": "uvx",
      "args": [
        "--from",
        "mcpdoc",
        "mcpdoc",
        "--urls",
        "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt LangChain:https://python.langchain.com/llms.txt",
        "--transport",
        "stdio"
      ]
    }
  }
}
```
- Confirm that the server is running in your `Cursor Settings/MCP` tab.
- Best practice is to then update Cursor Global (User) rules.
- Open Cursor `Settings/Rules` and update `User Rules` with the following (or similar):
```
for ANY question about LangGraph, use the langgraph-docs-mcp server to help answer --
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt
+ reflect on the input question
+ call fetch_docs on any urls relevant to the question
+ use this to answer the question
```
- `CMD+L` (on Mac) to open chat.
- Ensure `agent` is selected.

Then, try an example prompt, such as:

```
what are types of memory in LangGraph?
```
## Connect to Windsurf

- Open Cascade with `CMD+L` (on Mac).
- Click `Configure MCP` to open the config file, `~/.codeium/windsurf/mcp_config.json`.
- Update with the `langgraph-docs-mcp` server config as noted above.
- Update `Windsurf Rules/Global rules` with the following (or similar):
```
for ANY question about LangGraph, use the langgraph-docs-mcp server to help answer --
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt
+ reflect on the input question
+ call fetch_docs on any urls relevant to the question
```
Then, try the example prompt from above. It will perform your tool calls.
Connect to Claude Desktop
- Open
Settings/Developer
to update~/Library/Application\ Support/Claude/claude_desktop_config.json
. - Update with
langgraph-docs-mcp
as noted above. - Restart Claude Desktop app.
> [!NOTE]
> If you run into issues with Python version incompatibility when trying to add MCPDoc tools to Claude Desktop, you can explicitly specify the filepath to the `python` executable in the `uvx` command:
>
> ```json
> {
>   "mcpServers": {
>     "langgraph-docs-mcp": {
>       "command": "uvx",
>       "args": [
>         "--python",
>         "/path/to/python",
>         "--from",
>         "mcpdoc",
>         "mcpdoc",
>         "--urls",
>         "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt",
>         "--transport",
>         "stdio"
>       ]
>     }
>   }
> }
> ```
> [!NOTE]
> Currently (3/21/25) it appears that Claude Desktop does not support `rules` for global rules, so append the following to your prompt:
```
<rules>
for ANY question about LangGraph, use the langgraph-docs-mcp server to help answer --
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt
+ reflect on the input question
+ call fetch_docs on any urls relevant to the question
</rules>
```
- You will see your tools visible in the bottom right of your chat input.
Then, try the example prompt from above. It will ask to approve tool calls as it processes your request.
## Connect to Claude Code

- In a terminal after installing Claude Code, run this command to add the MCP server to your project:

```bash
claude mcp add-json langgraph-docs '{"type":"stdio","command":"uvx","args":["--from", "mcpdoc", "mcpdoc", "--urls", "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt", "--urls", "LangChain:https://python.langchain.com/llms.txt"]}' -s local
```

- You will see `~/.claude.json` updated.
- Test by launching Claude Code and running `/mcp` to view your tools:

```bash
$ claude
$ /mcp
```
> [!NOTE]
> Currently (3/21/25) it appears that Claude Code does not support `rules` for global rules, so append the following to your prompt:
```
<rules>
for ANY question about LangGraph, use the langgraph-docs-mcp server to help answer --
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt
+ reflect on the input question
+ call fetch_docs on any urls relevant to the question
</rules>
```
Then, try the example prompt from above. It will ask to approve tool calls.
## Command-line Interface

The `mcpdoc` command provides a simple CLI for launching the documentation server.

You can specify documentation sources in three ways, and these can be combined:

- **Using a YAML config file.** This will load the LangGraph Python documentation from the `sample_config.yaml` file in this repo:

  ```bash
  mcpdoc --yaml sample_config.yaml
  ```
- **Using a JSON config file.** This will load the LangGraph Python documentation from the `sample_config.json` file in this repo:

  ```bash
  mcpdoc --json sample_config.json
  ```
- **Directly specifying llms.txt URLs with optional names.** URLs can be specified either as plain URLs or with optional names using the format `name:url`. You can specify multiple URLs by using the `--urls` parameter multiple times. This is how we loaded `llms.txt` for the MCP server above:

  ```bash
  mcpdoc --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt --urls LangChain:https://python.langchain.com/llms.txt
  ```
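  A plain URL without a name also works:

  ```bash
  mcpdoc --urls https://langchain-ai.github.io/langgraph/llms.txt
  ```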
You can also combine these methods to merge documentation sources:

```bash
mcpdoc --yaml sample_config.yaml --json sample_config.json --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt --urls LangChain:https://python.langchain.com/llms.txt
```
Additional Options
--follow-redirects
: Follow HTTP redirects (defaults to False)--timeout SECONDS
: HTTP request timeout in seconds (defaults to 10.0)
Example with additional options:
mcpdoc --yaml sample_config.yaml --follow-redirects --timeout 15
This will load the LangGraph Python documentation with a 15-second timeout and follow any HTTP redirects if necessary.
## Configuration Format

Both YAML and JSON configuration files should contain a list of documentation sources. Each source must include an `llms_txt` URL and can optionally include a `name`:

### YAML Configuration Example (sample_config.yaml)

```yaml
# Sample configuration for mcp-mcpdoc server
# Each entry must have a llms_txt URL and optionally a name
- name: LangGraph Python
  llms_txt: https://langchain-ai.github.io/langgraph/llms.txt
```
### JSON Configuration Example (sample_config.json)

```json
[
  {
    "name": "LangGraph Python",
    "llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt"
  }
]
```
## Programmatic Usage

```python
from mcpdoc.main import create_server

# Create a server with documentation sources
server = create_server(
    [
        {
            "name": "LangGraph Python",
            "llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt",
        },
        # You can add multiple documentation sources
        # {
        #     "name": "Another Documentation",
        #     "llms_txt": "https://example.com/llms.txt",
        # },
    ],
    follow_redirects=True,
    timeout=15.0,
)

# Run the server
server.run(transport="stdio")
```
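Because `create_server` takes the same list-of-dicts structure used in the config files, you can also load sources from a config file yourself. A minimal sketch, assuming `sample_config.yaml` follows the YAML format shown above and PyYAML is installed:

```python
import yaml

from mcpdoc.main import create_server

# Load documentation sources from a YAML config file
# (a list of {"name": ..., "llms_txt": ...} entries, as in sample_config.yaml)
with open("sample_config.yaml") as f:
    doc_sources = yaml.safe_load(f)

server = create_server(doc_sources, follow_redirects=True, timeout=15.0)
server.run(transport="stdio")
```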