Fetch
by modelcontextprotocol
Web content fetching and conversion for efficient LLM usage
What is Fetch
Fetch MCP Server
A Model Context Protocol server that provides web content fetching capabilities. This server enables LLMs to retrieve and process content from web pages, converting HTML to markdown for easier consumption.
The fetch tool will truncate the response, but by using the start_index argument, you can specify where to start the content extraction. This lets models read a webpage in chunks until they find the information they need.
Available Tools
fetch - Fetches a URL from the internet and extracts its contents as markdown.
  - url (string, required): URL to fetch
  - max_length (integer, optional): Maximum number of characters to return (default: 5000)
  - start_index (integer, optional): Start content from this character index (default: 0)
  - raw (boolean, optional): Get raw content without markdown conversion (default: false)
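For illustration, after an initial call is truncated at the default 5000 characters, a follow-up call can resume where the first left off. The exact request framing depends on your MCP client; only the argument names and defaults listed above are defined by the server, and the URL here is a placeholder:
{
  "name": "fetch",
  "arguments": {
    "url": "https://example.com/long-article",
    "max_length": 5000,
    "start_index": 5000
  }
}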
Prompts
- fetch
  - Fetch a URL and extract its contents as markdown
  - Arguments:
    - url (string, required): URL to fetch
Installation
Optionally: install Node.js; this will cause the fetch server to use a different HTML simplifier that is more robust.
Using uv (recommended)
When using uv, no specific installation is needed. We will use uvx to directly run mcp-server-fetch.
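For example, the server can be started by hand with:
uvx mcp-server-fetch
It communicates over stdio, so when launched directly it will simply wait for a client; normally Claude (or another MCP host) runs this command for you via the configuration below.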
Using PIP
Alternatively, you can install mcp-server-fetch via pip:
pip install mcp-server-fetch
After installation, you can run it as a script using:
python -m mcp_server_fetch
Configuration
Configure for Claude.app
Add to your Claude settings:
"mcpServers": {
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
"mcpServers": {
"fetch": {
"command": "docker",
"args": ["run", "-i", "--rm", "mcp/fetch"]
}
}
"mcpServers": {
"fetch": {
"command": "python",
"args": ["-m", "mcp_server_fetch"]
}
}
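Each snippet above belongs inside the top-level JSON object of your Claude settings file (claude_desktop_config.json for Claude Desktop). As a minimal sketch, a complete file using the uvx variant would look like:
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}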
Customization - robots.txt
By default, the server will obey a website's robots.txt file if the request came from the model (via a tool), but not if the request was user-initiated (via a prompt). This can be disabled by adding the argument --ignore-robots-txt to the args list in the configuration.
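For example, with the uvx configuration the flag is simply appended to args:
"mcpServers": {
  "fetch": {
    "command": "uvx",
    "args": ["mcp-server-fetch", "--ignore-robots-txt"]
  }
}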
Customization - User-agent
By default, depending on whether the request came from the model (via a tool) or was user-initiated (via a prompt), the server will use either the user-agent
ModelContextProtocol/1.0 (Autonomous; +https://github.com/modelcontextprotocol/servers)
or
ModelContextProtocol/1.0 (User-Specified; +https://github.com/modelcontextprotocol/servers)
This can be customized by adding the argument --user-agent=YourUserAgent to the args list in the configuration.
Customization - Proxy
The server can be configured to use a proxy by using the --proxy-url argument.
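Both the user-agent and proxy options are passed the same way as --ignore-robots-txt above; in this sketch the user-agent string and proxy URL are placeholders to replace with your own values:
"mcpServers": {
  "fetch": {
    "command": "uvx",
    "args": ["mcp-server-fetch", "--user-agent=YourUserAgent", "--proxy-url=http://localhost:8080"]
  }
}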
Debugging
You can use the MCP inspector to debug the server. For uvx installations:
npx @modelcontextprotocol/inspector uvx mcp-server-fetch
Or if you've installed the package in a specific directory or are developing on it:
cd path/to/servers/src/fetch
npx @modelcontextprotocol/inspector uv run mcp-server-fetch
Contributing
We encourage contributions to help expand and improve mcp-server-fetch. Whether you want to add new tools, enhance existing functionality, or improve documentation, your input is valuable.
For examples of other MCP servers and implementation patterns, see: https://github.com/modelcontextprotocol/servers
Pull requests are welcome! Feel free to contribute new ideas, bug fixes, or enhancements to make mcp-server-fetch even more powerful and useful.
License
mcp-server-fetch is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
Frequently Asked Questions
What is MCP?
MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications, providing a standardized way to connect AI models to different data sources and tools.
What are MCP Servers?
MCP Servers are lightweight programs that expose specific capabilities through the standardized Model Context Protocol. They act as bridges between LLMs like Claude and various data sources or services, allowing secure access to files, databases, APIs, and other resources.
How do MCP Servers work?
MCP Servers follow a client-server architecture where a host application (like Claude Desktop) connects to multiple servers. Each server provides specific functionality through standardized endpoints and protocols, enabling Claude to access data and perform actions through the standardized protocol.
Are MCP Servers secure?
Yes, MCP Servers are designed with security in mind. They run locally with explicit configuration and permissions, require user approval for actions, and include built-in security features to prevent unauthorized access and ensure data privacy.
Related MCP Servers
Playwright
A Model Context Protocol (MCP) server that provides browser automation capabilities using Playwright. This server enables LLMs to interact with web pages through structured accessibility snapshots, bypassing the need for screenshots or visually-tuned models.
puppeteer-mcp-server
Puppeteer
Browser automation and web scraping
Playwright Universal MCP
A universal Playwright MCP server for browser automation in containerized environments
MCP Server Playwright
MCP Server Playwright - A browser automation service for Claude Desktop
OneNote MCP Server
MCP server for browsing and interacting with OneNote web app using browser-use automation
Playwright
Fetcher MCP - Playwright Headless Browser Server
MCP server for fetching web page content using a Playwright headless browser.
MCP Browser
NeoForge Browser MCP server - used to test the frontend
Scrappey MCP Server
Allow LLMs to control a browser with Scrappey