MCP Server
Project Overview
Built on FastAPI and MCP (Model Context Protocol), this project enables standardized context interaction between AI models and development environments. By simplifying model deployment, providing efficient API endpoints, and keeping model input and output consistent, it improves the scalability and maintainability of AI applications and makes it easier for developers to integrate and manage AI tasks.
MCP (Model Context Protocol) is a unified protocol for context interaction between AI models and development environments. This project provides a Python-based MCP server implementation that supports basic MCP protocol features, including initialization, sampling, and session management.
Features
- JSON-RPC 2.0: Request-response communication based on standard JSON-RPC 2.0 protocol
- SSE Connection: Support for Server-Sent Events connections for real-time notifications
- Modular Design: Modular architecture for easy extension and customization
- Asynchronous Processing: High-performance service using FastAPI and asynchronous IO
- Complete Client: Includes a full test client implementation
Project Structure
mcp_server/
├── mcp_server.py # MCP server main program
├── mcp_client.py # MCP client test program
├── routers/
│ ├── __init__.py # Router package initialization
│ └── base_router.py # Base router implementation
├── requirements.txt # Project dependencies
└── README.md # Project documentation
Installation
- Clone the repository:
git clone https://github.com/freedanfan/mcp_server.git
cd mcp_server
- Install dependencies:
pip install -r requirements.txt
Usage
Starting the Server
python mcp_server.py
By default, the server starts on 127.0.0.1:12000. You can customize the host and port using environment variables:
export MCP_SERVER_HOST=0.0.0.0
export MCP_SERVER_PORT=8000
python mcp_server.py
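Inside the server, this configuration can be resolved with a small helper. The variable names MCP_SERVER_HOST and MCP_SERVER_PORT come from the commands above; the function itself is an illustrative sketch, not the project's actual code:

```python
import os

def server_address() -> tuple:
    """Resolve the bind address from the environment, falling back to
    the defaults documented above (127.0.0.1:12000)."""
    host = os.environ.get("MCP_SERVER_HOST", "127.0.0.1")
    port = int(os.environ.get("MCP_SERVER_PORT", "12000"))
    return host, port
```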
Running the Client
Run the client in another terminal:
python mcp_client.py
If the server is not running at the default address, you can set an environment variable:
export MCP_SERVER_URL="http://your-server-address:port"
python mcp_client.py
API Endpoints
The server provides the following API endpoints:
- Root Path (/): Provides server information
- API Endpoint (/api): Handles JSON-RPC requests
- SSE Endpoint (/sse): Handles SSE connections
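For a quick smoke test, a JSON-RPC request can be posted to the /api endpoint. The sketch below uses only the standard library; the method name sample and the default address are taken from this README, while the helper itself is illustrative:

```python
import json
import uuid
import urllib.request

API_URL = "http://127.0.0.1:12000/api"  # default server address

def make_jsonrpc_request(method: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 request envelope with a unique id."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": method,
        "params": params,
    }

if __name__ == "__main__":
    payload = json.dumps(make_jsonrpc_request("sample", {"prompt": "Hello"}))
    req = urllib.request.Request(
        API_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # requires a running server
        print(json.load(resp))
```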
MCP Protocol Implementation
Initialization Flow
- Client connects to the server via SSE
- Server returns the API endpoint URI
- Client sends an initialization request with protocol version and capabilities
- Server responds to the initialization request, returning server capabilities
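The handshake above maps to an initialize request roughly like the following. This is a sketch: the exact protocol_version string and capability fields depend on the server's implementation and are assumptions here:

```python
def make_initialize_request(request_id: str) -> dict:
    """Build the first request a client sends after receiving the API
    endpoint URI over SSE. Field contents are illustrative."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocol_version": "2024-11-05",  # assumed version string
            "capabilities": {"sampling": {}},  # assumed capability shape
        },
    }
```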
Sampling Request
Clients can send sampling requests with prompts:
{
"jsonrpc": "2.0",
"id": "request-id",
"method": "sample",
"params": {
"prompt": "Hello, please introduce yourself."
}
}
The server will return sampling results:
{
"jsonrpc": "2.0",
"id": "request-id",
"result": {
"content": "This is a response to the prompt...",
"usage": {
"prompt_tokens": 10,
"completion_tokens": 50,
"total_tokens": 60
}
}
}
Closing a Session
Clients can send a shutdown request:
{
"jsonrpc": "2.0",
"id": "request-id",
"method": "shutdown",
"params": {}
}
The server will gracefully shut down:
{
"jsonrpc": "2.0",
"id": "request-id",
"result": {
"status": "shutting_down"
}
}
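A client can unwrap response envelopes like the ones above with a small validator. This is an illustrative helper, not part of the project's client:

```python
def parse_jsonrpc_response(response: dict, expected_id: str):
    """Return the result of a JSON-RPC 2.0 response, raising on
    errors or mismatched ids."""
    if response.get("jsonrpc") != "2.0":
        raise ValueError("not a JSON-RPC 2.0 response")
    if response.get("id") != expected_id:
        raise ValueError("response id does not match request id")
    if "error" in response:
        error = response["error"]
        raise RuntimeError(
            f"server error {error.get('code')}: {error.get('message')}"
        )
    return response["result"]
```

Applied to the shutdown response above, it returns {"status": "shutting_down"}.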
Development Extensions
Adding New Methods
To add new MCP methods, add a handler function to the MCPServer class and register it in the _register_methods method:
def handle_new_method(self, params: dict) -> dict:
    """Handle new method"""
    logger.info(f"Received new method request: {params}")
    # Processing logic
    return {"result": "success"}

def _register_methods(self):
    # Register existing methods
    self.router.register_method("initialize", self.handle_initialize)
    self.router.register_method("sample", self.handle_sample)
    self.router.register_method("shutdown", self.handle_shutdown)
    # Register new method
    self.router.register_method("new_method", self.handle_new_method)
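The register/dispatch pattern used above can be sketched as a standalone dispatcher. This is hypothetical; the project's actual router in base_router.py may differ:

```python
class MethodRouter:
    """Minimal JSON-RPC 2.0 method dispatcher."""

    def __init__(self):
        self._methods = {}

    def register_method(self, name, handler):
        self._methods[name] = handler

    def dispatch(self, request: dict) -> dict:
        handler = self._methods.get(request.get("method"))
        if handler is None:
            # Standard JSON-RPC "method not found" error
            return {
                "jsonrpc": "2.0",
                "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"},
            }
        result = handler(request.get("params", {}))
        return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}
```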
Integrating AI Models
To integrate an actual AI model, modify the handle_sample method:
async def handle_sample(self, params: dict) -> dict:
    """Handle sampling request"""
    logger.info(f"Received sampling request: {params}")
    # Get prompt
    prompt = params.get("prompt", "")
    # Call an AI model API, for example the OpenAI API
    # (pre-1.0 SDK interface; requires `import openai`)
    response = await openai.ChatCompletion.acreate(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    content = response.choices[0].message.content
    usage = response.usage
    return {
        "content": content,
        "usage": {
            "prompt_tokens": usage.prompt_tokens,
            "completion_tokens": usage.completion_tokens,
            "total_tokens": usage.total_tokens
        }
    }
Troubleshooting
Common Issues
- Connection Errors: Ensure the server is running and the client is using the correct server URL
- 405 Method Not Allowed: Ensure the client is sending requests to the correct API endpoint
- SSE Connection Failure: Check network connections and firewall settings
Logging
Both server and client provide detailed logging. To see more detail, raise the log level in the logging configuration, for example in mcp_server.py:
logging.basicConfig(level=logging.DEBUG)
References
- MCP Protocol Specification
- FastAPI Documentation
- JSON-RPC 2.0 Specification
- SSE Specification
License
This project is licensed under the MIT License. See the LICENSE file for details.
Frequently Asked Questions
What is MCP?
MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications, providing a standardized way to connect AI models to different data sources and tools.
What are MCP Servers?
MCP Servers are lightweight programs that expose specific capabilities through the standardized Model Context Protocol. They act as bridges between LLMs like Claude and various data sources or services, allowing secure access to files, databases, APIs, and other resources.
How do MCP Servers work?
MCP Servers follow a client-server architecture where a host application (like Claude Desktop) connects to multiple servers. Each server provides specific functionality through standardized endpoints and protocols, enabling Claude to access data and perform actions through the standardized protocol.
Are MCP Servers secure?
MCP Servers are designed with security in mind. They run locally with explicit configuration and permissions, require user approval for actions, and include security features intended to prevent unauthorized access and protect data privacy.
Related MCP Servers
Brave Search MCP
Integrate Brave Search capabilities into Claude through MCP. Enables real-time web searches with privacy-focused results and comprehensive web coverage.
chrisdoc hevy mcp
sylphlab pdf reader mcp
An MCP server built with Node.js/TypeScript that allows AI agents to securely read PDF files (local or URL) and extract text, metadata, or page counts. Uses pdf-parse.
aashari mcp server atlassian bitbucket
Node.js/TypeScript MCP server for Atlassian Bitbucket. Enables AI systems (LLMs) to interact with workspaces, repositories, and pull requests via tools (list, get, comment, search). Connects AI directly to version control workflows through the standard MCP interface.
aashari mcp server atlassian confluence
Node.js/TypeScript MCP server for Atlassian Confluence. Provides tools enabling AI systems (LLMs) to list/get spaces & pages (content formatted as Markdown) and search via CQL. Connects AI seamlessly to Confluence knowledge bases using the standard MCP interface.
prisma prisma
Next-generation ORM for Node.js & TypeScript | PostgreSQL, MySQL, MariaDB, SQL Server, SQLite, MongoDB and CockroachDB
Zzzccs123 mcp sentry
mcp sentry for typescript sdk
zhuzhoulin dify mcp server
zhongmingyuan mcp my mac
zhixiaoqiang desktop image manager mcp
MCP server for managing desktop images: viewing details, compressing, moving, and more (implemented entirely with Trae).