MCP Server Implementation

A complete Flask-based implementation of the Model Context Protocol (MCP) for enhancing Large Language Model capabilities with external tools.

Overview

This repository demonstrates how to build a server that handles Model Context Protocol (MCP), a method for extending LLM capabilities through tool invocation directly in the model's text output. Unlike function calling, MCP places tool definitions directly in the context window, and the server parses the model's natural-language responses to identify tool usage.

Features

  • 🔧 Complete MCP Implementation: Full parsing, execution, and response handling
  • 🌤️ Sample Tools: Weather and calculator tools with parameter validation
  • 🔄 Conversation Flow: Maintains context across multiple interactions
  • 🧩 Regex-Based Parsing: Flexible text parsing for tool invocations
  • 🚀 Flask API: REST API endpoints for chat integration

Project Structure

mcp_server/
├── app.py                  # Main Flask application
├── mcp_handler.py          # MCP parsing and execution
├── mcp_example.py          # Standalone MCP example
├── requirements.txt        # Dependencies
├── tools/                  # Tool implementations
│   ├── __init__.py
│   ├── weather.py          # Weather API tool
│   └── calculator.py       # Calculator tool
└── README.md               # This file

Installation

  1. Clone the repository:

    git clone https://github.com/yourusername/mcp-server.git
    cd mcp-server
    
  2. Create a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    
  3. Install dependencies:

    pip install -r requirements.txt
    
  4. Set up environment variables:

    # Create a .env file with:
    LLM_API_KEY=your_llm_api_key_here
    WEATHER_API_KEY=your_weather_api_key_here
    FLASK_APP=app.py
    FLASK_ENV=development
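
If the application loads these variables with python-dotenv (an assumption; check requirements.txt), the loading step looks like this:

import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads the .env file into the process environment
llm_api_key = os.environ["LLM_API_KEY"]
weather_api_key = os.environ["WEATHER_API_KEY"]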
    

Usage

Running the Server

Start the Flask development server:

flask run

For production, use a WSGI server such as Gunicorn:

gunicorn app:app

API Endpoints

  • POST /chat: Process chat messages with MCP
    curl -X POST http://localhost:5000/chat \
      -H "Content-Type: application/json" \
      -d '{
        "messages": [
          {
            "role": "user",
            "content": "What's the weather like in Boston?"
          }
        ]
      }'
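
An illustrative response shape (an assumption; the actual payload is defined in app.py):

{
  "role": "assistant",
  "content": "I'll check the weather for you. ... Result from get_weather: { ... }"
}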
    

Standalone Example

Run the example script to see MCP in action:

python mcp_example.py

How It Works

  1. Tool Registration: Tools are registered with their parameters and execution logic
  2. Tool Definition Injection: XML-formatted tool descriptions are added to the prompt
  3. LLM Response Processing: Regex patterns identify tool calls in the LLM's text output (see the parsing sketch after this list)
  4. Tool Execution: Parameters are parsed and passed to appropriate tool handlers
  5. Result Injection: Tool execution results are inserted back into the response
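
As a rough illustration of step 3, a minimal regex-based parser might look like the sketch below. This is not the exact code in mcp_handler.py; the patterns and the parse_tool_calls helper are illustrative assumptions.

import re

# Illustrative patterns for tool-call parsing; the real ones live in mcp_handler.py
TOOL_CALL_PATTERN = re.compile(r'(\w+)\(([^)]*)\)')
PARAM_PATTERN = re.compile(r'(\w+)\s*=\s*"([^"]*)"')

def parse_tool_calls(text):
    """Return a list of {'name': ..., 'params': {...}} found in LLM output text."""
    calls = []
    for match in TOOL_CALL_PATTERN.finditer(text):
        name, arg_str = match.groups()
        params = dict(PARAM_PATTERN.findall(arg_str))
        if params:  # skip name(...) matches that carry no key="value" arguments
            calls.append({"name": name, "params": params})
    return calls

print(parse_tool_calls('get_weather(location="Boston, MA", unit="fahrenheit")'))
# [{'name': 'get_weather', 'params': {'location': 'Boston, MA', 'unit': 'fahrenheit'}}]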

MCP vs. Function Calling

Feature              MCP                  Function Calling
-------------------  -------------------  --------------------
Definition Location  In prompt text       In API parameters
Invocation Format    Natural language     Structured JSON
Implementation       Text parsing         API integration
Visibility           Visible in response  May be hidden
Platform Support     Any text-based LLM   Requires API support
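
To make the contrast concrete, here is an illustrative comparison of the two invocation formats (the function-calling payload shape varies by provider; this is an assumption):

# MCP-style invocation: plain text inside the model's output, recovered by parsing
mcp_output = 'get_weather(location="Boston, MA", unit="fahrenheit")'

# Function-calling style: structured JSON returned in a dedicated API field
function_call = {
    "name": "get_weather",
    "arguments": {"location": "Boston, MA", "unit": "fahrenheit"},
}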

Adding Your Own Tools

  1. Create a new class inheriting from Tool
  2. Define parameters and execution logic
  3. Register with the MCP handler

Example:

class MyTool(Tool):
    def __init__(self):
        parameters = [
            {
                "name": "param1",
                "type": "string",
                "description": "Description of param1",
                "required": True
            }
        ]
        
        super().__init__(
            name="my_tool",
            description="Description of my tool",
            parameters=parameters
        )
    
    def execute(self, param1):
        # Tool logic here
        return {"result": "Processed " + param1}
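
Registering the tool might then look like this (hypothetical sketch; the handler's actual registration API is defined in mcp_handler.py):

from mcp_handler import MCPHandler  # hypothetical import path

handler = MCPHandler()
handler.register_tool(MyTool())  # method name is an assumption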

MCP Configuration and Invocation Flow

  1. Tool Registration:

    • MCP tools are registered with the handler
    • Each tool provides its name, description, and parameter definitions
  2. Tool Definition Injection:

    • Tool definitions are added to the system message
    • Format follows XML structure for MCP (see the sketch after this list)
  3. LLM Response Processing:

    • LLM generates responses that may include tool invocations
    • Pattern matching identifies tool calls in the text
    • Tool parameters are parsed and passed to tool execution methods
  4. Tool Execution:

    • Tools are executed with the provided parameters
    • Results are injected back into the conversation
  5. Conversation Management:

    • Processed responses with tool results are added to conversation history
    • Future LLM requests include this history for context
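
As a sketch of step 2, the handler might serialize each registered tool into an XML block appended to the system message. The schema below is an assumption for illustration; the real format lives in mcp_handler.py.

# Illustrative serializer for tool-definition injection (assumed schema)
def build_tool_definitions(tools):
    lines = ["<tools>"]
    for tool in tools:
        lines.append(f'  <tool name="{tool.name}">')
        lines.append(f"    <description>{tool.description}</description>")
        for p in tool.parameters:
            lines.append(
                f'    <parameter name="{p["name"]}" type="{p["type"]}" '
                f'required="{str(p["required"]).lower()}">{p["description"]}</parameter>'
            )
        lines.append("  </tool>")
    lines.append("</tools>")
    return "\n".join(lines)

# The result is prepended to the system prompt before each LLM request, e.g.:
# system_message = base_prompt + "\n" + build_tool_definitions(handler.tools)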

Example Conversation

Here's an example of what a conversation might look like:

User: What's the weather like in Boston?

System: Sends prompt with MCP tool definitions to LLM

LLM Response:

I'll check the weather for you.

get_weather(location="Boston, MA", unit="fahrenheit")

MCP Handler: Parses the response, finds the tool call, and executes the weather tool

Tool Execution Result:

Result from get_weather:
{
  "location": "Boston, MA",
  "temperature": 72,
  "unit": "fahrenheit",
  "conditions": "Partly Cloudy",
  "humidity": 68,
  "wind_speed": 5.8
}

Processed Response (sent back to user):

I'll check the weather for you.

get_weather(location="Boston, MA", unit="fahrenheit")

Result from get_weather:
{
  "location": "Boston, MA",
  "temperature": 72,
  "unit": "fahrenheit",
  "conditions": "Partly Cloudy",
  "humidity": 68,
  "wind_speed": 5.8
}

User: Can you calculate the square root of 144?

LLM Response:

I can calculate that for you.

calculator(expression="sqrt(144)")

MCP Handler: Parses response, executes calculator tool

Tool Execution Result:

Result from calculator:
{
  "expression": "sqrt(144)",
  "result": 12.0
}

Processed Response (sent back to user):

I can calculate that for you.

calculator(expression="sqrt(144)")

Result from calculator:
{
  "expression": "sqrt(144)",
  "result": 12.0
}

The square root of 144 is 12.

This demonstrates the complete flow of MCP tool usage, from the LLM's text-based invocation through execution and response processing.

License

MIT

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
