Figma MCP Server with Chunking
A Model Context Protocol (MCP) server for interacting with the Figma API, featuring memory-efficient chunking and pagination capabilities for handling large Figma files.
Overview
This MCP server provides a robust interface to the Figma API with built-in memory management features. It's designed to handle large Figma files efficiently by breaking down operations into manageable chunks and implementing pagination where necessary.
Key Features
- Memory-aware processing with configurable limits
- Chunked data retrieval for large files
- Pagination support for all listing operations
- Node type filtering
- Progress tracking
- Configurable chunk sizes
- Resume capability for interrupted operations
- Debug logging
- Config file support
Installation
Installing via Smithery
To install Figma MCP Server with Chunking for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @ArchimedesCrypto/figma-mcp-chunked --client claude
Manual Installation
# Clone the repository
git clone [repository-url]
cd figma-mcp-chunked
# Install dependencies
npm install
# Build the project
npm run build
Configuration
Environment Variables
FIGMA_ACCESS_TOKEN: Your Figma API access token
Config File
You can provide configuration via a JSON file using the --config flag:
{
"mcpServers": {
"figma": {
"env": {
"FIGMA_ACCESS_TOKEN": "your-access-token"
}
}
}
}
Usage:
node build/index.js --config=path/to/config.json
Tools
get_file_data (New)
Retrieves Figma file data with memory-efficient chunking and pagination.
{
"name": "get_file_data",
"arguments": {
"fileKey": "your-file-key",
"accessToken": "your-access-token",
"pageSize": 100, // Optional: nodes per chunk
"maxMemoryMB": 512, // Optional: memory limit
"nodeTypes": ["FRAME", "COMPONENT"], // Optional: filter by type
"cursor": "next-page-token", // Optional: resume from last position
"depth": 2 // Optional: traversal depth
}
}
Response:
{
"nodes": [...],
"memoryUsage": 256.5,
"nextCursor": "next-page-token",
"hasMore": true
}
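A client can keep calling the tool and feeding nextCursor back in until hasMore is false. The following is a minimal sketch, assuming the TypeScript MCP SDK's Client.callTool API and that the tool returns its JSON result as a single text content block (both are assumptions; check your SDK version and the server's actual response shape):
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
// Sketch: page through a large Figma file chunk by chunk.
// Assumes `client` is already connected to the figma-mcp-chunked server.
async function fetchAllNodes(client: Client, fileKey: string, accessToken: string) {
  const nodes: unknown[] = [];
  let cursor: string | undefined;
  do {
    const result = await client.callTool({
      name: "get_file_data",
      arguments: {
        fileKey,
        accessToken,
        pageSize: 100,                     // nodes per chunk
        nodeTypes: ["FRAME", "COMPONENT"], // optional type filter
        ...(cursor ? { cursor } : {}),     // resume from the last position
      },
    });
    // Assumption: the result arrives as one JSON text block,
    // shaped like the Response example above.
    const page = JSON.parse((result.content as any)[0].text);
    nodes.push(...page.nodes);
    cursor = page.hasMore ? page.nextCursor : undefined;
  } while (cursor);
  return nodes;
}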
list_files
Lists files with pagination support.
{
"name": "list_files",
"arguments": {
"project_id": "optional-project-id",
"team_id": "optional-team-id"
}
}
get_file_versions
Retrieves version history in chunks.
{
"name": "get_file_versions",
"arguments": {
"file_key": "your-file-key"
}
}
get_file_comments
Retrieves comments with pagination.
{
"name": "get_file_comments",
"arguments": {
"file_key": "your-file-key"
}
}
get_file_info
Retrieves file information with chunked node traversal.
{
"name": "get_file_info",
"arguments": {
"file_key": "your-file-key",
"depth": 2, // Optional: traversal depth
"node_id": "specific-node-id" // Optional: start from specific node
}
}
get_components
Retrieves components with chunking support.
{
"name": "get_components",
"arguments": {
"file_key": "your-file-key"
}
}
get_styles
Retrieves styles with chunking support.
{
"name": "get_styles",
"arguments": {
"file_key": "your-file-key"
}
}
get_file_nodes
Retrieves specific nodes with chunking support.
{
"name": "get_file_nodes",
"arguments": {
"file_key": "your-file-key",
"ids": ["node-id-1", "node-id-2"]
}
}
Memory Management
The server implements several strategies to manage memory efficiently:
Chunking Strategy
- Configurable chunk sizes via pageSize
- Memory usage monitoring
- Automatic chunk size adjustment based on memory pressure (see the sketch below)
- Progress tracking per chunk
- Resume capability using cursors
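The chunk size adjustment can be pictured with a small sketch. This is illustrative only, not the server's actual code, and nextChunkSize is a made-up helper name: it shrinks the chunk when heap usage approaches maxMemoryMB and grows it again when there is headroom.
// Illustrative only -- not the server's actual implementation.
function nextChunkSize(currentSize: number, maxMemoryMB: number): number {
  const heapUsedMB = process.memoryUsage().heapUsed / (1024 * 1024);
  if (heapUsedMB > maxMemoryMB * 0.8) {
    // Under memory pressure: halve the chunk, but keep making progress.
    return Math.max(10, Math.floor(currentSize / 2));
  }
  if (heapUsedMB < maxMemoryMB * 0.4) {
    // Plenty of headroom: grow back toward the configured pageSize.
    return Math.min(currentSize * 2, 100);
  }
  return currentSize;
}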
Best Practices
- Start with smaller chunk sizes (50-100 nodes) and adjust based on performance
- Monitor memory usage through the response metadata
- Use node type filtering when possible to reduce data load
- Implement pagination for large datasets
- Use the resume capability for very large files
Configuration Options
- pageSize: Number of nodes per chunk (default: 100)
- maxMemoryMB: Maximum memory usage in MB (default: 512)
- nodeTypes: Filter specific node types
- depth: Control traversal depth for nested structures
Debug Logging
The server includes comprehensive debug logging:
// Debug log examples
[MCP Debug] Loading config from config.json
[MCP Debug] Access token found xxxxxxxx...
[MCP Debug] Request { tool: 'get_file_data', arguments: {...} }
[MCP Debug] Response size 2.5 MB
Error Handling
The server provides detailed error messages and suggestions:
// Memory limit error
"Response size too large. Try using a smaller depth value or specifying a node_id.""
// Invalid parameters
"Missing required parameters: fileKey and accessToken"
// API errors
"Figma API error: [detailed message]"
Troubleshooting
Common Issues
- Memory Errors
  - Reduce chunk size
  - Use node type filtering
  - Implement pagination
  - Specify smaller depth values
- Performance Issues
  - Monitor memory usage
  - Adjust chunk sizes
  - Use appropriate node type filters
  - Implement caching for frequently accessed data
- API Limits
  - Implement rate limiting
  - Use pagination
  - Cache responses when possible (see the sketch below)
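For the caching suggestion above, a hypothetical client-side memoization helper might look like this (cachedCall is illustrative and not something the server ships):
// Hypothetical client-side cache for repeated, identical tool calls.
const responseCache = new Map<string, unknown>();
async function cachedCall(
  callTool: (name: string, args: Record<string, unknown>) => Promise<unknown>,
  name: string,
  args: Record<string, unknown>,
): Promise<unknown> {
  const key = `${name}:${JSON.stringify(args)}`;
  const hit = responseCache.get(key);
  if (hit !== undefined) return hit; // serve from cache, no API call
  const result = await callTool(name, args);
  responseCache.set(key, result);
  return result;
}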
Debug Mode
Enable debug logging for detailed information:
# Set debug environment variable
export DEBUG=true
Contributing
Contributions are welcome! Please read our contributing guidelines and submit pull requests to our repository.
License
This project is licensed under the MIT License - see the LICENSE file for details.