FastMCP_RecSys
A mockup full-stack app built with React, FastAPI, MongoDB, and Docker, powered by AWS Rekognition & CLIP for multi-label tagging and clothing recommendations.
This is a CLIP-based fashion recommender with MCP.
Sample UI Components
- Image upload
- Submit button
- Display clothing tags + recommendations
Mockup
A user uploads a clothing image → YOLO detects clothing → CLIP encodes → similar items are recommended
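The flow above can be sketched end to end in a few lines. This is a minimal illustration only; the function names (`detect_clothing`, `encode_with_clip`, `recommend`) and the stubbed return values are assumptions, not the project's actual API.

```python
# Illustrative sketch of the upload → detect → encode → recommend flow.
# All functions are stand-ins for the real detector, CLIP encoder, and
# nearest-neighbour lookup.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    left: float
    top: float
    width: float
    height: float

def detect_clothing(image_bytes):
    # Stand-in for the detector (YOLO / AWS Rekognition).
    return [BoundingBox(0.1, 0.1, 0.5, 0.5)]

def encode_with_clip(crop):
    # Stand-in for CLIP image encoding (512-d embedding).
    return [0.0] * 512

def recommend(embedding, top_k=5):
    # Stand-in for a similarity search over catalogue embeddings.
    return ["item-1", "item-2"][:top_k]

def run_pipeline(image_bytes):
    results = []
    for box in detect_clothing(image_bytes):
        crop = (image_bytes, box)  # cropping stubbed out for the sketch
        results.extend(recommend(encode_with_clip(crop)))
    return results
```

Each step is a seam where the real implementation (Rekognition client, CLIP model, vector index) can be swapped in and mocked out in tests.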
Folder Structure
/project-root
│
├── /backend
│   ├── Dockerfile
│   ├── /app
│   │   ├── /aws
│   │   │   └── rekognition_wrapper.py   # AWS Rekognition logic
│   │   ├── /utils
│   │   │   └── image_utils.py           # Bounding box crop utils
│   │   ├── /controllers
│   │   │   ├── clothing_detector.py     # Coordinates Rekognition + cropping
│   │   │   ├── clothing_controller.py
│   │   │   ├── clothing_tagging.py
│   │   │   └── tag_extractor.py         # Pending: define core CLIP functionality
│   │   ├── /tests
│   │   │   ├── test_rekognition_wrapper.py
│   │   │   └── test_clothing_tagging.py
│   │   ├── /routes
│   │   │   └── clothing_routes.py
│   │   ├── /schemas
│   │   │   └── clothing_schemas.py
│   │   ├── /config
│   │   │   ├── tag_list_en.py           # Tag mapping (tool: https://jsoncrack.com/editor)
│   │   │   ├── database.py
│   │   │   ├── settings.py
│   │   │   └── api_keys.py
│   │   ├── server.py                    # FastAPI app code
│   │   └── requirements.txt
│   └── .env
│
├── /frontend
│   ├── Dockerfile
│   ├── package.json
│   ├── package-lock.json
│   ├── /public
│   │   └── index.html
│   ├── /src
│   │   ├── /components
│   │   │   ├── ImageUpload.jsx
│   │   │   ├── DetectedTags.jsx
│   │   │   └── Recommendations.jsx
│   │   ├── /utils
│   │   │   └── api.js
│   │   ├── App.js                       # Main React component
│   │   ├── index.js
│   │   ├── index.css
│   │   ├── tailwind.config.js
│   │   └── postcss.config.js
│   └── .env
├── docker-compose.yml
└── README.md
Quick Start Guide
Step 1: Clone the GitHub Project
Step 2: Set Up the Python Environment
python -m venv venv
source venv/bin/activate # On macOS or Linux
venv\Scripts\activate # On Windows
Step 3: Install Dependencies
pip install -r requirements.txt
Step 4: Start the FastAPI Server (Backend)
uvicorn backend.app.server:app --reload
Once the server is running and the database is connected, you should see the following message in the console:
Database connected
INFO: Application startup complete.
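The "Database connected" line comes from the app's startup hook. Below is a stdlib-only sketch of that startup/shutdown pattern (FastAPI's lifespan hooks work the same way); the names and messages here are illustrative, not the repo's actual server code.

```python
# Sketch of the startup/shutdown lifecycle: connect to the database on
# startup, serve, then disconnect on shutdown. In the real app the prints
# would be a MongoDB connection via the config/database.py module.
import asyncio
from contextlib import asynccontextmanager

events = []

@asynccontextmanager
async def lifespan(app):
    # Startup: in the real app this opens the MongoDB connection.
    events.append("Database connected")
    yield
    # Shutdown: close the connection.
    events.append("Database disconnected")

async def run_server(app=None):
    async with lifespan(app):
        events.append("Application startup complete.")

asyncio.run(run_server())
```

If the first line never appears, check the MongoDB URI and credentials in the backend `.env` before debugging the FastAPI code itself.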
Step 5: Install Frontend Dependencies
cd frontend
npm install
Step 6: Start the Development Server (Frontend)
npm start
Once running, the server logs a confirmation and opens the app in your browser: http://localhost:3000/
What's completed so far:
1. FastAPI server is up and running (24 Apr)
2. Database connection is set up (24 Apr)
3. Backend architecture is functional (24 Apr)
4. Basic front-end UI for uploading pictures (25 Apr)
5. Mock Testing for AWS Rekognition -> bounding box (15 May)
PYTHONPATH=. pytest backend/app/tests/test_rekognition_wrapper.py
- Tested the Rekognition integration logic independently using a mock; verified it extracts bounding boxes only when labels match the garment set
- Confirmed the folder structure and PYTHONPATH=. work smoothly with pytest from the project root
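A mock test in this spirit can be written with `unittest.mock`, patching the boto3 Rekognition client so no AWS call is made. `detect_garments` and `GARMENT_LABELS` below are assumed names for illustration, not the wrapper's exact API.

```python
# Illustrative mock test: bounding boxes are returned only for labels
# in the garment set, without ever calling AWS.
from unittest.mock import MagicMock

GARMENT_LABELS = {"T-Shirt", "Dress", "Jacket"}

def detect_garments(client, image_bytes):
    """Return bounding boxes only for labels in the garment set."""
    response = client.detect_labels(Image={"Bytes": image_bytes})
    boxes = []
    for label in response["Labels"]:
        if label["Name"] in GARMENT_LABELS:
            boxes += [inst["BoundingBox"] for inst in label.get("Instances", [])]
    return boxes

# Mock the boto3 Rekognition client instead of hitting the service.
mock_client = MagicMock()
mock_client.detect_labels.return_value = {
    "Labels": [
        {"Name": "T-Shirt",
         "Instances": [{"BoundingBox": {"Left": 0.1, "Top": 0.1,
                                        "Width": 0.5, "Height": 0.5}}]},
        # Non-garment label: must be filtered out.
        {"Name": "Tree",
         "Instances": [{"BoundingBox": {"Left": 0.0, "Top": 0.0,
                                        "Width": 1.0, "Height": 1.0}}]},
    ]
}

boxes = detect_garments(mock_client, b"fake-jpeg-bytes")
```

The mocked response mirrors Rekognition's `DetectLabels` shape (`Labels` → `Instances` → `BoundingBox`), so the filtering logic under test is exercised against realistic data.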
6. Mock Testing for AWS Rekognition -> CLIP (20 May)
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
This test covers:
- Detecting garments using AWS Rekognition
- Cropping the image around detected bounding boxes
- Tagging the cropped image using CLIP
7. Mock Testing for the full image tagging pipeline (image bytes → AWS Rekognition detects garments → crop images → CLIP predicts tags) + error handling
| Negative Test Case | Description |
|---|---|
| No Detection Result | AWS doesn't detect any garments → should return an empty list. |
| Image Not Clothing | CLIP returns vague or empty tags → verify fallback behavior. |
| AWS Returns Exception | Simulate `rekognition.detect_labels` throwing an error → check the try/except path. |
| Corrupted Image File | Simulate a broken (non-JPEG) image → verify it raises an error or gives a hint. |
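The "AWS Returns Exception" row can be exercised by giving the mocked client a `side_effect`. This is a sketch under assumed names (`tag_image` is hypothetical); it shows the expected behavior of degrading to an empty tag list rather than crashing.

```python
# Negative test sketch: if Rekognition raises, the pipeline should
# swallow the provider error and return no tags instead of crashing.
from unittest.mock import MagicMock

def tag_image(client, image_bytes):
    try:
        response = client.detect_labels(Image={"Bytes": image_bytes})
    except Exception:
        return []  # caller sees "no tags" rather than a 500
    return [label["Name"] for label in response["Labels"]]

failing_client = MagicMock()
failing_client.detect_labels.side_effect = RuntimeError("Rekognition unavailable")

tags = tag_image(failing_client, b"broken-bytes")
```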
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
- detect_garments: simulates AWS Rekognition returning one bounding box: {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
- crop_by_bounding_box: simulates the cropping step returning a dummy "cropped_image" object
- get_tags_from_clip: simulates CLIP returning a list of tags: ["T-shirt", "Cotton", "Casual"]
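Wired together, the three mocked steps above form the full pipeline under test. The stubs below return exactly the dummy values listed; `tag_clothing` is an assumed name for the function that chains them.

```python
# Sketch of the full mocked pipeline: detect → crop → tag, using the
# dummy values from the bullets above.
def detect_garments(image_bytes):
    # Simulated Rekognition result: one bounding box.
    return [{"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}]

def crop_by_bounding_box(image_bytes, box):
    # Simulated cropping step.
    return "cropped_image"

def get_tags_from_clip(cropped):
    # Simulated CLIP tagging.
    return ["T-shirt", "Cotton", "Casual"]

def tag_clothing(image_bytes):
    tags = []
    for box in detect_garments(image_bytes):
        cropped = crop_by_bounding_box(image_bytes, box)
        tags.extend(get_tags_from_clip(cropped))
    return tags

result = tag_clothing(b"fake-image-bytes")
```

Because each stage is a plain function, the real Rekognition, PIL cropping, and CLIP implementations can replace the stubs one at a time while the same test asserts the end-to-end contract.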
Next Steps:
- Evaluate CLIPβs tagging accuracy on sample clothing images
- Fine-tune the tagging system for better recommendations
- Test the backend integration with real-time user data
- Set up monitoring for model performance
- Front-end demo