Databases
by pluginagentmarketplace
Master relational and NoSQL databases. Learn PostgreSQL, MySQL, MongoDB, Redis, and other technologies for data persistence, optimization, and scaling.
Skill Details
Repository Files
10 files in this skill directory
name: databases
description: Master relational and NoSQL databases. Learn PostgreSQL, MySQL, MongoDB, Redis, and other technologies for data persistence, optimization, and scaling.
sasmp_version: "2.0.0"
bonded_agent: 02-database-management
bond_type: PRIMARY_BOND
=== PRODUCTION-GRADE SKILL CONFIG (SASMP v2.0.0) ===
atomic_operations:
- DATABASE_SELECTION
- SCHEMA_DESIGN
- QUERY_OPTIMIZATION
- BACKUP_CONFIGURATION
parameter_validation:
  query:
    type: string
    required: true
    minLength: 5
    maxLength: 2000
  database_type:
    type: string
    enum: [postgresql, mysql, mongodb, redis, cassandra, elasticsearch]
    required: false
  operation:
    type: string
    enum: [design, optimize, backup, migrate]
    required: false
retry_logic:
  max_attempts: 3
  backoff: exponential
  initial_delay_ms: 2000
logging_hooks:
  on_invoke: "skill.databases.invoked"
  on_success: "skill.databases.completed"
  on_error: "skill.databases.failed"
exit_codes:
  SUCCESS: 0
  INVALID_INPUT: 1
  CONNECTION_ERROR: 2
  QUERY_ERROR: 3
  OPTIMIZATION_FAILED: 4
Database Management Skill
Bonded to: database-management-agent
Quick Start
# Invoke databases skill
"Design a database schema for my e-commerce application"
"Optimize slow queries in PostgreSQL"
"Set up Redis caching for session storage"
Instructions
- Analyze Requirements: Understand data patterns, volume, access needs
- Select Database: Choose SQL vs NoSQL based on requirements
- Design Schema: Create data models, relationships, constraints
- Optimize Queries: Implement indexes, analyze execution plans
- Set Up Operations: Configure backup, replication, monitoring
Database Selection Guide
| Type | Best For | ACID | Scale | Examples |
|---|---|---|---|---|
| Relational | Complex queries, transactions | Full | Vertical | PostgreSQL, MySQL |
| Document | Flexible schema, JSON | Partial | Horizontal | MongoDB |
| Key-Value | Caching, sessions | No | Horizontal | Redis |
| Wide-Column | Time series, analytics | Partial | Horizontal | Cassandra |
| Graph | Relationships | Varies | Varies | Neo4j |
| Search | Full-text search | No | Horizontal | Elasticsearch |
Decision Tree
Need ACID transactions?
│
├─→ Yes → Complex queries?
│         ├─→ Yes → PostgreSQL
│         └─→ No  → MySQL
│
└─→ No  → Data type?
          ├─→ Documents/JSON   → MongoDB
          ├─→ Key-Value pairs  → Redis
          ├─→ Time series      → Cassandra/TimescaleDB
          └─→ Full-text search → Elasticsearch
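The decision tree above can be encoded as a small helper for programmatic use. A minimal sketch (the function name and `data_type` labels are illustrative, not part of the skill's API):

```python
def choose_database(needs_acid: bool,
                    complex_queries: bool = False,
                    data_type: str = "") -> str:
    """Walk the decision tree: ACID requirement first, then data shape."""
    if needs_acid:
        # ACID branch: query complexity picks the engine
        return "PostgreSQL" if complex_queries else "MySQL"
    # Non-ACID branch: the shape of the data picks the engine
    return {
        "documents": "MongoDB",
        "key-value": "Redis",
        "time-series": "Cassandra/TimescaleDB",
        "search": "Elasticsearch",
    }.get(data_type, "PostgreSQL")  # safe general-purpose default when unsure
```

For example, `choose_database(True, complex_queries=True)` returns `"PostgreSQL"`, matching the top branch of the tree.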
Examples
Example 1: Schema Design
-- E-commerce schema
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email VARCHAR(255) UNIQUE NOT NULL,
password_hash VARCHAR(255) NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE TABLE products (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(255) NOT NULL,
price DECIMAL(10,2) NOT NULL CHECK (price > 0),
stock INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE orders (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
total DECIMAL(10,2) NOT NULL,
status VARCHAR(50) DEFAULT 'pending',
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Indexes for common queries
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_orders_created_at ON orders(created_at DESC);
Example 2: Query Optimization
-- Before: Full table scan
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = '00000000-0000-0000-0000-000000000001';
-- After: Add index
CREATE INDEX idx_orders_user_id ON orders(user_id);
-- Optimized query with selected columns
SELECT id, total, status, created_at
FROM orders
WHERE user_id = '00000000-0000-0000-0000-000000000001'
ORDER BY created_at DESC
LIMIT 10;
Example 3: Redis Caching
import redis
import json

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

def get_user(user_id: str) -> dict | None:
    # Try cache first
    cached = r.get(f"user:{user_id}")
    if cached:
        return json.loads(cached)
    # Cache miss - fetch from DB (db and User are your ORM session and model)
    user = db.query(User).get(user_id)
    if user is None:
        return None
    # Cache the serialized row for one hour
    r.setex(f"user:{user_id}", 3600, json.dumps(user.dict()))
    return user.dict()
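The read path above needs a matching write path: when a user row changes, the cached copy must be dropped, or readers see stale data until the TTL expires. A minimal delete-on-write sketch (the `cache` argument only needs a `delete` method, so it works with the `redis` client above or any stub):

```python
def invalidate_user(cache, user_id: str) -> None:
    """Drop the cached copy so the next get_user() repopulates from the DB.

    Call this AFTER the DB commit succeeds, so a concurrent reader cannot
    re-cache the old row over the new one.
    """
    cache.delete(f"user:{user_id}")
```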
Troubleshooting
Common Issues
| Issue | Cause | Solution |
|---|---|---|
| Query timeout | Missing index | Run EXPLAIN ANALYZE, add index |
| Connection refused | Wrong config | Check host, port, credentials |
| Deadlock | Concurrent updates | Use proper isolation, retry logic |
| OOM on query | Large result set | Add LIMIT, use cursors |
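The deadlock row above usually resolves to "retry the whole transaction". A minimal retry sketch with exponential backoff and jitter; it is parameterized by exception type, since the driver-specific class (e.g. psycopg2's `errors.DeadlockDetected`) depends on your stack:

```python
import random
import time

def run_with_retry(txn, retryable=(Exception,), max_attempts=3,
                   base_delay=0.05):
    """Run `txn` (a zero-arg callable wrapping one transaction), retrying
    on `retryable` exceptions with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except retryable:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            # Sleep base, 2*base, 4*base, ... with up to 100% random jitter
            time.sleep(base_delay * 2 ** (attempt - 1) * (1 + random.random()))
```

Retrying is only safe when `txn` re-reads its inputs each attempt; the jitter keeps two deadlocked transactions from colliding again on the same schedule.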
Debug Commands
-- PostgreSQL: Check slow queries (requires the pg_stat_statements extension)
SELECT query, calls, mean_exec_time  -- mean_time on PostgreSQL 12 and earlier
FROM pg_stat_statements
ORDER BY mean_exec_time DESC LIMIT 10;
-- Check active connections
SELECT * FROM pg_stat_activity WHERE state = 'active';
-- Check table sizes
SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC;
Test Template
# tests/test_database.py
import pytest
from sqlalchemy import create_engine, text
from sqlalchemy.exc import IntegrityError

class TestDatabaseSchema:
    @pytest.fixture
    def engine(self):
        return create_engine("postgresql://test:test@localhost/testdb")

    def test_users_table_exists(self, engine):
        with engine.connect() as conn:
            result = conn.execute(text("SELECT 1 FROM users LIMIT 1"))
            assert result is not None

    def test_foreign_key_constraint(self, engine):
        # A well-formed but nonexistent UUID triggers the FK violation
        with engine.connect() as conn, pytest.raises(IntegrityError):
            conn.execute(text(
                "INSERT INTO orders (user_id, total) "
                "VALUES ('00000000-0000-0000-0000-000000000000', 100)"
            ))
References
See references/ directory for:
- DATABASE_GUIDE.md - Detailed database patterns
- Schema templates and examples
Resources
Related Skills
Xlsx
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas
Clickhouse Io
ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.
Analyzing Financial Statements
This skill calculates key financial ratios and metrics from financial statement data for investment analysis
Data Storytelling
Transform data into compelling narratives using visualization, context, and persuasive structure. Use when presenting analytics to stakeholders, creating data reports, or building executive presentations.
Kpi Dashboard Design
Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboards, selecting metrics, or designing data visualization layouts.
Dbt Transformation Patterns
Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.
Sql Optimization Patterns
Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when debugging slow queries, designing database schemas, or optimizing application performance.
Anndata
This skill should be used when working with annotated data matrices in Python, particularly for single-cell genomics analysis, managing experimental measurements with metadata, or handling large-scale biological datasets. Use when tasks involve AnnData objects, h5ad files, single-cell RNA-seq data, or integration with scanpy/scverse tools.
