# Discover Data

*by rand*

Automatically discover data pipeline and ETL skills when working with ETL. Activates for data development tasks.
---
name: discover-data
description: Automatically discover data pipeline and ETL skills when working with ETL. Activates for data development tasks.
---
# Data Skills Discovery

Provides automatic access to comprehensive data skills.
## When This Skill Activates

This skill auto-activates when you're working with:
- ETL
- data pipelines
- batch processing
- stream processing
- data validation
- orchestration
- Airflow
- timely dataflow
- differential dataflow
- streaming aggregations
- windowing
- real-time analytics
## Available Skills

### Quick Reference
The Data category contains 9 skills:
- batch-processing - Orchestrating complex data pipelines with dependencies
- data-validation - Validating data schema before processing
- dataflow-coordination - Coordination patterns for distributed dataflow systems
- differential-dataflow - Differential computation for incremental updates and efficient joins
- etl-patterns - Designing data extraction from multiple sources
- pipeline-orchestration - Coordinating complex multi-step data workflows
- stream-processing - Processing real-time event streams (Kafka, Flink)
- streaming-aggregations - Windowing, sessionization, time-series aggregation
- timely-dataflow - Low-latency streaming computation with progress tracking
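As a taste of what the data-validation skill covers, here is a minimal pre-processing schema check; the field names and types are hypothetical, not part of the skill itself:

```python
# Minimal sketch of a schema check run before processing a batch.
# EXPECTED_SCHEMA and the record fields are made-up examples.
EXPECTED_SCHEMA = {"user_id": int, "event": str, "ts": float}

def validate_record(record: dict) -> list[str]:
    """Return a list of schema violations for one record (empty if valid)."""
    errors = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for {field}: {type(record[field]).__name__}")
    return errors

print(validate_record({"user_id": 42, "event": "click", "ts": "not-a-float"}))
# → ['wrong type for ts: str']
```

A real pipeline would typically route failing records to a dead-letter queue rather than drop them; the data-validation skill goes into those patterns.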
## Load Full Category Details
For complete descriptions and workflows:
```bash
cat ~/.claude/skills/data/INDEX.md
```
This loads the full Data category index with:
- Detailed skill descriptions
- Usage triggers for each skill
- Common workflow combinations
- Cross-references to related skills
## Load Specific Skills
Load individual skills as needed:
```bash
# Traditional ETL/Batch
cat ~/.claude/skills/data/batch-processing.md
cat ~/.claude/skills/data/data-validation.md
cat ~/.claude/skills/data/etl-patterns.md
cat ~/.claude/skills/data/pipeline-orchestration.md

# Stream Processing
cat ~/.claude/skills/data/stream-processing.md
cat ~/.claude/skills/data/streaming-aggregations.md

# Advanced Dataflow Systems
cat ~/.claude/skills/data/timely-dataflow.md
cat ~/.claude/skills/data/differential-dataflow.md
cat ~/.claude/skills/data/dataflow-coordination.md
```
## Common Workflow Combinations

### Real-Time Analytics Pipeline
```bash
# Load these skills together:
cat ~/.claude/skills/data/stream-processing.md       # Kafka setup
cat ~/.claude/skills/data/streaming-aggregations.md  # Windowing patterns
cat ~/.claude/skills/data/dataflow-coordination.md   # Coordination
```
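The windowing idea at the heart of this combination can be sketched without any streaming framework. A minimal tumbling-window counter over `(timestamp, key)` events, using made-up event data:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size=60):
    """Group (timestamp, key) events into fixed-size windows, counting per key.

    Each event falls into exactly one window starting at a multiple of
    window_size; real engines (Flink, Kafka Streams) add watermarks and
    late-data handling on top of this basic assignment.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_size) * window_size
        windows[window_start][key] += 1
    return {start: dict(counts) for start, counts in windows.items()}

events = [(5, "click"), (30, "view"), (65, "click"), (70, "click")]
print(tumbling_window_counts(events))
# → {0: {'click': 1, 'view': 1}, 60: {'click': 2}}
```

The streaming-aggregations skill covers the harder variants: sliding and session windows, watermarks, and out-of-order events.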
### Incremental Computation System
```bash
# Load these skills together:
cat ~/.claude/skills/data/timely-dataflow.md        # Foundation
cat ~/.claude/skills/data/differential-dataflow.md  # Incremental updates
cat ~/.claude/skills/data/dataflow-coordination.md  # Distributed coordination
```
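Differential dataflow's core idea, maintaining a collection by applying `(record, diff)` changes instead of recomputing from scratch, can be illustrated with a toy sketch (this is a conceptual illustration, not the actual Rust API):

```python
def apply_delta(counts: dict, delta: list) -> dict:
    """Apply a batch of (key, +n/-n) diffs to a running multiset of counts.

    Keys whose count reaches zero are retired, so the state stays
    proportional to the live collection, which is what makes
    incremental computation cheap compared to full recomputation.
    """
    counts = dict(counts)  # leave the previous state untouched
    for key, diff in delta:
        counts[key] = counts.get(key, 0) + diff
        if counts[key] == 0:
            del counts[key]
    return counts

state = {"a": 2, "b": 1}
state = apply_delta(state, [("a", 1), ("b", -1), ("c", 1)])
print(state)  # → {'a': 3, 'c': 1}
```

The real differential-dataflow skill adds the timestamp dimension that lets operators like joins and aggregations update incrementally too.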
### Hybrid Batch + Stream
```bash
# Load these skills together:
cat ~/.claude/skills/data/batch-processing.md        # Batch jobs
cat ~/.claude/skills/data/stream-processing.md       # Stream processing
cat ~/.claude/skills/data/pipeline-orchestration.md  # Overall coordination
```
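A common serving step in hybrid designs (the lambda-architecture pattern) merges a periodically recomputed batch view with a realtime delta view. A hedged sketch with made-up counts:

```python
def merge_views(batch_view: dict, realtime_view: dict) -> dict:
    """Combine a batch-computed view with realtime deltas accumulated
    since the last batch run; the batch recomputation periodically
    resets drift introduced by the approximate realtime path."""
    merged = dict(batch_view)
    for key, value in realtime_view.items():
        merged[key] = merged.get(key, 0) + value
    return merged

batch = {"a": 10, "b": 5}        # last nightly batch job
realtime = {"b": 2, "c": 1}      # events since that run
print(merge_views(batch, realtime))  # → {'a': 10, 'b': 7, 'c': 1}
```

The pipeline-orchestration skill covers scheduling the batch recomputation and cutting over between views.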
## Progressive Loading
This gateway skill enables progressive loading:
- Level 1: Gateway loads automatically (you're here now)
- Level 2: Load category INDEX.md for full overview
- Level 3: Load specific skills as needed
## Usage Instructions
- **Auto-activation**: This skill loads automatically when Claude Code detects data work
- **Browse skills**: Run `cat ~/.claude/skills/data/INDEX.md` for the full category overview
- **Load specific skills**: Use the bash commands above to load individual skills

**Next Steps**: Run `cat ~/.claude/skills/data/INDEX.md` to see the full category details.
## Related Skills
### Xlsx

Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas.
### Clickhouse Io

ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.
### Analyzing Financial Statements

This skill calculates key financial ratios and metrics from financial statement data for investment analysis.
### Data Storytelling

Transform data into compelling narratives using visualization, context, and persuasive structure. Use when presenting analytics to stakeholders, creating data reports, or building executive presentations.
### Kpi Dashboard Design

Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboards, selecting metrics, or designing data visualization layouts.
### Dbt Transformation Patterns

Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.
### Sql Optimization Patterns

Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when debugging slow queries, designing database schemas, or optimizing application performance.
### Clinical Decision Support

Generate professional clinical decision support (CDS) documents for pharmaceutical and clinical research settings, including patient cohort analyses (biomarker-stratified with outcomes) and treatment recommendation reports (evidence-based guidelines with decision algorithms). Supports GRADE evidence grading, statistical analysis (hazard ratios, survival curves, waterfall plots), biomarker integration, and regulatory compliance. Outputs publication-ready LaTeX/PDF format optimized for drug development.
### Anndata

This skill should be used when working with annotated data matrices in Python, particularly for single-cell genomics analysis, managing experimental measurements with metadata, or handling large-scale biological datasets. Use when tasks involve AnnData objects, h5ad files, single-cell RNA-seq data, or integration with scanpy/scverse tools.
