Discover and use data skills to extend Claude's capabilities
665 Data Skills Available
This skill should be used when the user asks about "data table", "tanstack table", "sortable table", "filterable table", "table pagination", "column sorting", "row selection", or mentions building data tables, grids, or TanStack Table integration with shadcn.
Run data quality checks and validation rules in DHIS2. Use for validation rules, violations, or data quality analysis. Routed via dhis2 skill for general DHIS2 requests.
Get DHIS2 system information including version, database details, and server configuration. Use for version checks or server capabilities. Routed via dhis2 skill for general DHIS2 requests.
Extract data elements and data element groups from DHIS2. Use for data element metadata, groups, or category option combos. Routed via dhis2 skill for general DHIS2 requests.
Comprehensive descriptive analysis of tabular datasets with grouped statistics, entity extraction, and publication-ready summaries. Use when analyzing Excel/CSV datasets requiring: (1) Overall and grouped descriptive statistics, (2) Frequency distributions by categorical variables, (3) Entity extraction from text fields, (4) Temporal dynamics analysis, (5) CSV output tables with MD summary reports. Supports SDT scores, engagement metrics, or any continuous/categorical variables.
Extract organisation units and groups from DHIS2. Use for org unit hierarchy, metadata, levels, or groups. Routed via dhis2 skill for general DHIS2 requests.
Extract individual-level/case-based data from DHIS2 Tracker. Use for tracked entities, enrollments, events, or relationships. Routed via dhis2 skill for general DHIS2 requests.
Extract datasets metadata from DHIS2. Use for dataset definitions, data entry forms, or reporting frequencies. Routed via dhis2 skill for general DHIS2 requests.
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas | Use when: creating Excel files, spreadsheets, formulas, or analysis.
Query aggregated analytics data from DHIS2. Use for calculated/aggregated values, indicator values, or cross-dimensional analysis. Routed via dhis2 skill for general DHIS2 requests.
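As an illustration of the query shape this skill targets, the sketch below builds a DHIS2 analytics URL using the standard dx/pe/ou dimension syntax. The base URL and UIDs are placeholders for illustration, not values from a real instance.

```python
from urllib.parse import urlencode

def build_analytics_url(base_url, data_element, period, org_unit):
    """Build a DHIS2 analytics API query URL.

    The dx/pe/ou dimension syntax is standard DHIS2; the base URL
    and UIDs passed in are illustrative placeholders.
    """
    params = urlencode({
        "dimension": [f"dx:{data_element}", f"pe:{period}", f"ou:{org_unit}"],
        "displayProperty": "NAME",
    }, doseq=True)
    return f"{base_url}/api/analytics.json?{params}"

url = build_analytics_url(
    "https://play.dhis2.org/demo",  # placeholder instance
    "fbfJHSPpUQD",                  # placeholder data element UID
    "LAST_12_MONTHS",
    "ImspTQPwCqd",                  # placeholder org unit UID
)
```

Each `dimension` parameter repeats once per dimension; `doseq=True` keeps them as separate query parameters rather than a single joined value.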
Grey Relational Analysis for MCM/ICM competitions. Use for small-sample correlation analysis and factor importance ranking. Does not require large samples, normal distribution, or linear relationships. Ideal for analyzing which factors most influence outcomes when data is limited.
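A minimal sketch of the Grey Relational Analysis this skill performs: mean-normalize each sequence, take absolute differences from the reference, convert to grey relational coefficients with distinguishing coefficient rho, and average into a grade per factor. Function and parameter names are illustrative.

```python
def grey_relational_grades(reference, comparisons, rho=0.5):
    """Rank how closely each comparison sequence tracks the reference.

    reference: list of floats (the outcome series)
    comparisons: list of lists (one candidate factor per row)
    rho: distinguishing coefficient, conventionally 0.5
    """
    # Mean-normalize each sequence so different scales are comparable
    def norm(seq):
        m = sum(seq) / len(seq)
        return [x / m for x in seq]

    ref = norm(reference)
    comps = [norm(c) for c in comparisons]

    # Absolute point-wise differences from the reference
    deltas = [[abs(r - c) for r, c in zip(ref, comp)] for comp in comps]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)

    # Grey relational coefficient per point, averaged into a grade
    grades = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

A grade near 1 means the factor moves almost in lockstep with the outcome; ranking the grades gives the factor importance ordering without any sample-size or distribution assumptions.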
Guide for creating map visualizations in dashboards. Covers geographic data detection, shape-based maps (polygons/GeoJSON), point-based maps (lat/lon coordinates), and world map backgrounds. Referenced by dashboard-builder skill. Do not use directly - use dashboard-builder instead.
Profile and explore datasets to understand their shape, quality, and patterns before analysis. Use when encountering a new dataset, assessing data quality, discovering column distributions, identifying nulls and outliers, or deciding which dimensions to analyze.
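The kind of per-column profile this skill produces can be sketched with the standard library alone: null rate, distinct count, and IQR-based outlier flags for numeric columns. This is a simplified stand-in, not the skill's actual implementation.

```python
import statistics

def profile_column(values):
    """Quick profile of one column: nulls, distinct values,
    and 1.5*IQR outliers when the column is fully numeric."""
    nulls = sum(1 for v in values if v is None or v == "")
    present = [v for v in values if v is not None and v != ""]
    profile = {
        "count": len(values),
        "null_rate": nulls / len(values) if values else 0.0,
        "distinct": len(set(present)),
    }
    numeric = []
    for v in present:
        try:
            numeric.append(float(v))
        except (TypeError, ValueError):
            break  # mixed/non-numeric column: skip outlier check
    else:
        if len(numeric) >= 2:
            q = statistics.quantiles(numeric, n=4)
            iqr = q[2] - q[0]
            lo, hi = q[0] - 1.5 * iqr, q[2] + 1.5 * iqr
            profile["outliers"] = [x for x in numeric if x < lo or x > hi]
    return profile
```

Running this over every column before analysis surfaces the high-null and heavy-tailed dimensions worth a closer look.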
Diagnose and fix data quality problems in datasets. Use when working with dirty data, finding duplicates, handling missing values, detecting outliers/anomalies, validating constraints (functional dependencies, referential integrity), profiling datasets, or cleaning data for analysis or ML. Covers the full data quality lifecycle - define, detect, clean, measure.
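Two of the checks named above, duplicate detection and functional-dependency validation, can be sketched in a few lines; the helper names and sample fields are illustrative, not part of the skill's API.

```python
from collections import defaultdict

def find_duplicate_groups(rows, key_fields):
    """Group rows by a candidate key; groups larger than one
    are duplicate clusters."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[f] for f in key_fields)].append(row)
    return {k: g for k, g in groups.items() if len(g) > 1}

def fd_violations(rows, determinant, dependent):
    """Functional dependency check (determinant -> dependent):
    return determinant values that map to more than one dependent value."""
    seen = defaultdict(set)
    for row in rows:
        seen[row[determinant]].add(row[dependent])
    return {k: v for k, v in seen.items() if len(v) > 1}
```

For example, checking `zip -> city` flags every zip code spelled two ways, which is usually an entry-standardization problem rather than a true constraint failure.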
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas
MCM/ICM Excel Data Processing & Analysis Expert. Use when handling raw Excel/CSV data for mathematical modeling competitions. Specializes in: (1) Data cleaning (missing values, outliers), (2) Normalization for evaluation models (TOPSIS, AHP, EWM), (3) Statistical analysis and correlation, (4) Feature engineering for modeling. Optimized for MCM/ICM workflow with time-sensitive data preparation.
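The normalization step this skill automates has two common forms, sketched below under standard conventions: column-wise vector normalization (as used in TOPSIS) and min-max scaling with inversion for cost criteria (as used before entropy weighting). Function names are illustrative.

```python
import math

def vector_normalize(matrix):
    """Column-wise vector normalization (TOPSIS convention):
    each entry divided by the Euclidean norm of its column."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    return [[x / n for x, n in zip(row, norms)] for row in matrix]

def min_max_normalize(col, benefit=True):
    """Min-max scaling to [0, 1]; inverted for cost criteria
    so that larger is always better after scaling."""
    lo, hi = min(col), max(col)
    if benefit:
        return [(x - lo) / (hi - lo) for x in col]
    return [(hi - x) / (hi - lo) for x in col]
```

Choosing the right form per criterion (benefit vs. cost) before running TOPSIS, AHP, or EWM is the step that most often goes wrong under competition time pressure.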
Performs statistical analysis, finds patterns, and generates insights.
Synthesize qualitative and quantitative user research into structured insights and opportunity areas. Use when analyzing interview notes, survey responses, support tickets, or behavioral data to identify themes, build personas, or prioritize opportunities.
Query Pittsburgh's Western PA Regional Data Center (WPRDC) — 363+ datasets covering property assessments, air quality, 311 requests, jail census, overdose data, permits, violations, and more. Run SQL queries against live tables without downloading. Use when user asks about Pittsburgh/Allegheny County public data, property lookups, civic data, or regional statistics.
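WPRDC exposes its datastore through CKAN's SQL endpoint; the sketch below builds such a query URL. The resource ID in the FROM clause is a placeholder, not a real WPRDC table, and the endpoint path follows the standard CKAN `datastore_search_sql` action.

```python
from urllib.parse import quote

def wprdc_query_url(sql, base="https://data.wprdc.org"):
    """Build a CKAN datastore_search_sql request URL.

    CKAN datastore tables are addressed by resource UUID in the
    FROM clause; the one used below is a stand-in, not a real id.
    """
    return f"{base}/api/3/action/datastore_search_sql?sql={quote(sql)}"

sql = (
    'SELECT "PROPERTYZIP", COUNT(*) AS n '
    'FROM "resource-id-here" '
    'GROUP BY "PROPERTYZIP"'
)
url = wprdc_query_url(sql)
```

A GET against such a URL returns JSON with the result rows under `result.records`, so queries run against the live tables without downloading the dataset.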
Analyze CSV files, generate summary statistics, and create visualizations using Python and pandas. Use when the user uploads, attaches, or references a CSV file, asks to summarize or analyze tabular data, requests insights from CSV data, or wants to understand data structure and quality.
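The skill itself uses pandas; as a dependency-free illustration of the same idea, the stdlib sketch below reads a CSV and computes summary statistics for one numeric column (a rough stand-in for `DataFrame.describe()`).

```python
import csv
import io
import statistics

def summarize_csv(text, column):
    """Read CSV text and summarize one numeric column,
    treating empty strings as missing values."""
    rows = list(csv.DictReader(io.StringIO(text)))
    values = [float(r[column]) for r in rows if r[column] != ""]
    return {
        "rows": len(rows),
        "non_null": len(values),
        "mean": statistics.fmean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }

data = "name,score\na,10\nb,20\nc,\nd,30\n"
summary = summarize_csv(data, "score")
```

The row count vs. non-null count gap is the first data-quality signal; the mean/median spread hints at skew before any plotting.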
Queries data warehouse and answers business questions about data. Handles questions requiring database/warehouse queries including "who uses X", "how many Y", "show me Z", "find customers", "what is the count", data lookups, metrics, trends, or SQL analysis.
Connect Spice to data sources like PostgreSQL, MySQL, S3, Databricks, Snowflake, and more. Use when asked to "add a dataset", "connect to a database", "load data from S3", or "configure a data source".
Writes Power Query (M language) for data transformation, connections, and ETL. Use for data sources, transformations, parameters, and query optimization.
Build Oxy workflows, SQL queries, and agents following best practices. Use when the user asks to create data pipelines, queries, or analysis agents. Enforces hierarchy - semantic queries first, then SQL/workflows, then agents.