Discover and use data skills to extend Claude's capabilities
665 Data Skills Available
Analyze data files using SQL queries with DataQL. Use when working with CSV, JSON, Parquet, Excel files or when the user mentions data analysis, filtering, aggregation, or SQL queries on files.
Analyzes relationships and connections between body systems, identifying root causes and hidden patterns in medical data.
Analyzes the results of data-analysis/machine-learning notebooks and automatically generates a standardized Model Card report in Markdown.
Analyze datasets using pandas, generate reports, and create visualizations
Analyzes business metrics, KPIs, revenue, and growth patterns for BookInk. Use for cohort analysis, unit economics, investor reporting, and data-driven decisions. Triggers on "business metrics", "KPIs", "revenue analysis", "cohort".
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. Use when the agent needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas.
Aggregate and merge data from multiple sources including App Store sales, GitHub commits, Skillz events, and more. Use when combining data for reports, dashboards, or analysis.
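The kind of multi-source aggregation this entry describes can be sketched in plain pandas; the table names and columns below are hypothetical stand-ins for real exports such as App Store sales or GitHub commit logs:

```python
import pandas as pd

# Hypothetical source tables; a real skill would load these from
# App Store sales exports, GitHub commit logs, etc.
sales = pd.DataFrame({"day": ["2024-01-01", "2024-01-02"], "revenue": [120.0, 90.0]})
commits = pd.DataFrame({"day": ["2024-01-01", "2024-01-02"], "commits": [3, 7]})

# Merge on the shared key so each report row carries both metrics;
# an outer join keeps days that appear in only one source.
report = sales.merge(commits, on="day", how="outer")
```

An outer join is the usual default here because dashboards should surface days where one source is silent rather than drop them.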
Build compelling data-driven narratives and stories. Use when presenting analysis results, creating reports, or communicating data insights through storytelling frameworks.
Convert natural language queries to SQL for PostgreSQL databases. Use this skill when users want to query pg_mcp_test_small (blog), pg_mcp_test_medium (ecommerce), or pg_mcp_test_large (ERP) databases using natural language descriptions. Generates safe, read-only SQL queries with validation and execution.
Runs Exploratory Data Analysis (EDA) following the mandatory validation workflow. Use when performing data analysis, exploring datasets, validating data quality, or when the user mentions EDA, data exploration, sanity checks, or data validation. Always run before main analysis queries.
Analyzes CSV files, generates summary stats, and plots quick visualizations using Python and pandas.
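A minimal sketch of the CSV-summary workflow this entry describes, using an inline CSV with made-up columns in place of a user file:

```python
import io
import pandas as pd

# A tiny inline CSV stands in for the user's file (hypothetical columns).
csv_text = "city,temp\nOslo,4.5\nCairo,31.0\nLima,19.2\n"
df = pd.read_csv(io.StringIO(csv_text))

# Summary statistics for the numeric column.
stats = df["temp"].describe()

# A quick visualization would be df.plot(x="city", y="temp", kind="bar");
# it is omitted here to keep the sketch headless.
```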
Activate when user needs custom Google Ads data analysis, advanced reporting, or specific metric queries. Provides GAQL query building and execution guidance.
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. Use when Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas.
Analyze and improve database performance through safe, measurable query, index, and configuration optimizations.
Knowledge base for measuring, analyzing, and optimizing content performance. Use when tracking metrics, analyzing patterns, or making data-driven content decisions.
Tracks financial transactions, categorizes expenses, monitors budgets, and audits subscriptions. Use when processing bank transactions, categorizing expenses, tracking revenue, or analyzing financial data.
Analyze and interpret data to generate meaningful insights using statistical methods and visualization. Use when working with datasets, metrics, statistics, or when insights from data are needed.
Create reactive marimo notebooks for data analysis and visualization. Use when the user asks to create a marimo notebook, work with reactive notebooks, or build interactive data visualizations. User-invocable with /marimo command.
Extract and prepare study data for meta-analysis including effect size calculation, variance estimation, and handling missing data. Use when users need to convert reported statistics into analyzable format or calculate effect sizes from raw data.
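One common effect-size conversion this entry covers is the standardized mean difference (Cohen's d) from reported group means and standard deviations; the numbers below are illustrative, not from any real study:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# Illustrative inputs: two groups of 30 with equal spread.
d = cohens_d(mean1=10.0, sd1=2.0, n1=30, mean2=8.0, sd2=2.0, n2=30)
```

Real meta-analysis pipelines would also compute the variance of d (and often a small-sample correction to Hedges' g) before pooling.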
Explore Hive tables and datasets in the Wellcome Collection data warehouse. Use this skill to query tables, understand schemas, or analyze data distributions. Invoke with /data-explore.
Exploratory Data Analysis for tabular data. Use this skill when analyzing value distributions, checking for missing data, computing correlations, examining class balance, or generating data quality reports.
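The missingness and class-balance checks named above reduce to a few pandas calls; the dataset here is a hypothetical four-row table:

```python
import pandas as pd

# Hypothetical tabular dataset with one missing value and a label column.
df = pd.DataFrame({
    "age": [25, None, 40, 31],
    "label": ["a", "a", "b", "a"],
})

# Fraction of missing values per column.
missing = df.isna().mean()

# Class balance of the target column, as proportions.
balance = df["label"].value_counts(normalize=True)
```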
Connect to and inspect data sources. Use this skill when you need to verify data access, inspect table schemas, check row counts, or understand the structure of a dataset before performing analysis.
Use when performing meta-analysis, pooling study data, generating forest plots, funnel plots, assessing heterogeneity, or conducting subgroup and sensitivity analyses. Invoke for any statistical synthesis of multiple studies.
Master SOTA data prep for Kaggle comps: automated EDA (Sweetviz), cleaning (Pyjanitor), and feature selection (Polars + XGBoost) for medium datasets (100MB–5GB) in Colab.
Data analysis workflows and patterns for exploring, transforming, and visualizing data. Use when working with data, creating reports, or when users mention "data analysis", "analyze data", "data exploration", or "reporting".
Database maintenance and health checks. Use when user needs migration safety, data integrity, index optimization, or says "check database", "optimize DB", "migration rollback", "data consistency", "database health".
Generate DuckDB SQL queries. Use when user asks about DuckDB queries, data analysis, exploring .ddb database files, CSV files, Parquet files, wants help editing/improving SQL, asks to use the duckdb skill, references duckdb assets, or wants to initialize/setup duckdb analysis.
Generate comprehensive PySpark-based data quality validation tests for Databricks tables. Use when creating automated tests for data completeness, accuracy, consistency, and conformity, or when user mentions test generation, data validation, quality monitoring, or PySpark test frameworks.
Database schemas, table specifications, and migration information. Use when working with database structure or queries.
Expert assistance with GraphPad Prism scientific graphing and statistics software, specializing in Prism scripting for automation, batch processing, and data analysis workflows.
Generates a professional markdown comparison report with tables, an executive summary, and a verdict by use case. Use when the user asks to 'generate report', 'create comparison report', 'synthesize comparison', 'write comparison', or when the orchestrator has completed all data collection. Creates a structured report with specs tables, pros/cons, pricing analysis, and actionable recommendations.
Clean and standardize vehicle insurance CSV/Excel data. Use when handling missing values, fixing data formats, removing duplicates, or standardizing fields. Triggers when the user mentions "clean data", "handle nulls", "standardize", "duplicates", or "normalize".
Process vehicle insurance Excel data using Pandas - file handling, data cleaning, merging, validation. Use when processing Excel/CSV files, handling data imports, implementing business rules (negative premiums, zero commissions), debugging data pipelines, or optimizing Pandas performance. Keywords: data_processor.py, Excel, CSV, Pandas, merge, deduplication, date normalization.
Create reactive Python notebooks for IMSA racing data analysis using marimo. Use for building interactive filtering UIs (seasons, classes, events), connecting to DuckDB databases, creating reactive visualizations, and performing data analysis with automatic cell re-execution. Includes templates, patterns, and IMSA-specific workflows.