Discover and use data skills to extend Claude's capabilities
665 Data Skills Available
Analyzes Axiom query patterns to find unused data, then builds dashboards and monitors for cost optimization. Use when asked to reduce Axiom costs, find unused columns or field values, identify data waste, or track ingest spend.
Audits database data quality (nulls, duplicates, orphans, invalid ranges) and produces a short findings report with remediation queries. Use when debugging data issues, validating migrations, or verifying analytics correctness.
Financial accounts, transactions, net worth tracking, tax profiles, income, and expenses. Use when working with the core financial data management features.
Parse, analyze, transform, and manipulate CSV files. Use for data processing, cleaning, and CSV operations.
Work with Excel spreadsheets (XLSX/XLS/CSV) - read data, create spreadsheets, convert formats, analyze data, and generate reports. Use when the user asks to work with Excel files or spreadsheet data.
This skill should be used when the user asks to "model trends with limited data", "three-valued logic analysis", "scenario generation", "transitional graphs", "qualitative trend analysis", "uncertain data analysis", "minimal-information modeling", or needs guidance on trend-based modeling using INC/DEC/CONST logic, scenario planning with limited quantitative data, or generating transitional scenario graphs.
This skill should be used when the user asks to "write SQL", "optimize a query", "explain this SQL", or needs help with database queries and SQL best practices.
Monitor applications, investigate performance issues, and analyze observability data in New Relic. Use when the user needs APM metrics, error tracking, infrastructure monitoring, or incident analysis.
Database querying and analysis using SQLAlchemy 2.0+ with support for PostgreSQL, MySQL, SQLite, and SQL Server. Use when tasks require: (1) Querying databases via SQL, (2) Reading data into DataFrames for analysis, (3) Performing database operations with proper transaction handling. An environment variable with the connection string must be set (check resources/RESOURCES.md for available databases and schemas).
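A minimal sketch of the SQLAlchemy 2.0 pattern this description implies; the environment variable name, table, and column names are hypothetical (the real connection strings are documented in resources/RESOURCES.md):

```python
import os

import pandas as pd
from sqlalchemy import create_engine, text

# Hypothetical env var name; the actual ones are listed in resources/RESOURCES.md.
engine = create_engine(os.environ["DATABASE_URL"])

# Read-only query into a DataFrame (SQLAlchemy 2.0 style, parameterized).
with engine.connect() as conn:
    result = conn.execute(
        text("SELECT id, status, total FROM orders WHERE created_at >= :since"),
        {"since": "2024-01-01"},
    )
    df = pd.DataFrame(result.fetchall(), columns=result.keys())

# Writes go through an explicit transaction that commits on success.
with engine.begin() as conn:
    conn.execute(
        text("UPDATE orders SET status = 'archived' WHERE id = :id"),
        {"id": 42},
    )
```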
Expert-level Tableau Desktop/Server, calculated fields, LOD expressions, dashboards, data blending, and performance optimization
Expert-level Looker BI, LookML, explores, dimensions, measures, dashboards, and data modeling
Expert-level Grafana dashboards, visualization, data sources, alerting, and production operations
Extract metadata from SPSS .sav files as JSON using pyreadstat
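A minimal pyreadstat sketch of that workflow, assuming a hypothetical survey.sav file; which metadata fields to export is an illustrative choice, not fixed by the skill:

```python
import json

import pyreadstat

# metadata_only=True reads the file header and variable metadata
# without loading the data rows.
_, meta = pyreadstat.read_sav("survey.sav", metadata_only=True)  # hypothetical file

metadata = {
    "variable_names": meta.column_names,
    "variable_labels": meta.column_labels,
    "value_labels": meta.variable_value_labels,
    "variable_formats": meta.original_variable_types,
    "number_of_rows": meta.number_rows,
}

print(json.dumps(metadata, indent=2, ensure_ascii=False, default=str))
```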
Conducts Exploratory Data Analysis (EDA) on datasets. Use when the user asks to "explore", "clean", or "visualize" a new CSV or dataset.
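A minimal pandas sketch of the kind of first-pass exploration and cleaning such a skill performs; the file name and specific steps are illustrative assumptions:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("dataset.csv")  # hypothetical input file

# Structural overview: dtypes, missing values, summary statistics.
df.info()
print(df.isna().sum())
print(df.describe(include="all"))

# Simple cleaning pass: drop exact duplicates, normalize column names.
df = df.drop_duplicates()
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

# Quick visualization: distribution of each numeric column.
df.select_dtypes("number").hist(figsize=(10, 6))
plt.tight_layout()
plt.savefig("distributions.png")
```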
Fetches and processes NBA player and team statistics. Use when the user wants to analyze basketball data for the sports picker model.
Visualize audio frequency/time-domain data synchronized with React updates.
Create production-ready Grafana dashboards for TRON team services including consumer dashboards, task metadata, RQ rules manager, and Kafka metrics. Use when building dashboards for task event consumers or RMS TRON services.
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. Use when Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas
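One way such a spreadsheet can be built programmatically, sketched here with openpyxl; the library choice, file name, and layout are illustrative assumptions rather than part of the skill definition:

```python
from openpyxl import Workbook
from openpyxl.styles import Font

wb = Workbook()
ws = wb.active
ws.title = "Report"

# Header row with simple formatting.
ws.append(["Item", "Qty", "Unit Price", "Total"])
for cell in ws[1]:
    cell.font = Font(bold=True)

# Data rows plus a formula column that Excel recalculates on open.
rows = [("Widget", 4, 9.99), ("Gadget", 2, 24.50)]
for i, (item, qty, price) in enumerate(rows, start=2):
    ws.append([item, qty, price, f"=B{i}*C{i}"])

wb.save("report.xlsx")  # hypothetical output path
```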
This skill should be used to refresh the Dashboard with current system status, pending actions, recent activity, and statistics. It automatically gathers data from the action folders (Needs_Action, Plans, Done), checks watcher status, and updates the Dashboard.md file. Use when the user asks to refresh or update the dashboard, check system status, or get an overview of pending tasks and recent activity.
Create Domino Launchers - parameterized web forms for self-service job execution. Enable business users to run analyses, generate reports, and trigger batch predictions without coding. Covers parameter types, email notifications, result delivery, and access control. Use when building self-service data products or enabling non-technical users.
Core data analytics concepts, Excel/Google Sheets fundamentals, and data collection techniques
SQL database querying, optimization, and data management for analytics
Master Excel for data analysis with pivot tables, formulas, Power Query, and advanced Excel techniques.
Work with Reference Tables (static CSV lookup data) using OPAL to enrich datasets with descriptive information. Use when you need to map IDs to human-readable names, add static metadata from CSV uploads, or perform lookups without temporal considerations. Covers both explicit and implicit lookup patterns, column name matching, and when to choose Reference Tables vs Resources vs Correlation Tags.
Work with Resource datasets (mutable state tracking) using OPAL temporal joins. Use when you need to enrich Events/Intervals with contextual state information, track resource state changes over time, or navigate between datasets using temporal relationships. Covers temporal join mechanics (lookup, join, follow), automatic field matching, and when to use Resources vs Reference Tables.
Automated CSV data analysis with statistical insights, pattern detection, visualization recommendations, and anomaly detection
Aggregate and summarize event datasets (logs) using OPAL statsby. Use when you need to count, sum, or calculate statistics across log events. Covers make_col for derived columns, statsby for aggregation, group_by for grouping, aggregation functions (count, sum, avg, percentile), and topk for top N results. Returns a single summary row per group across the entire time range. For time-series trends, see the time-series-analysis skill.
Data analysis expert for SQL queries, BigQuery operations, and data insights. Use proactively for data analysis tasks and queries.
Analyze data files (CSV, JSON) and generate insights, summaries, and statistical analysis
Generate McKinsey-style board presentation PPTs from weekly auto insurance data. Automatically calculates 16+ KPIs, creates executive-level slides with actionable insights, and supports week-over-week comparisons. Use when the user uploads insurance cost data (Excel/CSV) and requests a board report, weekly presentation, or executive briefing, or mentions keywords like 董事会汇报 (board report), 周报PPT (weekly report PPT), 经营分析演示 (business analysis presentation), or McKinsey-style reports.
The core skill for working within the bigquery-etl repository. Use this skill to understand project structure, conventions, and common patterns. Works with the model-requirements, query-writer, metadata-manager, sql-test-generator, and bigconfig-generator skills.
Use this skill when gathering requirements for new BigQuery data models OR when asked to edit existing queries in bqetl. For new models, it guides structured requirements interviews. For existing queries, it understands the current model, checks downstream dependencies, and gathers requirements for changes. Works as pre-planning before the query-writer skill.
Professional data visualization expert, proficient in modern charting libraries, dashboard design, and interactive data presentation. Transforms complex data into intuitive, polished, and insightful visualizations.
Generate, optimize, and explain SQL queries with best practices. Use when writing database queries or optimizing SQL performance.
Data engineering agent for ETL pipelines, data warehousing, and analytics
Lightning-fast DataFrame library written in Rust for high-performance data manipulation and analysis. Use when the user wants blazing-fast data transformations, is working with large datasets, needs lazy evaluation pipelines, or wants better performance than pandas. Ideal for ETL, data wrangling, aggregations, joins, and reading/writing CSV, Parquet, and JSON files.
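A minimal lazy-pipeline sketch, assuming the library described is Polars and a recent Polars release; file and column names are hypothetical:

```python
import polars as pl

# Lazy scan: nothing is read until .collect(), so filters and column
# selection can be pushed down into the file scan.
lazy = (
    pl.scan_csv("events.csv")  # hypothetical input file
    .filter(pl.col("status") == "ok")
    .group_by("user_id")
    .agg(
        pl.len().alias("events"),
        pl.col("duration_ms").mean().alias("avg_duration_ms"),
    )
    .sort("events", descending=True)
)

df = lazy.collect()  # executes the optimized query plan
df.write_parquet("summary.parquet")
```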
Fast in-process analytical database for SQL queries on DataFrames, CSV, Parquet, JSON files, and more. Use when the user wants to perform SQL analytics on data files or Python DataFrames (pandas, Polars), run complex aggregations, joins, or window functions, or query external data sources without loading them into memory. Best for analytical workloads, OLAP queries, and data exploration.
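A minimal sketch, assuming the database described is DuckDB: SQL over an in-memory pandas DataFrame, plus (commented out) a Parquet file queried in place; the data and file path are hypothetical:

```python
import duckdb
import pandas as pd

sales = pd.DataFrame({
    "region": ["east", "west", "east"],
    "amount": [120.0, 80.0, 200.0],
})

# DuckDB resolves the DataFrame by its variable name, so no explicit
# load step is needed before querying it with SQL.
result = duckdb.sql("""
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
""").df()

# Querying a Parquet file directly, without loading it into memory first:
# top = duckdb.sql("SELECT * FROM 'events.parquet' LIMIT 10").df()

print(result)
```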