Anthropic Skills
2689 skills. Last updated 2026-03-16
Discover and use Anthropic Skills to extend Claude's capabilities with creative, technical, and enterprise workflows.
Create diagrams and visual explanations with iterative render-and-check workflow. Use when asked to 'create a diagram', 'Venn diagram', 'flow chart', 'architecture diagram', 'visualize this'. Renders SVG to PNG, self-critiques using CRAP principles, iterates until right. Composes with brand skills for styling. (user)
A deep-analysis mode for Google's Gemini that fully deconstructs the codebase and market conditions without editing code.
Use when asked for "deep research", "thorough analysis", "comprehensive report", "investigate", "due diligence", or when multiple sources are needed to answer complex questions. Produces well-sourced research reports through iterative refinement.
Generate well-structured Mermaid diagrams for visual documentation. Use when designing systems, documenting workflows, creating architecture diagrams, or any task requiring visual representation of processes, relationships, or structures. Triggers include requests like "create a diagram", "visualize this", "show the flow", "draw the architecture", or when explaining complex systems that benefit from visual aids.
Debug Pandas issues systematically. Use when encountering DataFrame errors, SettingWithCopyWarning, KeyError on column access, merge and join mismatches with unexpected NaN values, memory errors with large DataFrames, dtype conversion issues, index alignment problems, or any data manipulation errors in Python data analysis workflows.
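The SettingWithCopyWarning fix this entry refers to can be sketched in a few lines; the frame and values below are illustrative, not part of the skill itself:

```python
import pandas as pd

# Toy frame for illustration
df = pd.DataFrame({"city": ["NY", "LA", "NY"], "temp": [70, 85, 68]})

# Chained indexing such as df[df["city"] == "NY"]["temp"] = 0 raises
# SettingWithCopyWarning because the write may land on a temporary copy.
# A single .loc assignment writes unambiguously to the original frame:
df.loc[df["city"] == "NY", "temp"] = 0

# When a filtered subset should live independently, take an explicit copy:
subset = df[df["city"] == "LA"].copy()
subset["temp"] = 99  # mutates only the copy, no warning
```

The same pattern (one `.loc` call for writes, `.copy()` for independent subsets) resolves most chained-assignment warnings.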
Create Excel files with multiple sheets and cross-sheet formulas including COUNTIFS, VLOOKUP, and MATCH. Keep source data in one sheet and create summary sheets with formulas referencing the source data.
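The source-sheet/summary-sheet layout described above can be sketched with openpyxl; sheet names and data here are hypothetical, and openpyxl stores formulas as strings that Excel evaluates on open:

```python
from openpyxl import Workbook

wb = Workbook()
src = wb.active
src.title = "Data"
src.append(["Region", "Sales"])
for row in [("East", 100), ("West", 250), ("East", 75)]:
    src.append(row)

summary = wb.create_sheet("Summary")
summary["A1"] = "East order count"
# Cross-sheet references prefix each range with the sheet name and '!'
summary["B1"] = '=COUNTIFS(Data!A:A,"East")'
summary["A2"] = "West first sale"
summary["B2"] = '=VLOOKUP("West",Data!A:B,2,FALSE)'

wb.save("report.xlsx")
```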
Compare friction pipeline outputs across different runs, thinking levels, or seeds. Use when comparing visa research results, checking variance across thinking levels (minimal/low/medium), analyzing fee discrepancies, or reviewing route selection consistency.
A data analyst skill that guides Claude through structured, professional data analysis workflows. Use this skill when the user requests data analysis work including analyze, query, dashboard, metrics, EDA, cohort, funnel, or A/B testing.
Expert on external API integration, backend proxy architecture, backend functions, rate limiting, and data fetching strategies. Use when integrating with external services, designing backend functions, or understanding how data flows from external sources to the database. References docs/06_integration_spec.md.
Navigate Supabase database tables, relationships, and query patterns. Schema reference for Empathy Ledger.
Systematic exploratory data analysis following best practices. Use when analyzing any dataset to understand structure, identify data quality issues (duplicates, missing values, inconsistencies, outliers), examine distributions, detect correlations, and generate visualizations. Provides comprehensive data profiling with sanity checks before analysis.
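The sanity checks this entry lists (duplicates, missing values, impossible values) reduce to a few pandas one-liners; the frame below is a toy example, and the 0–120 age range is an assumed domain rule:

```python
import pandas as pd

df = pd.DataFrame({
    "id":  [1, 2, 2, 4, 5],
    "age": [34.0, 29.0, 29.0, None, 200.0],
})

n_dupes = df.duplicated().sum()          # fully duplicated rows
missing = df.isna().sum()                # missing values per column

# Range sanity check: flag ages outside an assumed plausible interval
impossible = df.loc[~df["age"].between(0, 120) & df["age"].notna(), "age"]
```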
Knowledge about pyecharts chart creation, HTML report generation, and visualization best practices.
Generate and interpret forest plots for meta-analysis visualization using R and the metafor package. Use when users need to create forest plots, understand visual representation of pooled effects, or interpret study weights and confidence intervals.
Converts Python report scripts (Elasticsearch queries + email output) into Grafana Jsonnet dashboards with dual-datasource support (ClickHouse + Elasticsearch ES7/ES8). Use when migrating scheduled email reports to real-time monitoring dashboards, building multi-datasource observability views, or converting report calculations to interactive panels.
This skill generates planning documents (기획서) in PowerPoint format from analyzed screen data and project artifacts. It should be used by the auto-draft-orchestrator agent during Phase 4 to create the final PPT output. Not intended for direct user invocation.
This skill should be used when performing local data exploration, profiling, quality analysis, or transformation tasks using DuckDB. It handles CSV, Parquet, and JSON files, provides automated data quality reports, supports complex JSON transformations, and generates interactive HTML reports for data analysis.
Professional document generator for Active Directory health assessments. Creates executive-quality DOCX reports with consistent branding, proper data visualization, severity-based formatting, and actionable remediation guidance. Use when generating assessment reports from OpsIdentity JSON data or improving document output quality.
HTML infographic templates. Use when creating carousel slides or infographics.
Linear algebra operations in NumPy, including matrix multiplication, SVD, system solving, and least squares fitting. Triggers: linalg, matrix multiplication, SVD, eigenvalues, matrix decomposition, lstsq, multi_dot.
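The least-squares trigger above maps to `np.linalg.lstsq`; a minimal line-fit sketch with made-up data points:

```python
import numpy as np

# Fit y ≈ m*x + b: design matrix has a column of x and a column of ones
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.1, 4.9, 7.0])
A = np.vstack([x, np.ones_like(x)]).T

# lstsq returns (solution, residuals, rank, singular values)
(m, b), residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
```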
Sorting and searching algorithms including O(n) partitioning, binary search, and hierarchical multi-key sorting. Triggers: sort, argsort, partition, searchsorted, lexsort, nan sort order.
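The three triggers named above (partition, searchsorted, lexsort) in one short sketch with illustrative arrays:

```python
import numpy as np

a = np.array([7, 2, 9, 4, 1, 8])

# O(n) partial sort: the element that would be at index 2 when sorted
# lands there, with smaller values (in any order) to its left
third = np.partition(a, 2)[2]

# Binary search into a sorted array: insertion index that keeps order
sorted_a = np.sort(a)
idx = np.searchsorted(sorted_a, 5)

# Hierarchical multi-key sort: the LAST key passed is the primary key
last = np.array(["Smith", "Jones", "Smith"])
first = np.array(["Zoe", "Amy", "Al"])
order = np.lexsort((first, last))  # by last name, then first name
```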
This skill should be used when the user asks "Chart.js options", "Chart.js animations", "Chart.js legend", "Chart.js tooltip", "Chart.js title", "disable Chart.js animation", "customize Chart.js tooltip", "Chart.js responsive", "Chart.js aspect ratio", "Chart.js interactions", "Chart.js hover", "Chart.js click events", "Chart.js layout", "Chart.js padding", "Chart.js font", "Chart.js colors", "Chart.js external tooltip", "Chart.js custom legend", "Chart.js transitions", or otherwise needs help configuring Chart.js options.
Generate test report with clear visual indicators - ✅ for pass, ❌ for fail. Summarize results, document failures, provide recommendations.
Applies Fabric AI prompt patterns for summarization, analysis, extraction, code review, and content creation. Use when asked to summarize content, analyze documents, extract insights, review code, create documentation, or transform text. Patterns are in data/patterns/.
dlt (data load tool) patterns for SignalRoom ETL pipelines. Use when creating sources, debugging pipeline failures, understanding schema evolution, or implementing incremental loading.
Collects, analyzes, and reports software metrics for data-driven decision making and continuous improvement.
Create professional diagrams using Mermaid or DOT/Graphviz. Mermaid for flowcharts, sequences, classes, ER, Gantt, architecture with semantic coloring and WCAG AA accessibility. DOT/Graphviz for pure network graphs, semantic webs, and maximum layout control.
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. Use when Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas.
Create D3 questions with double number lines showing proportional relationships. Students complete missing values on parallel number lines.
Generation and updating of weekly reports with a CFD (Cumulative Flow Diagram). Activate this skill when the user mentions a recap, weekly summary, report of the week, or CFD.
World-class data science skill for statistical modeling, experimentation, causal inference, and advanced analytics. Expertise in Python (NumPy, Pandas, Scikit-learn), R, SQL, statistical methods, A/B testing, time series, and business intelligence. Includes experiment design, feature engineering, model evaluation, and stakeholder communication. Use when designing experiments, building predictive models, performing causal analysis, or driving data-driven decisions.
Expert guidance for Polars dataframe manipulation in Python. Use this skill when working with dataframes, data processing, ETL pipelines, or any task involving tabular data manipulation. Provides best practices, performance optimization patterns, and comprehensive API usage for the Polars library.
Analyzes performance benchmarks from CUDA, CPU, memory tests. Parses output, identifies bottlenecks, tracks metrics over time, generates optimization insights.
Reference for how I like to dance and call square dances.
