Discover and use technical skills to extend Claude's capabilities
433 Technical Skills Available
Code exploration strategies for understanding unfamiliar codebases. Use when exploring unknown code, debugging, or analyzing architecture.
View tool usage metrics and performance statistics, and identify failing tools. Use when the user asks for "/gobby metrics", "tool metrics", "usage stats", or a "performance report".
JSON querying tools and patterns using DuckDB and jaq for data extraction and transformation. Load when querying JSON data or writing data pipelines.
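DuckDB and jaq are the tools this entry names; neither is assumed installed here, so the sketch below shows the same dotted-path extraction pattern using only the standard library. The `extract` helper and the sample document are invented for illustration.

```python
import json

def extract(doc, path):
    """Follow a jq/jaq-style dotted path like 'users.0.name' through parsed JSON."""
    cur = doc
    for part in path.split("."):
        # Numeric parts index into lists; everything else keys into dicts
        cur = cur[int(part)] if isinstance(cur, list) else cur[part]
    return cur

doc = json.loads('{"users": [{"name": "ada", "langs": ["py", "sql"]}]}')
print(extract(doc, "users.0.name"))  # ada
```

In DuckDB the equivalent would be a `read_json_auto` scan with SQL path expressions; the point here is only the shape of the extraction.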
Develop causal diagrams (DAGs) from social-science research questions and literature, then render publication-ready figures using Mermaid, R, or Python.
Generate phase portraits for 2D dynamical systems. Use when visualizing vector fields, nullclines, and trajectories.
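As a sketch of the data behind a phase portrait, the snippet below samples a hypothetical linear system f(x, y) = (y, -x) (a center, with circular orbits) on a coarse grid; a plotting library such as matplotlib would then render the arrows. Only the sampling step is shown.

```python
# Hypothetical 2D system: f(x, y) = (y, -x), a linear center with circular orbits
def field(x, y):
    return (y, -x)

# Sample the field on a coarse grid, as a quiver/phase-portrait plot would
grid = [(x, y, *field(x, y)) for x in (-1, 0, 1) for y in (-1, 0, 1)]
```

The origin is the system's only fixed point, which is why `field(0, 0)` returns the zero vector.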
Analyze OpenCode conversation history to identify themes and patterns in user messages. Use when asked to analyze conversations, find themes, review how a user steers agents, or extract insights from session history.
Use the Mixpanel MCP server for analytics. Triggers on mentions of Mixpanel MCP tools, MCP resources, analytics queries via MCP, segmentation, funnels, retention, cohort_comparison, product_health_dashboard, ask_mixpanel, diagnose_metric_drop, guided_analysis, fetch_events, or SQL queries on local DuckDB data.
Generate AntV Infographic syntax outputs. Use when asked to turn user content into the Infographic DSL (template selection, data structuring, theme), or to output `infographic <template>` plain syntax. The output can be embedded in markdown using ```infographic code blocks.
Expert guidance for data analysis, visualization, and Jupyter Notebook development with pandas, matplotlib, seaborn, and numpy.
Guidelines for data analysis and Jupyter Notebook development with pandas, matplotlib, seaborn, and numpy.
Implement analytics, data analysis, and visualization best practices using Python, Jupyter, and modern data tools.
Patterns for generating synthetic data for ML training, testing, and privacy. Covers LLM-based generation, tabular synthesis, and quality validation. Use when "synthetic data", "generate training data", "fake data generation", "data augmentation", "SDV", "Gretel", "test data", or "privacy-preserving data" is mentioned.
Comprehensive statistical analysis for research, experiments, and data science. Covers hypothesis testing, effect sizes, confidence intervals, Bayesian methods, regression, and advanced techniques. Emphasizes correct interpretation and avoiding common statistical mistakes.
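Confidence intervals are one technique this entry covers; below is a minimal normal-approximation sketch using only the standard library. It assumes a roughly normal sample, and the `mean_ci` name and defaults are invented for the example.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def mean_ci(sample, confidence=0.95):
    """Normal-approximation confidence interval for the sample mean."""
    n = len(sample)
    m = mean(sample)
    se = stdev(sample) / sqrt(n)  # standard error from the sample stdev
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g. ~1.96 for 95%
    return m - z * se, m + z * se

lo, hi = mean_ci(list(range(1, 11)))
```

For small samples a t-interval (wider, using n-1 degrees of freedom) is the stricter choice; the normal approximation here keeps the sketch dependency-free.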
Conversational data analysis using natural language queries on DataFrames. Chat with your data using LLMs to generate insights, create visualizations, and explain code.
Deep analysis of ClickHouse server logs, debug traces, and low-level diagnostics. Use for investigating server log messages and trace analysis.
Real-time monitoring of ClickHouse metrics, events, and asynchronous metrics. Use for load average, connections, queue monitoring, and resource saturation.
Diagnose ClickHouse RAM usage, OOM errors, memory pressure, and allocation patterns. Use for memory-related issues and out-of-memory errors.
Diagnose ClickHouse SELECT query performance, analyze query patterns, identify slow queries, and find optimization opportunities. Use for query latency and timeout issues.
Create new community metrics by adding enum values, recording functions, wiring, backfill migrations, and API integration. Use when adding new community metrics, creating metrics, or tracking community activity.
Create interactive Desmos graphs in Obsidian using desmos-graph code blocks. Use when visualizing functions, parametric curves, inequalities, or mathematical relationships with customizable styling and settings.
Write, review, or debug Stata code for data cleaning and analysis. Use for data import, variable management, data documentation, merging/appending datasets, creating analysis variables, and following IPA/DIME Analytics coding standards. Invoke when working with .do files, .dta files, or any Stata-related data processing task.
Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.
Use this skill when the user asks to "create corner plots", "visualize posteriors", "plot chains", "analyze nested sampling output", "plot marginal distributions", "compare prior and posterior", "make triangle plots", or work with posterior samples from PolyChord, MultiNest, UltraNest, Cobaya, or other Bayesian inference tools. Provides complete guidance for creating publication-quality posterior visualizations from nested sampling or MCMC chains.
Analyze Mixpanel analytics data using the mixpanel_data Python library or mp CLI. Use when working with Mixpanel event data, user profiles, funnels, retention, cohorts, segmentation queries, JQL scripts, or SQL analysis on local DuckDB. Triggers on mentions of Mixpanel, event analytics, funnel analysis, retention curves, user behavior tracking, JQL queries, filter expressions, "fetch data from Mixpanel", "query Mixpanel with SQL", "run DuckDB queries on events", or "analyze user behavior".
Comprehensive statistics expert covering probability theory, distributions, hypothesis testing, regression, and Bayesian methods
Python fundamentals, data structures, OOP, and data science libraries (Pandas, NumPy). Use when writing Python code, data manipulation, or algorithm implementation.
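As a small illustration of the data-manipulation side of this skill, here is a stdlib group-and-sum over records; the `records` data and the `team`/`score` keys are invented for the example (pandas would express the same thing as a `groupby`).

```python
from collections import defaultdict

# Invented sample records for illustration only
records = [
    {"team": "a", "score": 3},
    {"team": "b", "score": 5},
    {"team": "a", "score": 4},
]

# Group by team and sum scores; defaultdict avoids key-existence checks
totals = defaultdict(int)
for rec in records:
    totals[rec["team"]] += rec["score"]
print(dict(totals))  # {'a': 7, 'b': 5}
```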
Use when processing 50+ items, analyzing CSV/log files, executing code in sandbox, or searching for tools. Load for data processing tasks. Achieves 98%+ token savings via in-sandbox execution, progressive disclosure, and pagination. Supports heredocs for multi-line bash.
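The pagination mentioned above can be sketched as a simple chunking generator; `paginate` is a hypothetical helper, not part of any named tool, but it shows the pattern that keeps large result sets out of the context window.

```python
def paginate(items, page_size):
    """Yield fixed-size pages so large result sets are processed in chunks."""
    for start in range(0, len(items), page_size):
        yield items[start:start + page_size]

pages = list(paginate(list(range(7)), 3))  # [[0, 1, 2], [3, 4, 5], [6]]
```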
Expert assistance for importing data to Treasure Data using the JavaScript SDK. Use this skill when users need help with browser-based event tracking, page analytics, client-side data collection, or implementing TD's JS SDK for web applications.
EDA, dashboards, Matplotlib, Seaborn, Plotly, and BI tools. Use for creating visualizations, exploratory analysis, or dashboards.
Structured reasoning tools for complex problem analysis with observation, hypothesis, analysis, and conclusion steps. Use when analyzing complex problems, debugging difficult issues, or making important decisions.
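The observation/hypothesis/analysis/conclusion steps above could be modeled as a small trace structure; this `ReasoningTrace` class is purely illustrative, not the skill's actual API.

```python
from dataclasses import dataclass, field

# The four step kinds named in the skill description
KINDS = ("observation", "hypothesis", "analysis", "conclusion")

@dataclass
class ReasoningTrace:
    steps: list = field(default_factory=list)

    def add(self, kind, text):
        if kind not in KINDS:
            raise ValueError(f"unknown step kind: {kind}")
        self.steps.append((kind, text))

trace = ReasoningTrace()
trace.add("observation", "latency spikes at 00:00")
trace.add("hypothesis", "cron job contention")
```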
Track and optimize application response times across API endpoints, database queries, and service calls. Use when monitoring performance or identifying bottlenecks. Trigger with phrases like "track response times", "monitor API performance", or "analyze latency".
Validate application performance against defined budgets to identify regressions early. Use when checking page load times, bundle sizes, or API response times against thresholds. Trigger with phrases like "validate performance budget", "check performance metrics", or "detect performance regression".
Split datasets into training, validation, and testing sets for ML model development.
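A minimal sketch of such a split, assuming a seeded shuffle and fractional set sizes; the function name and defaults are invented for the example (scikit-learn's `train_test_split` is the usual library route).

```python
import random

def train_val_test_split(items, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle deterministically, then carve off test and validation slices."""
    items = list(items)
    random.Random(seed).shuffle(items)  # seeded for reproducible splits
    n_test = int(len(items) * test_frac)
    n_val = int(len(items) * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(100))
```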
Overview of protected quality thresholds and quick reference for all quality tools. Use when you need to understand quality metrics, run comprehensive quality checks, or learn which specialized skill to use. For specific issues, use dedicated skills (deptrac-fixer for Deptrac, complexity-management for PHPInsights, testing-workflow for coverage).
dbt (data build tool) patterns for data transformation and analytics engineering. Use when building data models, implementing data quality tests, or managing data transformation pipelines.
Create factual working journal entries in Notes/WorkingJournal/ after completing analysis work. Use when user asks to "summarize work", "document results", or "create working journal entry". Ensures code is committed, copies figures to attachments, and creates objective summaries with citations.
Core documentation generation patterns and framework for Treasure Data pipeline layers. Provides shared templates, quality validation, testing framework, and Confluence integration used by all layer-specific documentation skills.
Conduct sensitivity analyses to test robustness of findings. Use when: (1) Testing assumption violations, (2) Meta-analysis robustness, (3) Handling missing data, (4) Examining outliers.
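One common robustness check for meta-analysis, as mentioned above, is leave-one-out analysis: recompute the pooled estimate with each study removed and see how much the result moves. A minimal sketch (the effect values and the simple unweighted mean are invented for illustration; real meta-analysis would weight by study precision):

```python
from statistics import mean

def leave_one_out_means(effects):
    """Pooled mean with each study removed in turn (a basic robustness check)."""
    return [mean(effects[:i] + effects[i + 1:]) for i in range(len(effects))]

loo = leave_one_out_means([1.0, 2.0, 3.0])  # [2.5, 2.0, 1.5]
```

A large spread across the leave-one-out estimates flags a single influential study.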
Conduct subgroup analyses to examine effect moderation. Use when: (1) Testing pre-specified moderators, (2) Exploring heterogeneity, (3) Identifying differential effects, (4) Meta-analysis synthesis.