Discover and use technical skills to extend Claude's capabilities
433 Technical Skills Available
Expert guidance for connecting Power Query (Power BI, Excel) to Frappe apps and reports. Use when building Power Query M code for Frappe data access, integrating Frappe reports with Power BI/Excel, implementing authentication for Power Query connections, handling heavy/long-running reports with report_long_polling API to avoid timeouts, applying column types and transformations, or troubleshooting Power Query caching and connection issues.
This skill should be used when the user asks to "create data pipeline", "ETL process", "data lake", "data warehouse", "Apache Spark", "Airflow DAG", "data streaming", "Kafka pipeline", "dbt models", "data quality", or needs help with data engineering and pipeline development.
Inspect Excel file structure (sheets, data types, VBA code) to help AI understand the workbook before writing code. Use when you need to understand an Excel file's schema, analyze data structure, or extract VBA code for modification.
A/B testing framework for safe experimentation with statistical validation
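The entry above mentions statistical validation for A/B experiments. As an illustration only (not this skill's implementation), a minimal two-proportion z-test in pure Python shows the kind of check such a framework performs; the sample counts below are made up:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/conv_b: conversion counts; n_a/n_b: sample sizes.
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10% vs 15% conversion on 1,000 users each
z, p = two_proportion_ztest(100, 1000, 150, 1000)
```

With these numbers the difference is significant at the usual 5% level, which is the "safe experimentation" gate such frameworks automate.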
Generate client-friendly AI visibility reports from audit data. Use when creating client reports, visibility reports, or presenting audit results in a client-facing format. Triggers on "client report", "generate report", "visibility report", "create report", "report for client".
ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.
Use dtctl CLI tool for querying observability data in Dynatrace via DQL (logs, metrics, traces, ...) and to manage Dynatrace platform resources (workflows, dashboards, notebooks, SLOs, settings, buckets, lookup tables).
Integrate Databuddy analytics into applications using the SDK or REST API. Use when implementing analytics tracking, feature flags, custom events, Web Vitals, error tracking, LLM observability, or querying analytics data programmatically.
Extract patterns, insights, and knowledge from text content. Use when asked to digest, extract, summarize, analyze patterns in code, conversations, documents, or decisions. Stores extracted knowledge for future retrieval.
Use the Databuddy REST API for programmatic access to analytics data. Use when querying analytics data, building custom dashboards, sending events via API, or integrating analytics into backend services.
Generates ASCII-based architecture, data flow, or call graph diagrams for software projects. Use this when the user asks to "visualize", "draw", or "show the relationship" between code modules, especially in a terminal environment where images cannot be rendered.
Symmetry analysis, Lyapunov exponent chaos detection, and period detection APIs. Use when analyzing CA states, detecting patterns, measuring stability, or classifying dynamic behavior.
High-performance DataFrame library usage. Covers the lazy API, data wrangling, and aggregation.
Use when organizing experiment logs, results, and metadata for Python research code.
Conversion funnel analysis with drop-off investigation. Use when analyzing multi-step processes, identifying conversion bottlenecks, A/B testing funnel performance, or optimizing user journeys.
Customer/user segmentation with actionable insights. Use when identifying distinct customer groups, analyzing segment-specific behavior, profiling high-value segments, or testing segmentation hypotheses.
Expert analytics engineering covering data modeling, dbt development, data transformation, and semantic layer management.
Unified math capabilities - computation, solving, and explanation. I route to the right tool.
Evaluates investment risks, performs Monte Carlo simulations, and generates risk reports. Use when analyzing portfolio risk, stress testing, or regulatory compliance.
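The entry above names Monte Carlo simulation for portfolio risk. A generic sketch (not this skill's code; the return parameters are invented) of estimating 95% Value-at-Risk by simulating compounded annual return paths:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed daily return distribution for a hypothetical portfolio
n_sims, n_days = 10_000, 252
daily_mu, daily_sigma = 0.0004, 0.01

# Each row is one simulated year of daily returns
daily = rng.normal(daily_mu, daily_sigma, size=(n_sims, n_days))
annual = np.prod(1 + daily, axis=1) - 1  # compounded annual return per path

# 95% VaR: the loss exceeded in only 5% of simulated years
var_95 = -np.percentile(annual, 5)
```

Stress testing follows the same pattern with shocked parameters (e.g. doubled `daily_sigma`) substituted into the simulation.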
R 4.4+ development specialist covering tidyverse, ggplot2, Shiny, and data science patterns. Use when developing data analysis pipelines, visualizations, or Shiny applications.
This skill should be used when the user asks to "track ROI", "measure cost savings", "calculate return on investment", "analyze efficiency gains", "measure automation impact", or needs comprehensive ROI measurement and business impact analysis for development and automation initiatives.
Generate R code for meta-analysis using the metafor package, including data preparation, model fitting, visualization, and sensitivity analyses. Use when users need executable R code for their meta-analysis workflow.
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Codex needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas
Query and analyze data from Metabase, create/update questions and dashboards, access the Metabase REST API, and troubleshoot Metabase SQL queries. Use when user mentions Metabase, dashboards, metrics, or asks to fetch/analyze business intelligence data.
Review Azure Data Explorer dashboards for Copilot CLI metrics and update Google Sheets scorecard. Connects to existing Chrome session for authentication.
Automate Splunk queries and analyze results using Chrome DevTools MCP. Use when the user wants to run Splunk searches, export log data, or analyze Splunk results. Triggers on requests like "check error rates", "search Splunk for X", "run a Splunk query", "analyze logs from Splunk", or "find errors in payment-service".
Comprehensive skill for interacting with Grafana's HTTP API to manage dashboards, data sources, folders, alerting, annotations, users, teams, and organizations. Use when Claude needs to (1) Create, read, update, or delete Grafana dashboards, (2) Manage data sources and connections, (3) Configure alerting rules, contact points, and notification policies, (4) Work with folders and permissions, (5) Manage users, teams, and service accounts, (6) Create or query annotations, (7) Execute queries again
Set-theoretic operations for finding unique elements, membership testing, and array intersections. Triggers: unique, isin, intersect1d, setdiff1d, union1d.
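The trigger names in the entry above are NumPy's set routines; a minimal sketch of all five on two small arrays:

```python
import numpy as np

a = np.array([3, 1, 2, 3, 2])
b = np.array([2, 4, 3])

uniq = np.unique(a)           # sorted unique elements of a
mask = np.isin(a, b)          # per-element membership of a in b
inter = np.intersect1d(a, b)  # sorted values present in both
diff = np.setdiff1d(a, b)     # values in a but not in b
union = np.union1d(a, b)      # sorted union of both arrays
```

All of these return sorted, deduplicated results except `isin`, which returns a boolean mask with the shape of its first argument — handy for filtering (`a[mask]`).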
Comprehensive guide for TAR UMT Data Science (RDS) students completing their Final Year Project. Use when RDS students need help with (1) understanding FYP processes and requirements, (2) structuring FYP reports and deliverables, (3) writing research-based chapters and thesis, (4) selecting appropriate data science projects, (5) understanding research methodology and theoretical frameworks, (6) conducting experiments and statistical analysis, (7) preparing for system testing and presentations.
Analyzes Rust code for performance bottlenecks, memory inefficiencies, and optimization opportunities. Use when discussing performance, slow code, memory usage, profiling, benchmarks, or optimization.
Use RSPress built-in components and MDX features in documentation. Use when adding interactive elements like tabs, badges, steps, callouts, or code groups to documentation pages.
CellChat cell-cell communication analysis toolkit - complete documentation with precise file name-based categorization
Generate and execute Python code to analyze large log datasets, detect patterns, and extract actionable insights
Medical PowerPoint presentation generation with clinical data visualization. Creates professional presentations from FHIR resources, lab results, and clinical data. Use when generating patient summaries, care plans, medical reports, or clinical presentations with proper healthcare formatting.
Analyze completed ECIR (Engineering Change Impact Report) Excel files to extract insights, identify trends, and detect patterns across single or multiple reports. Use when the user asks to analyze ECIR reports, find trends in ECIRs, compare multiple ECIRs, identify cost variance patterns, or generate insights from completed ECIR Excel files. This skill works with the output files from the ECIR Advanced Tool.
This skill should be used when users request comprehensive analysis of technology ecosystems, comparing multiple libraries/frameworks/tools with quantitative metrics from GitHub and web research. Trigger words include "ecosystem analysis", "compare libraries", "analyze React/Vue/Python ecosystem", "trending libraries", "technology stack comparison", or requests to evaluate multiple technical tools with data-driven insights.
Analyzes customer behavior, needs, pain points, and sentiment through review mining, social listening, buyer persona development, and jobs-to-be-done framework. Use when the user requests customer analysis, voice of customer research, buyer personas, pain point analysis, or wants to understand customer needs and motivations.