Metrics
by GobbyAI
name: metrics
description: This skill should be used when the user asks to "/gobby metrics", "tool metrics", "usage stats", "performance report". View tool usage metrics, performance statistics, and identify failing tools.
category: core
/gobby metrics - Metrics and Statistics Skill
This skill retrieves usage metrics via the gobby-metrics MCP tools (e.g., get_tool_metrics(), get_top_tools(), get_failing_tools()). Parse the user's input to determine which subcommand to execute.
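The parsing step described above can be sketched as follows. This is a minimal illustration, not part of the gobby-metrics API; the function name and the assumed default subcommand ("tools" when no arguments are given) are inventions for this example.

```python
def parse_invocation(text: str) -> tuple[str, list[str]]:
    """Split a "/gobby metrics ..." invocation into (subcommand, args)."""
    parts = text.strip().split()
    # Drop the "/gobby metrics" prefix if the full command was passed.
    if parts[:2] == ["/gobby", "metrics"]:
        parts = parts[2:]
    if not parts:
        return ("tools", [])  # assumed default: show tool statistics
    return (parts[0], parts[1:])
```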
Subcommands
/gobby metrics tools - Tool usage statistics
Call get_tool_metrics with:
- server_name: Optional filter by server name
- tool_name: Optional filter by tool name
- project_id: Optional project scope
Returns per-tool statistics:
- Call count
- Success rate
- Average latency
- Last used
Example: /gobby metrics tools → get_tool_metrics()
Example: /gobby metrics tools gobby-tasks → get_tool_metrics(server_name="gobby-tasks")
Example: /gobby metrics tools gobby-tasks create_task → get_tool_metrics(server_name="gobby-tasks", tool_name="create_task")
/gobby metrics top - Get top tools by usage
Call get_top_tools with:
- limit: Max tools to show (default 10)
- order_by: Sort by "usage" (default), "success_rate", or "latency"
- project_id: Optional project scope
Returns tools ranked by the specified metric.
Example: /gobby metrics top → get_top_tools()
Example: /gobby metrics top 20 by latency → get_top_tools(limit=20, order_by="latency")
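Translating positional arguments like "20 by latency" into keyword arguments for get_top_tools could look like the sketch below. The grammar is inferred from the examples above, not from a formal spec, and the helper name is illustrative.

```python
def build_top_tools_args(args: list[str]) -> dict:
    """Map ["20", "by", "latency"] -> {"limit": 20, "order_by": "latency"}."""
    out: dict = {}
    rest = list(args)
    # Leading integer, if any, is the limit.
    if rest and rest[0].isdigit():
        out["limit"] = int(rest.pop(0))
    # An optional "by <metric>" clause selects the sort order.
    if len(rest) >= 2 and rest[0] == "by" and rest[1] in ("usage", "success_rate", "latency"):
        out["order_by"] = rest[1]
    return out
```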
/gobby metrics failing - Get failing tools
Call get_failing_tools with:
- threshold: Failure rate threshold (default 0.1 = 10%)
- limit: Max results
- project_id: Optional project scope
Returns tools with failure rates above the threshold.
Example: /gobby metrics failing → get_failing_tools()
Example: /gobby metrics failing 0.05 → get_failing_tools(threshold=0.05)
/gobby metrics success <server> <tool> - Get tool success rate
Call get_tool_success_rate with:
- server_name: (required) Server name
- tool_name: (required) Tool name
- project_id: (optional) Project ID, automatically inferred from the current session context if not provided
Returns detailed success rate for a specific tool. The project_id is optional because it can be inferred from the current working directory's .gobby/project.json file.
Example: /gobby metrics success gobby-tasks create_task → get_tool_success_rate(server_name="gobby-tasks", tool_name="create_task")
/gobby metrics reset - Reset metrics
Call reset_metrics with:
- project_id: Optional, reset for a specific project
- server_name: Optional, reset for a specific server
- tool_name: Optional, reset for a specific tool
Clears metrics data. Can scope to project, server, or specific tool.
Example: /gobby metrics reset → reset_metrics()
Example: /gobby metrics reset gobby-tasks → reset_metrics(server_name="gobby-tasks")
/gobby metrics cleanup - Clean up old metrics
Call cleanup_old_metrics to delete metrics older than the retention period (default 7 days).
Example: /gobby metrics cleanup → cleanup_old_metrics()
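The cutoff that cleanup applies can be expressed as a simple date computation; this sketch assumes only the default 7-day retention stated above.

```python
from datetime import datetime, timedelta

def retention_cutoff(now: datetime, retention_days: int = 7) -> datetime:
    """Metrics recorded before this instant are candidates for deletion."""
    return now - timedelta(days=retention_days)
```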
/gobby metrics retention - Get retention statistics
Call get_retention_stats to see metrics age distribution and storage info.
Example: /gobby metrics retention → get_retention_stats()
Response Format
After executing the appropriate MCP tool, present the results clearly:
- For tools: Table with tool name, call count, success rate, avg latency
- For top: Ranked list with the sorting metric highlighted
- For failing: Table of failing tools with failure rates
- For success: Detailed success rate with context
- For reset: Confirmation of what was reset
- For cleanup: Summary of deleted metrics
- For retention: Statistics about metrics age
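A fixed-width table for the "tools" case could be rendered as below. The field names mirror the statistics listed above; the actual keys in the MCP response may differ.

```python
def format_tools_table(rows: list[dict]) -> str:
    """Render per-tool stats (name, calls, success rate, avg latency) as text."""
    lines = [f"{'Tool':<24} {'Calls':>6} {'Success':>8} {'Avg ms':>8}"]
    for r in rows:
        lines.append(
            f"{r['tool_name']:<24} {r['call_count']:>6} "
            f"{r['success_rate']:>7.0%} {r['avg_latency_ms']:>8.1f}"
        )
    return "\n".join(lines)
```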
Metrics Concepts
- Call count: Total number of tool invocations
- Success rate: Percentage of calls that completed without error
- Latency: Response time in milliseconds
- Retention: How long metrics are kept (default 7 days)
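The relationship between these concepts can be shown with a small aggregation over raw call records. This only illustrates the definitions above; the metrics store's actual schema is not specified in this skill.

```python
def summarize_calls(calls: list[tuple[bool, float]]) -> dict:
    """Aggregate (succeeded, latency_ms) records into the stats above."""
    n = len(calls)
    if n == 0:
        return {"call_count": 0, "success_rate": 0.0, "avg_latency_ms": 0.0}
    return {
        "call_count": n,
        # Fraction of calls that completed without error.
        "success_rate": sum(ok for ok, _ in calls) / n,
        # Mean response time in milliseconds.
        "avg_latency_ms": sum(ms for _, ms in calls) / n,
    }
```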
Error Handling
If the subcommand is not recognized, show available subcommands:
- tools, top, failing, success, reset, cleanup, retention
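A dispatch table covering exactly these subcommands, with the fallback behavior described above, might look like this. The mapping reflects the tools named in this skill; resolve() itself is an illustrative helper, not part of the API.

```python
# Subcommand -> MCP tool, as documented in this skill.
SUBCOMMANDS = {
    "tools": "get_tool_metrics",
    "top": "get_top_tools",
    "failing": "get_failing_tools",
    "success": "get_tool_success_rate",
    "reset": "reset_metrics",
    "cleanup": "cleanup_old_metrics",
    "retention": "get_retention_stats",
}

def resolve(subcommand: str) -> str:
    """Return the MCP tool name, or raise with the list of valid subcommands."""
    if subcommand not in SUBCOMMANDS:
        raise ValueError(
            f"Unknown subcommand {subcommand!r}. Available: "
            + ", ".join(SUBCOMMANDS)
        )
    return SUBCOMMANDS[subcommand]
```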