Nixtla Baseline Review

by intent-solutions-io


---
name: nixtla-baseline-review
description: Analyze Nixtla baseline forecasting results (sMAPE/MASE on M4 or other benchmark datasets). Use when the user asks about baseline performance, model comparisons, or metric interpretation for Nixtla time-series experiments. Trigger with "baseline review", "interpret sMAPE/MASE", or "compare AutoETS vs AutoTheta".
allowed-tools: Read,Grep,Bash(ls:*)
version: 1.0.0
author: Jeremy Longshore <jeremy@intentsolutions.io>
license: MIT
---

Nixtla Baseline Review Skill

Overview

Analyze baseline forecasting results from the nixtla-baseline-m4 workflow. Interpret metrics, compare models, surface patterns, and recommend next steps.

When to Use This Skill

Activate this skill when the user:

  • Asks "Which baseline model performed best?"
  • Requests interpretation of sMAPE or MASE metrics
  • Wants to compare AutoETS vs AutoTheta vs SeasonalNaive
  • Says "Explain these baseline results"
  • Needs guidance on model selection based on baseline performance

For StatsForecast Power Users

This baseline lab is built on Nixtla's statsforecast library. What this plugin provides:

  • Real statsforecast models (SeasonalNaive, AutoETS, AutoTheta)
  • M4 dataset integration via datasetsforecast
  • Standard train/test evaluation with sMAPE and MASE metrics
  • Power-user controls: models, freq, season_length parameters
  • Demo preset mode for GitHub-style presentations

Important Disclaimers:

  • This is a community-built integration, not an official Nixtla product
  • Built to demonstrate Claude Code plugin capabilities with real Nixtla libraries
  • For production use cases, always validate against official Nixtla examples

Advanced Example Questions:

  • "Compare AutoETS vs AutoTheta on MASE only, and show me which series AutoETS loses on"
  • "Identify any series where SeasonalNaive still wins on sMAPE - what patterns do they share?"
  • "Given these statsforecast metrics, which series would you route to AutoTheta vs AutoETS and why?"

Prerequisites

  • Baseline results must exist in nixtla_baseline_m4/ directory
  • At minimum, results_*.csv file must be present
  • CSV format: columns series_id, model, sMAPE, MASE

Instructions

Step 1: Locate Results Files

Use Read or Bash tool to find baseline results:

# Check for results directory
ls -la nixtla_baseline_m4/

# Identify most recent results file
ls -t nixtla_baseline_m4/results_*.csv | head -1

Expected files:

  • results_M4_Daily_h{horizon}.csv - Full metrics table
  • summary_M4_Daily_h{horizon}.txt - Text summary (optional)
  • benchmark_report_*.md - Formatted report (optional)

If files are missing, inform the user they need to run /nixtla-baseline-m4 first.

Step 2: Load and Parse Metrics

Read the metrics CSV file:

# View first few rows to confirm format
head -10 nixtla_baseline_m4/results_M4_Daily_h*.csv

Expected CSV structure:

series_id,model,sMAPE,MASE
D1,SeasonalNaive,15.23,1.05
D1,AutoETS,13.45,0.92
D1,AutoTheta,12.34,0.87
D2,SeasonalNaive,18.67,1.23
...
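To sanity-check the file programmatically, a minimal pandas sketch follows. The inline sample stands in for a real results_*.csv (the actual filename comes from Step 1); the column check mirrors the expected structure above.

```python
import io
import pandas as pd

# In practice, read the real file located in Step 1, e.g.:
# df = pd.read_csv("nixtla_baseline_m4/results_M4_Daily_h14.csv")
# Here a small inline sample stands in for the file contents.
sample = io.StringIO(
    "series_id,model,sMAPE,MASE\n"
    "D1,SeasonalNaive,15.23,1.05\n"
    "D1,AutoETS,13.45,0.92\n"
    "D1,AutoTheta,12.34,0.87\n"
)
df = pd.read_csv(sample)

# Verify the expected columns before any analysis.
expected = {"series_id", "model", "sMAPE", "MASE"}
missing = expected - set(df.columns)
assert not missing, f"results file is missing columns: {sorted(missing)}"
```

If the assertion fails, fall back to the malformed-CSV message in Error Handling.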

Step 3: Calculate Summary Statistics

For each model (SeasonalNaive, AutoETS, AutoTheta), calculate:

  • Mean sMAPE: Average across all series
  • Median sMAPE: Middle value (less sensitive to outliers)
  • Standard Deviation: Measure of consistency
  • Series Won: Count how many series each model performed best on
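The four statistics above can be computed with a short pandas sketch. The inline sample is illustrative; in practice the DataFrame comes from the results CSV loaded in Step 2.

```python
import io
import pandas as pd

# Tiny inline sample in the documented CSV format; replace with
# the real results_*.csv in practice.
df = pd.read_csv(io.StringIO(
    "series_id,model,sMAPE,MASE\n"
    "D1,SeasonalNaive,15.23,1.05\n"
    "D1,AutoETS,13.45,0.92\n"
    "D1,AutoTheta,12.34,0.87\n"
    "D2,SeasonalNaive,18.67,1.23\n"
    "D2,AutoETS,14.10,0.99\n"
    "D2,AutoTheta,14.80,1.01\n"
))

# Mean, median, and standard deviation of sMAPE per model.
summary = df.groupby("model")["sMAPE"].agg(["mean", "median", "std"])

# Series won: for each series, the model with the lowest sMAPE.
winners = df.loc[df.groupby("series_id")["sMAPE"].idxmin(), "model"]
wins = winners.value_counts()
```

The same `groupby` pattern applies unchanged to the MASE column.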

Step 4: Interpret Metrics

For detailed metric interpretation, see resources/METRIC_INTERPRETATION.md.

Quick Reference:

  • sMAPE: 0% (perfect) to 200% (worst). Good: < 15%, Acceptable: 15-20%
  • MASE: < 1.0 means better than seasonal naive baseline
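For reference, the textbook definitions of both metrics can be sketched in Python. These are generic formulas, not necessarily the exact implementations the nixtla-baseline-m4 workflow uses, and edge cases (e.g. zero denominators) are not handled.

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric MAPE in percent, bounded by [0, 200]."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2
    return float(np.mean(np.abs(y_true - y_pred) / denom) * 100)

def mase(y_true, y_pred, y_train, season_length=1):
    """MASE: forecast MAE scaled by the in-sample seasonal-naive MAE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    scale = np.mean(np.abs(y_train[season_length:] - y_train[:-season_length]))
    return float(np.mean(np.abs(y_true - y_pred)) / scale)
```

Because MASE is scaled by the naive forecast's in-sample error, a value below 1.0 means the model beat the seasonal naive baseline on that series.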

Step 5: Generate Structured Explanation

Provide analysis in this format:

## Baseline Performance Analysis

Based on {N} series from M4-Daily with horizon={H}:

**Overall Winner**: {ModelName}
- Mean sMAPE: {X.XX}% (vs {Y.YY}% for second place)
- Wins on {N} out of {Total} series ({Percentage}%)
- Consistent performance (std dev {X.X}%)

**Key Insights**:
1. {Insight about winner's strengths}
2. {Insight about other models or patterns}
3. {Insight about failure cases or opportunities}

**Recommendation**: Use {ModelName} as production baseline. {Additional context}

**Next Steps**:
- {Actionable recommendation 1}
- {Actionable recommendation 2}

Examples

Example 1: Simple Query

User: "Which model was best?"

Response:

Based on the baseline results (50 series, horizon=14):

**AutoTheta won** with average sMAPE of 12.3%
- Outperformed AutoETS (13.5%) and SeasonalNaive (15.2%)
- Most consistent with std dev of 4.2%
- Recommended for production use

Example 2: Detailed Comparison

User: "Compare AutoETS vs AutoTheta performance"

Response:

## AutoETS vs AutoTheta Comparison

| Metric        | AutoETS | AutoTheta | Winner   |
|---------------|---------|-----------|----------|
| Mean sMAPE    | 13.5%   | 12.3%     | Theta ✓  |
| Median sMAPE  | 12.9%   | 11.8%     | Theta ✓  |
| Std Dev       | 5.1%    | 4.2%      | Theta ✓  |
| Series Won    | 18/50   | 32/50     | Theta ✓  |

**Key Differences**:
- Theta handles trends better (exponential smoothing with drift)
- ETS excels on strong seasonality (state space framework)
- Theta more consistent across diverse patterns

**Recommendation**: Use AutoTheta as default, fallback to AutoETS for highly seasonal data.

Example 3: Metric Interpretation

User: "What does sMAPE of 12.3% mean?"

Response:

sMAPE of 12.3% is **good performance** for forecasting:

- This means on average, predictions are off by about 12% from actual values
- For M4 Daily benchmark, anything under 15% is considered good
- This comfortably clears the 15-20% "acceptable" band from the quick reference

Context: If forecasting daily sales of around 100 units, a 12.3% sMAPE means your forecast will typically land within roughly ±12 units of the true value.

For more details, see resources/METRIC_INTERPRETATION.md

Advanced Features

Benchmark Reports

If a benchmark report exists (benchmark_report_*.md), use Read tool to view formatted summaries suitable for GitHub issues or documentation.

For details on benchmark reports, see the MCP server documentation.

TimeGPT Showdown

If TimeGPT comparison data is present, incorporate it into your analysis. See resources/TIMEGPT_COMPARISON.md for detailed guidance.

Key points:

  • Check timegpt_status field first
  • Emphasize limited sample size (typically 3-5 series)
  • Use language like "indicative" not "conclusive"
  • Never fabricate TimeGPT metrics that don't exist

GitHub Issue Drafts

Help users create GitHub issue drafts to share results with Nixtla maintainers. See resources/GITHUB_ISSUES.md for complete guidance.

When to suggest:

  • User wants to ask Nixtla maintainers a question
  • User suspects a bug or unexpected behavior
  • User wants to share benchmark results with the community

Output

  • A structured markdown analysis with an overall winner, key insights, and recommended next steps.
  • Optional: a short “power user” section highlighting series-level anomalies and failure cases.

Error Handling

If results files are missing:

I don't see baseline results in nixtla_baseline_m4/.

Please run the baseline command first:
Run: `/nixtla-baseline-m4 horizon=14 series_limit=50`

This will generate the metrics files I need to analyze.

If CSV is malformed:

The results file exists but appears malformed. Expected columns:
- series_id, model, sMAPE, MASE

Please re-run /nixtla-baseline-m4 to regenerate clean results.

Resources

For detailed information on specific topics:

  • Metric interpretation: resources/METRIC_INTERPRETATION.md
  • TimeGPT comparisons: resources/TIMEGPT_COMPARISON.md
  • GitHub issue drafts: resources/GITHUB_ISSUES.md

For complete technical details, see:

  • Architecture: 000-docs/6767-OD-ARCH-nixtla-claude-plugin-poc-baseline-lab.md
  • Planning: 000-docs/6767-PP-PLAN-nixtla-claude-plugin-poc-baseline-lab.md
  • Phase 3 AAR: 000-docs/017-AA-AACR-phase-03-mcp-baselines-nixtla-oss.md
  • Phase 8 AAR: 000-docs/022-AA-AACR-phase-08-timegpt-showdown-and-evals.md


Skill Information

Category: Skill
License: MIT
Version: 1.0.0
Allowed Tools: Read,Grep,Bash(ls:*)
Last Updated: 12/22/2025