Nixtla Experiment Architect
by intent-solutions-io
---
name: nixtla-experiment-architect
description: Scaffolds production-ready forecasting experiments with Nixtla libraries. Creates configuration files, experiment harnesses, multi-model comparisons, and cross-validation workflows for StatsForecast, MLForecast, and TimeGPT. Activates when user needs experiment setup, forecasting pipeline creation, model benchmarking, or multi-model comparison framework.
allowed-tools: "Read,Write,Glob,Grep,Edit"
version: "1.0.0"
license: MIT
---
Nixtla Experiment Architect
Design and scaffold complete forecasting experiments using Nixtla's libraries.
Overview
This skill creates production-ready experiment harnesses:
- Configuration management: YAML-based experiment config
- Multi-model comparison: StatsForecast + MLForecast + TimeGPT
- Cross-validation: Rolling-origin or expanding-window
- Metrics evaluation: SMAPE, MASE, MAE, RMSE
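The four metrics above have short closed forms; a minimal NumPy sketch using the common definitions (the skill's own scripts may compute them via utilsforecast instead):

```python
import numpy as np

def smape(y, yhat):
    # Symmetric MAPE in percent; 0 when forecasts match exactly.
    return 100 * np.mean(2 * np.abs(yhat - y) / (np.abs(y) + np.abs(yhat)))

def mase(y, yhat, y_train, season=1):
    # Errors scaled by the in-sample seasonal-naive MAE,
    # so MASE < 1 means the model beats the naive baseline.
    scale = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return np.mean(np.abs(yhat - y)) / scale

def mae(y, yhat):
    return np.mean(np.abs(yhat - y))

def rmse(y, yhat):
    return np.sqrt(np.mean((yhat - y) ** 2))
```

MASE needs the training series (and the seasonal period, e.g. 7 for daily data with weekly seasonality) in addition to the test window, which is why it is usually computed inside the cross-validation loop rather than after it.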
Prerequisites
Required:
- Python 3.8+
- statsforecast
- utilsforecast
Optional:
- mlforecast: For ML models
- nixtla: For TimeGPT
- NIXTLA_API_KEY: TimeGPT access
Installation:
pip install statsforecast mlforecast nixtla utilsforecast pyyaml
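Whether the optional pieces are usable can be checked up front; a small sketch (the package and environment-variable names come from the list above, the rest is illustrative):

```python
import importlib.util
import os

# Probe optional packages without importing them eagerly.
optional = {pkg: importlib.util.find_spec(pkg) is not None
            for pkg in ("mlforecast", "nixtla")}

# TimeGPT needs both the nixtla client and an API key in the environment.
timegpt_ready = optional["nixtla"] and bool(os.getenv("NIXTLA_API_KEY"))

print(optional, timegpt_ready)
```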
Instructions
Step 1: Gather Requirements
Collect experiment parameters:
- Data source path
- Target column name
- Forecast horizon (e.g., 14 days)
- Frequency (D, H, W, M)
- Unique ID column (optional)
Step 2: Generate Configuration
python {baseDir}/scripts/generate_config.py \
--data data/sales.csv \
--target sales \
--horizon 14 \
--freq D \
--output forecasting/config.yml
Step 3: Scaffold Experiment
python {baseDir}/scripts/scaffold_experiment.py \
--config forecasting/config.yml \
--output forecasting/experiments.py
Step 4: Run Experiment
python forecasting/experiments.py
Step 5: Review Results
cat forecasting/results/metrics_summary.csv
Output
- forecasting/config.yml: Experiment configuration
- forecasting/experiments.py: Runnable experiment harness
- forecasting/results/: Metrics and forecasts (after running)
Error Handling
- Error: Data file not found. Solution: Verify the data source path in the config.
- Error: Column not found. Solution: Check that column names match your data.
- Error: Missing required package. Solution: Install the missing dependencies with pip.
- Error: Cross-validation failed. Solution: Ensure there is enough data for n_windows.
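The last failure usually means the series is shorter than the cross-validation scheme requires; a rough sufficiency check for rolling-origin windows (step_size being the offset between consecutive cutoffs, with the exact minimum depending on the models used):

```python
def min_observations(horizon, n_windows, step_size=1, min_train=1):
    # Each of the n_windows cutoffs needs `horizon` future points;
    # consecutive cutoffs are `step_size` apart, plus a training head.
    return min_train + horizon + (n_windows - 1) * step_size

# e.g. horizon=14 with 3 non-overlapping windows (step_size=14)
# needs at least 43 observations per series
```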
Examples
Example 1: Daily Sales Forecast
python {baseDir}/scripts/generate_config.py \
--data data/sales.csv \
--target revenue \
--horizon 30 \
--freq D \
--id_col store_id
Output config.yml:
data:
  source: data/sales.csv
  target: revenue
  unique_id: store_id
forecasting:
  horizon: 30
  freq: D
models:
  - SeasonalNaive
  - AutoETS
  - AutoARIMA
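A generated harness typically turns the models list in this config into model instances; a hypothetical dispatch sketch (the registry below uses plain strings as stand-ins, where a real harness would import SeasonalNaive, AutoETS, and AutoARIMA from statsforecast.models):

```python
def build_models(config, registry):
    # Map each configured model name to a factory, failing loudly on typos.
    unknown = [n for n in config["models"] if n not in registry]
    if unknown:
        raise ValueError(f"unknown models in config: {unknown}")
    return [registry[n]() for n in config["models"]]

# Stand-in factories; a real harness would register model classes instead.
registry = {
    "SeasonalNaive": lambda: "SeasonalNaive",
    "AutoETS": lambda: "AutoETS",
    "AutoARIMA": lambda: "AutoARIMA",
}
config = {"models": ["SeasonalNaive", "AutoETS", "AutoARIMA"]}
models = build_models(config, registry)
```

Validating model names against a registry before any fitting happens keeps a config typo from surfacing halfway through a long experiment run.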
Example 2: Hourly Energy Forecast
python {baseDir}/scripts/generate_config.py \
--data data/energy.csv \
--target consumption \
--horizon 24 \
--freq H
Resources
- Scripts: {baseDir}/scripts/
- Templates: {baseDir}/assets/templates/
- Nixtla Docs: https://nixtla.github.io/
Related Skills:
- nixtla-timegpt-lab: Core forecasting guidance
- nixtla-schema-mapper: Data transformation
- nixtla-prod-pipeline-generator: Production deployment
