Verde
by SteadfastAsArt
---
name: verde
description: |
  Spatial data gridding and interpolation with a machine-learning style API.
  Process geographic and Cartesian point data onto regular grids. Use when
  Claude needs to: (1) Grid scattered spatial data onto regular grids,
  (2) Interpolate point data using splines, linear, or cubic methods,
  (3) Process geographic coordinates with projections, (4) Reduce large
  datasets using block averaging, (5) Remove polynomial trends from spatial
  data, (6) Cross-validate gridding parameters, (7) Create processing
  pipelines with Chain, (8) Grid vector data like GPS velocities.
---
# Verde - Spatial Data Gridding

## Quick Reference
```python
import verde as vd

# Basic gridding
spline = vd.Spline()
spline.fit(coordinates, values)  # coordinates = (lon, lat) tuple
grid = spline.grid(spacing=0.1, data_names=["elevation"])  # returns an xarray Dataset

# Access result (data_names sets the variable name; the default is "scalars")
elevation = grid.elevation.values

# Save output
grid.to_netcdf("output.nc")
```
## Key Classes

| Class | Purpose |
|---|---|
| `Spline` | Bi-harmonic spline interpolation (smooth, good extrapolation) |
| `Linear` | Delaunay triangulation (fast, no extrapolation) |
| `Cubic` | Cubic interpolation (medium smoothness) |
| `Chain` | Pipeline of processing steps |
| `BlockReduce` | Decimate data to block means/medians |
| `Trend` | Polynomial trend fitting and removal |
| `Vector` | Grid 2-component vector data |
## Essential Operations

### Grid Scattered Data

```python
coordinates = (longitude, latitude)  # tuple of 1D arrays
values = elevation  # 1D array

spline = vd.Spline()
spline.fit(coordinates, values)
grid = spline.grid(spacing=0.1, data_names=["elevation"])
```
### Project to Cartesian

```python
import pyproj

projection = pyproj.Proj(proj="merc", lat_ts=latitude.mean())
proj_coords = projection(longitude, latitude)

spline = vd.Spline()
spline.fit(proj_coords, values)
grid = spline.grid(spacing=1000)  # 1000 m spacing
```
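If you want intuition for why the spacing switches to meters after projecting, the spherical Mercator mapping is simple enough to sketch in plain NumPy (the `mercator` helper and radius constant below are illustrative, not part of verde or pyproj):

```python
import numpy as np

R = 6378137.0  # WGS84 equatorial radius in meters (spherical-Mercator assumption)

def mercator(lon_deg, lat_deg):
    """Project geographic degrees to spherical-Mercator meters."""
    lon = np.radians(np.asarray(lon_deg, dtype=float))
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    x = R * lon
    y = R * np.log(np.tan(np.pi / 4 + lat / 2))
    return x, y

x, y = mercator([0.0, 1.0], [0.0, 0.0])
print(x[1] - x[0])  # one degree of longitude at the equator, ~111 km in meters
```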
### Block Reduce Large Datasets

```python
import numpy as np

reducer = vd.BlockReduce(reduction=np.median, spacing=0.1)
coords_reduced, values_reduced = reducer.filter(coordinates, values)
```
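Conceptually, block reduction snaps each point to a block of the given spacing and keeps one statistic per occupied block. A minimal pure-NumPy sketch of the idea (the `block_median` helper is hypothetical, not verde's implementation):

```python
import numpy as np

def block_median(lon, lat, values, spacing):
    """Reduce scattered points to one median value per spacing-by-spacing block."""
    lon, lat, values = map(np.asarray, (lon, lat, values))
    # Integer block indices for each point
    ix = np.floor(lon / spacing).astype(int)
    iy = np.floor(lat / spacing).astype(int)
    groups = {}
    for i, j, v in zip(ix, iy, values):
        groups.setdefault((i, j), []).append(v)
    blocks = sorted(groups)
    medians = np.array([np.median(groups[b]) for b in blocks])
    return blocks, medians

blocks, medians = block_median(
    lon=[0.01, 0.02, 0.11], lat=[0.01, 0.03, 0.01],
    values=[1.0, 3.0, 10.0], spacing=0.1,
)
print(blocks, medians)  # two occupied blocks: median 2.0 and 10.0
```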
### Remove Trend Before Gridding

```python
trend = vd.Trend(degree=2)  # quadratic
trend.fit(coordinates, values)
residuals = values - trend.predict(coordinates)
# Grid the residuals, then add the predicted trend back to the result
```
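The remove-grid-restore round trip is the key pattern. A self-contained NumPy sketch using a 1-D linear trend (`np.polyfit` standing in for `vd.Trend`, which does the same for 2-D coordinates):

```python
import numpy as np

x = np.linspace(0, 10, 50)
signal = np.sin(x)              # "residual" structure we actually want to grid
trend_true = 2.0 * x + 1.0      # regional trend masking the signal
values = trend_true + signal

# Fit and remove the trend
coeffs = np.polyfit(x, values, deg=1)
trend_fit = np.polyval(coeffs, x)
residuals = values - trend_fit

# ... grid the residuals here ...

# Add the fitted trend back to recover the full field
restored = residuals + trend_fit
print(np.allclose(restored, values))  # True
```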
### Processing Pipeline

```python
import numpy as np

# Steps run in list order: decimate first, then detrend, then grid the residuals
chain = vd.Chain([
    ("reduce", vd.BlockReduce(np.median, spacing=0.05)),
    ("trend", vd.Trend(degree=1)),
    ("spline", vd.Spline()),
])
chain.fit(coordinates, values)
grid = chain.grid(spacing=0.01)
```
### Cross-Validation

```python
from sklearn.model_selection import KFold

spline = vd.Spline()
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = vd.cross_val_score(spline, coordinates, values, cv=cv)
print(f"Mean R²: {scores.mean():.3f}")
```
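Under the hood, K-fold cross-validation just partitions the point indices, fits on each training subset, and scores on the held-out fold. A minimal NumPy sketch of shuffled K-fold index generation (the `kfold_indices` helper is illustrative, not verde's or scikit-learn's implementation):

```python
import numpy as np

def kfold_indices(n_points, n_splits=5, seed=0):
    """Yield (train, test) index arrays for shuffled K-fold splits."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_points)
    folds = np.array_split(order, n_splits)
    for k in range(n_splits):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        yield train, test

splits = list(kfold_indices(100, n_splits=5))
print(len(splits))  # 5 train/test pairs; every point appears in exactly one test fold
```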
### Mask Far from Data

```python
grid = spline.grid(spacing=0.1)
# Passing grid= returns the grid with NaN at nodes farther than maxdist from any data point
grid_masked = vd.distance_mask(coordinates, maxdist=0.2, grid=grid)
```
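The masking idea itself is easy to sketch without verde: keep a grid node only if its distance to the nearest data point is within `maxdist`. A brute-force pairwise-distance version (the `distance_mask_sketch` helper is hypothetical, for illustration only):

```python
import numpy as np

def distance_mask_sketch(data_x, data_y, grid_x, grid_y, maxdist):
    """True where a grid node lies within maxdist of any data point."""
    dx = grid_x[:, None] - np.asarray(data_x)[None, :]
    dy = grid_y[:, None] - np.asarray(data_y)[None, :]
    nearest = np.sqrt(dx**2 + dy**2).min(axis=1)
    return nearest <= maxdist

gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
mask = distance_mask_sketch([0.0], [0.0], gx.ravel(), gy.ravel(), maxdist=0.3)
print(mask.sum())  # only the 3 nodes within 0.3 of the origin survive
```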
## Grid Parameters

| Parameter | Description |
|---|---|
| `spacing` | Grid cell size (same units as coordinates) |
| `region` | (west, east, south, north) bounds |
| `shape` | (n_north, n_east) grid dimensions |
| `adjust` | `'spacing'` or `'region'`: which to adjust for an exact fit |
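The relationship between `region`, `spacing`, and `shape` is plain arithmetic: with inclusive endpoints, the node count along each axis is the span divided by the spacing, plus one. A quick sketch (assumes the spacing divides the region evenly, so no adjustment is needed):

```python
region = (0.0, 10.0, 0.0, 5.0)  # (west, east, south, north)
spacing = 0.5

west, east, south, north = region
n_east = int(round((east - west) / spacing)) + 1
n_north = int(round((north - south) / spacing)) + 1
shape = (n_north, n_east)
print(shape)  # (11, 21)
```

When the spacing does not divide the region evenly, `adjust` decides whether the spacing or the region bounds are nudged to make the grid fit.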
## Gridder Comparison

| Gridder | Speed | Smoothness | Extrapolation |
|---|---|---|---|
| `Spline` | Medium | High | Good |
| `Linear` | Fast | Low | None |
| `Cubic` | Fast | Medium | None |
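The extrapolation column is the practical difference: triangulation-based interpolators return NaN outside the data's convex hull. A small demonstration of that behavior using SciPy's `griddata` (analogous to verde's `Linear`, not verde itself):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, size=(50, 2))
values = points[:, 0] + points[:, 1]  # a plane: z = x + y

inside = griddata(points, values, [(0.5, 0.5)], method="linear")
outside = griddata(points, values, [(2.0, 2.0)], method="linear")
print(inside, outside)  # exact plane value inside the hull, NaN outside it
```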
## Common Issues

| Issue | Solution |
|---|---|
| Poor extrapolation | Use `distance_mask()` to mask far from data |
| Slow with large data | Use `BlockReduce` first |
| Regional trends | Remove with `Trend` before gridding |
| Wrong spacing | Check coordinate units (degrees vs meters) |
## References

- Gridders - Available gridders and parameters
- Cross-Validation - Parameter tuning methods

## Scripts

- scripts/grid_data.py - Grid scattered data to NetCDF
