---
name: spl-to-apl
description: Translates Splunk SPL queries to Axiom APL. Provides command mappings, function equivalents, and syntax transformations. Use when migrating from Splunk, converting SPL queries, or learning APL equivalents of SPL patterns.
---

# SPL to APL Translator

by NeverSight

Expert translator from Splunk Processing Language (SPL) to Axiom Processing Language (APL).
## Critical Differences

- **Time is explicit in APL**: SPL time pickers don't translate. Add `where _time between (ago(1h) .. now())`.
- **Structure**: SPL `index=... | command` becomes APL `['dataset'] | operator`.
- **Join is a preview feature**: limited to 50k rows; only `inner`, `innerunique`, and `leftouter` kinds.
- **`cidrmatch` args are reversed**: SPL `cidrmatch(cidr, ip)` becomes APL `ipv4_is_in_range(ip, cidr)`.
## Core Command Mappings

| SPL | APL | Notes |
|---|---|---|
| `search index=...` | `['dataset']` | Dataset replaces index |
| `search field=value` | `where field == "value"` | Explicit where |
| `where` | `where` | Same |
| `stats` | `summarize` | Different aggregation syntax |
| `eval` | `extend` | Create/modify fields |
| `table` / `fields` | `project` | Select columns |
| `fields -` | `project-away` | Remove columns |
| `rename x as y` | `project-rename y = x` | Rename |
| `sort` / `sort -` | `order by ... asc/desc` | Sort |
| `head N` | `take N` | Limit rows |
| `top N field` | `summarize count() by field \| top N by count_` | Two-step |
| `dedup field` | `summarize arg_max(_time, *) by field` | Keep latest |
| `rex` | `parse` or `extract()` | Regex extraction |
| `join` | `join` | Preview feature |
| `append` | `union` | Combine datasets |
| `mvexpand` | `mv-expand` | Expand arrays |
| `timechart span=X` | `summarize ... by bin(_time, X)` | Manual binning |
| `rare N field` | `summarize count() by field \| order by count_ asc \| take N` | Bottom N |
| `spath` | `parse_json()` or `json['path']` | JSON access |
| `transaction` | No direct equivalent | Use `summarize` + `make_list` |
Complete mappings: `reference/command-mapping.md`
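To see how these mappings compose, here is a sketch translating a hypothetical SPL pipeline (the dataset and field names `web_logs`, `status`, and `host` are illustrative, and the explicit time filter is added per APL convention):

```
# SPL
index=web_logs status=500 | stats count by host | sort -count | head 10

# APL
['web_logs']
| where _time between (ago(1h) .. now())
| where status == 500
| summarize count_ = count() by host
| order by count_ desc
| take 10
```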
## Stats → Summarize

```
# SPL
| stats count by status

# APL
| summarize count() by status
```
### Key function mappings

| SPL | APL |
|---|---|
| `count` | `count()` |
| `count(field)` | `countif(isnotnull(field))` |
| `dc(field)` | `dcount(field)` |
| `avg`/`sum`/`min`/`max` | Same |
| `median(field)` | `percentile(field, 50)` |
| `perc95(field)` | `percentile(field, 95)` |
| `first`/`last` | `arg_min`/`arg_max(_time, field)` |
| `list(field)` | `make_list(field)` |
| `values(field)` | `make_set(field)` |
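As a sketch of how these aggregation mappings combine in a single query (the field names `user_id`, `duration`, and `endpoint` are illustrative):

```
# SPL
| stats dc(user_id) as users, perc95(duration) as p95 by endpoint

# APL
| summarize users = dcount(user_id), p95 = percentile(duration, 95) by endpoint
```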
### Conditional count pattern

```
# SPL
| stats count(eval(status>=500)) as errors by host

# APL
| summarize errors = countif(status >= 500) by host
```
Complete function list: `reference/function-mapping.md`
## Eval → Extend

```
# SPL
| eval new_field = old_field * 2

# APL
| extend new_field = old_field * 2
```
### Key function mappings

| SPL | APL | Notes |
|---|---|---|
| `if(c, t, f)` | `iff(c, t, f)` | Double 'f' |
| `case(c1,v1,...)` | `case(c1,v1,...,default)` | Requires default |
| `len(str)` | `strlen(str)` | |
| `lower`/`upper` | `tolower`/`toupper` | |
| `substr` | `substring` | 0-indexed in APL |
| `replace` | `replace_string` | |
| `tonumber` | `toint`/`tolong`/`toreal` | Explicit types |
| `match(s,r)` | `s matches regex "r"` | Operator |
| `split(s, d)` | `split(s, d)` | Same |
| `mvjoin(mv, d)` | `strcat_array(arr, d)` | Join array |
| `mvcount(mv)` | `array_length(arr)` | Array length |
### Case statement pattern

```
# SPL
| eval level = case(
    status >= 500, "error",
    status >= 400, "warning",
    1==1, "ok"
)

# APL
| extend level = case(
    status >= 500, "error",
    status >= 400, "warning",
    "ok"
)
```

Note: SPL's `1==1` catch-all becomes the implicit default (trailing argument) in APL.
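The scalar mappings above compose the same way inside a single `eval`/`extend`; a sketch with an assumed `method` field:

```
# SPL
| eval m = lower(method), is_post = if(m == "post", 1, 0)

# APL
| extend m = tolower(method), is_post = iff(m == "post", 1, 0)
```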
## Rex → Parse/Extract

```
# SPL
| rex field=message "user=(?<username>\w+)"

# APL - parse with regex
| parse kind=regex message with @"user=(?P<username>\w+)"

# APL - extract function
| extend username = extract("user=(\\w+)", 1, message)
```

### Simple pattern (non-regex)

```
# SPL
| rex field=uri "^/api/(?<version>v\d+)/(?<endpoint>\w+)"

# APL
| parse uri with "/api/" version "/" endpoint
```
## Time Handling

SPL time pickers don't translate. Always add an explicit time range:

```
# SPL (time picker: Last 24 hours)
index=logs

# APL
['logs'] | where _time between (ago(24h) .. now())
```

### Timechart translation

```
# SPL
| timechart span=5m count by status

# APL
| summarize count() by bin(_time, 5m), status
```
## Common Patterns

### Error rate calculation

```
# SPL
| stats count(eval(status>=500)) as errors, count as total by host
| eval error_rate = errors/total*100

# APL
| summarize errors = countif(status >= 500), total = count() by host
| extend error_rate = toreal(errors) / total * 100
```

### Subquery (subsearch)

```
# SPL
index=logs [search index=errors | fields user_id | format]

# APL
let error_users = ['errors'] | where _time between (ago(1h) .. now()) | distinct user_id;
['logs']
| where _time between (ago(1h) .. now())
| where user_id in (error_users)
```

### Join datasets

```
# SPL
| join user_id [search index=users | fields user_id, name]

# APL
| join kind=inner (['users'] | project user_id, name) on user_id
```

### Transaction-like grouping

```
# SPL
| transaction session_id maxspan=30m

# APL (no direct equivalent — reconstruct with summarize)
| summarize
    start_time = min(_time),
    end_time = max(_time),
    events = make_list(pack("time", _time, "action", action)),
    duration = max(_time) - min(_time)
  by session_id
| where duration <= 30m
```
## String Matching Performance

| SPL | APL | Speed |
|---|---|---|
| `field="value"` | `field == "value"` | Fastest |
| `field="*value*"` | `field contains "value"` | Moderate |
| `field="value*"` | `field startswith "value"` | Fast |
| `match(field, regex)` | `field matches regex "..."` | Slowest |
Prefer `has` over `contains`: word-boundary matching is faster than substring scanning. When case-sensitive matching is acceptable, use the `_cs` variants (e.g. `contains_cs`), which are faster still.
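For example, when matching a whole word in an assumed `message` field, the options rank roughly as follows (a sketch; the `has_cs` variant is assumed from the `_cs` naming convention):

```
# Moderate: substring scan
| where message contains "timeout"

# Faster: word-boundary match
| where message has "timeout"

# Faster still when case is known: case-sensitive variant
| where message has_cs "timeout"
```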
## Reference

- `reference/command-mapping.md`: complete command list
- `reference/function-mapping.md`: complete function list
- `reference/examples.md`: full query translation examples
- APL docs: https://axiom.co/docs/apl/introduction
