# dtctl

by dynatrace-oss

Use the dtctl CLI tool to query observability data in Dynatrace via DQL (logs, metrics, traces, ...) and to manage Dynatrace platform resources (workflows, dashboards, notebooks, SLOs, settings, buckets, lookup tables).
License: Apache-2.0
## Command Reference

### Syntax

```
dtctl <verb> <resource> [name/id] [flags]
```

**Verbs:** get, describe, create, edit, apply, delete, exec, query, logs, wait, history, restore, share/unshare

**Key resources:** workflow (wf), dashboard (dash), notebook (nb), slo, bucket (bkt), lookup (lkup), settings, analyzer (az), copilot (cp)

**Global flags:**

- `--context`: switch environment
- `-o, --output`: json|yaml|table|wide|csv|chart|sparkline|barchart
- `--dry-run`: preview without executing
- `--plain`: machine-readable output
### Setup

```shell
# Configure a context
dtctl config set-context prod --environment "https://abc.apps.dynatrace.com" --token-ref prod-token --safety-level readonly
dtctl config set-credentials prod-token --token "dt0s16.YOUR_TOKEN"
dtctl config use-context prod
```

Safety levels: `readonly` | `readwrite-mine` | `readwrite-all` | `dangerously-unrestricted`
## Common Commands

### Workflows

```shell
dtctl get workflows --mine
dtctl edit workflow <id>
dtctl apply -f workflow.yaml --set env=prod
dtctl exec workflow <id> --wait --timeout 10m
dtctl logs wfe <execution-id> --follow
dtctl history workflow <id>
```
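`dtctl apply -f workflow.yaml` expects a workflow definition file. A minimal sketch follows, assuming a schedule trigger and a DQL task; the field names and the action identifier are illustrative assumptions, not a verified schema (export an existing workflow with `dtctl get workflow <id> -o yaml` to see the real shape):

```yaml
# workflow.yaml -- hypothetical minimal workflow definition
title: "nightly-error-report ({{.env}})"  # {{.env}} is filled by --set env=prod
trigger:
  schedule:
    cron: "0 2 * * *"                     # daily at 02:00
tasks:
  fetch_errors:
    # Action identifier is an assumption; copy it from an exported workflow.
    action: dynatrace.automations:execute-dql-query
    input:
      query: 'fetch logs | filter status == "ERROR" | summarize count(), by: {host.name}'
```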
### Dashboards/Notebooks

```shell
dtctl get dashboards --mine
dtctl edit dashboard <id>
dtctl share dashboard <id> --user user@example.com --access read-write
dtctl history dashboard <id>
dtctl restore dashboard <id> 3
```
### DQL Queries

```shell
dtctl query 'fetch logs | filter status == "ERROR" | limit 100'
dtctl query -f query.dql --set host=h-123 --set timerange=2h
dtctl query 'timeseries avg(dt.host.cpu.usage)' -o chart
dtctl wait query 'fetch spans | filter test_id == "test-123"' --for=count=1 --timeout 5m
```

Wait conditions: `count=N`, `count-gte=N`, `count-gt=N`, `count-lte=N`, `count-lt=N`, `any`, `none`

Template syntax in `.dql` files:

```
fetch logs
| filter host.name == "{{.host}}"
| filter timestamp > now() - {{.timerange | default "1h"}}
```
### Lookup Tables

```shell
dtctl create lookup -f data.csv --path /lookups/grail/pm/errors --lookup-field code
dtctl get lookups
dtctl get lookup /lookups/grail/pm/errors -o csv > backup.csv

# Use in DQL
dtctl query "fetch logs | lookup [load '/lookups/grail/pm/errors'], lookupField:status_code"
```
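The CSV passed to `dtctl create lookup` needs a header row naming its columns; the column given to `--lookup-field` becomes the key you join on. A hypothetical file matching the `--lookup-field code` example:

```csv
code,category,description
500,server,Internal server error
502,server,Bad gateway
404,client,Not found
```

In the DQL `lookup` example above, each record's `status_code` would then be matched against the table's key field to pull in `category` and `description`.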
### Settings API

```shell
dtctl get settings-schemas | grep openpipeline
dtctl get settings --schema builtin:openpipeline.logs.pipelines
dtctl edit setting <object-id>
dtctl apply -f config.yaml --set env=prod
```
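A settings object for `dtctl apply -f config.yaml` can be sketched as below. The top-level field names (`schemaId`, `scope`, `value`) and the value payload are assumptions, since each schema defines its own value structure; exporting an existing object with `dtctl get settings --schema <schema-id> -o yaml` shows the authoritative shape:

```yaml
# config.yaml -- hypothetical settings object; the value payload depends on the schema
schemaId: builtin:openpipeline.logs.pipelines
scope: environment
value:
  # Illustrative payload only; the real fields come from the schema definition.
  enabled: true
```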
### SLOs

```shell
dtctl get slos
dtctl describe slo <id>
dtctl exec slo <id> -o json
dtctl apply -f slo.yaml
```
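`dtctl apply -f slo.yaml` takes an SLO definition file. A rough sketch, under the assumption that the file mirrors Dynatrace SLO objects (a name, an SLI reference, and target/warning thresholds); every field name here is illustrative, so export an existing SLO with `dtctl get slo <id> -o yaml` for the real schema:

```yaml
# slo.yaml -- hypothetical SLO definition; all field names are illustrative
name: "checkout-availability"
criteria:
  - timeframeFrom: now-7d
    timeframeTo: now
    target: 99.5
    warning: 99.8
sliReference: "<reference to the SLI/DQL definition, schema-dependent>"
```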
### Davis AI

```shell
dtctl get analyzers
dtctl exec analyzer dt.statistics.GenericForecastAnalyzer --query "timeseries avg(dt.host.cpu.usage)" -o chart
dtctl exec copilot "What caused the CPU spike?"
dtctl exec copilot nl2dql "show error logs from last hour"
```
### Template Variables

In YAML files, use `{{.variable}}` or `{{.variable | default "value"}}`:

```shell
dtctl apply -f workflow.yaml --set environment=prod --set owner=team-a
```
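Combined, the placeholders and `--set` flags work like Go templates: each `--set key=value` fills `{{.key}}`, and `| default` supplies a fallback when the flag is omitted. A sketch with hypothetical fields:

```yaml
# Resolved by: dtctl apply -f workflow.yaml --set environment=prod --set owner=team-a
title: "deploy-monitor-{{.environment}}"   # becomes deploy-monitor-prod
owner: "{{.owner}}"                        # becomes team-a
description: "{{.notes | default \"managed by dtctl\"}}"  # fallback used when --set notes=... is omitted
```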
### Troubleshooting

```shell
dtctl auth whoami                  # Check authentication
dtctl auth can-i create workflows  # Check permissions
dtctl config set-credentials <credential-name> --token "dt0s16.NEW_TOKEN"  # Rotate a token
dtctl --help                       # Command help
```

- Name resolution: use IDs instead of names when a name is ambiguous (`dtctl get dashboards` to find the ID)
- Safety blocks: adjust the context's safety level or switch to a different context
- Permissions: check token scopes at https://github.com/dynatrace-oss/dtctl/blob/main/docs/TOKEN_SCOPES.md
