---
name: dhis2-analytics
description: Query aggregated analytics data from DHIS2. Use for calculated/aggregated values, indicator values, or cross-dimensional analysis. Routed via dhis2 skill for general DHIS2 requests.
---

# DHIS2 Analytics

by BLSQ

Query aggregated analytics data from DHIS2.
Prerequisites:

- Client setup from the `dhis2` skill (assumes `dhis` is initialized)
- For large queries, see the `dhis2-query-optimization` skill
## Get Analytics Data

```python
# Basic query with data elements
data = dhis.analytics.get(
    data_elements=["fbfJHSPpUQD", "cYeuwXTCPkU"],
    org_units=["ImspTQPwCqd"],
    periods=["202401", "202402", "202403"]
)

# With indicators
data = dhis.analytics.get(
    indicators=["ReUHfIn0pTQ"],
    org_units=["ImspTQPwCqd"],
    periods=["2024"]
)

# By data element group
data = dhis.analytics.get(
    data_element_groups=["oDkJh5Ddh7d"],
    org_units=["ImspTQPwCqd"],
    periods=["2024"]
)

# By indicator group
data = dhis.analytics.get(
    indicator_groups=["oehv9EO3vP7"],
    org_units=["ImspTQPwCqd"],
    periods=["2024"]
)

# By org unit group
data = dhis.analytics.get(
    data_elements=["fbfJHSPpUQD"],
    org_unit_groups=["CXw2yu5fodb"],
    periods=["2024"]
)

# By org unit level
data = dhis.analytics.get(
    data_elements=["fbfJHSPpUQD"],
    org_unit_levels=[2, 3],        # Districts and facilities
    org_units=["ImspTQPwCqd"],     # Parent org unit
    periods=["2024"]
)

# Include category option combos
data = dhis.analytics.get(
    data_elements=["fbfJHSPpUQD"],
    org_units=["ImspTQPwCqd"],
    periods=["2024"],
    include_cocs=True
)
```
## Custom API Endpoint (Alternative)

For advanced queries not covered by the toolbox:

```python
def get_analytics_raw(dhis, dimension: list, filter_dims: list = None) -> dict:
    """Query analytics with custom dimensions."""
    params = {
        "dimension": dimension,
        "skipMeta": False,
        "skipData": False
    }
    if filter_dims:
        params["filter"] = filter_dims
    return dhis.api.get("analytics", params=params)

# Example: custom dimension query
response = get_analytics_raw(
    dhis,
    dimension=[
        "dx:fbfJHSPpUQD;cYeuwXTCPkU",  # Data elements
        "pe:LAST_12_MONTHS",           # Relative period
        "ou:LEVEL-2;ImspTQPwCqd"       # Level 2 under parent
    ]
)
```
## Get Analytics Table Format

```python
import pandas as pd

def get_analytics_table(dhis, params: dict) -> tuple[pd.DataFrame, dict]:
    """Get analytics in table format, plus ID-to-name metadata."""
    response = dhis.api.get(
        "analytics",
        params={**params, "skipMeta": False}
    )
    headers = [h["name"] for h in response.get("headers", [])]
    rows = response.get("rows", [])
    df = pd.DataFrame(rows, columns=headers)
    # metaData.items maps IDs to names, for enriching the frame later
    metadata = response.get("metaData", {}).get("items", {})
    return df, metadata
```
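The parsing step can be exercised offline. Below is a sketch against a hypothetical response payload shaped like the DHIS2 `/api/analytics` JSON (the IDs and values are illustrative, not real data):

```python
import pandas as pd

# Hypothetical analytics response, shaped like the /api/analytics JSON
sample_response = {
    "headers": [{"name": "dx"}, {"name": "ou"}, {"name": "pe"}, {"name": "value"}],
    "rows": [
        ["fbfJHSPpUQD", "ImspTQPwCqd", "202401", "4522"],
        ["fbfJHSPpUQD", "ImspTQPwCqd", "202402", "4198"],
    ],
    "metaData": {"items": {"fbfJHSPpUQD": {"name": "ANC 1st visit"}}},
}

headers = [h["name"] for h in sample_response.get("headers", [])]
df = pd.DataFrame(sample_response.get("rows", []), columns=headers)
metadata = sample_response.get("metaData", {}).get("items", {})

# Analytics values arrive as strings; cast before aggregating
df["value"] = pd.to_numeric(df["value"])
print(df["value"].sum())  # 8720
```

Note that row values come back as strings, so numeric columns need an explicit cast before any aggregation.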
## Dimension Syntax

| Dimension | Syntax | Example |
|---|---|---|
| Data (dx) | `dx:id1;id2` | `dx:fbfJHSPpUQD;ReUHfIn0pTQ` |
| Period (pe) | `pe:period1;period2` | `pe:202401;202402` |
| Org Unit (ou) | `ou:id1;id2` | `ou:ImspTQPwCqd` |
| Org Unit Level | `ou:LEVEL-n` | `ou:LEVEL-2` |
| Org Unit Group | `ou:OU_GROUP-id` | `ou:OU_GROUP-CXw2yu5fodb` |
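The syntax above can be composed programmatically. A minimal sketch of a helper that builds dimension strings (the function name and signature are illustrative, not part of the toolbox API):

```python
def build_dimensions(dx=None, pe=None, ou=None):
    """Compose DHIS2 analytics dimension strings using the dx/pe/ou syntax.

    Each argument is a list of IDs or keywords (e.g. "LEVEL-2",
    "OU_GROUP-CXw2yu5fodb"); items within a dimension are joined with ';'.
    """
    dims = []
    if dx:
        dims.append("dx:" + ";".join(dx))
    if pe:
        dims.append("pe:" + ";".join(pe))
    if ou:
        dims.append("ou:" + ";".join(ou))
    return dims

dims = build_dimensions(
    dx=["fbfJHSPpUQD", "ReUHfIn0pTQ"],
    pe=["LAST_12_MONTHS"],
    ou=["LEVEL-2", "ImspTQPwCqd"],
)
print(dims)
# ['dx:fbfJHSPpUQD;ReUHfIn0pTQ', 'pe:LAST_12_MONTHS', 'ou:LEVEL-2;ImspTQPwCqd']
```

The returned list can be passed straight to the `dimension` parameter of the raw endpoint shown earlier.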
## Relative Periods

| Period | Description |
|---|---|
| `THIS_MONTH` | Current month |
| `LAST_MONTH` | Previous month |
| `LAST_3_MONTHS` | Last 3 months |
| `LAST_6_MONTHS` | Last 6 months |
| `LAST_12_MONTHS` | Last 12 months |
| `THIS_QUARTER` | Current quarter |
| `LAST_QUARTER` | Previous quarter |
| `LAST_4_QUARTERS` | Last 4 quarters |
| `THIS_YEAR` | Current year |
| `LAST_YEAR` | Previous year |
| `LAST_5_YEARS` | Last 5 years |
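When a window other than the supported relative keywords is needed, explicit `YYYYMM` period strings can be generated instead. A sketch (helper name is illustrative):

```python
from datetime import date

def last_n_months(n, today=None):
    """Expand an n-month lookback window into explicit YYYYMM period strings,
    ending with the month before `today` (mirroring LAST_n_MONTHS semantics)."""
    today = today or date.today()
    periods = []
    y, m = today.year, today.month
    for _ in range(n):
        m -= 1
        if m == 0:
            y, m = y - 1, 12
        periods.append(f"{y}{m:02d}")
    return list(reversed(periods))

print(last_n_months(3, today=date(2024, 4, 15)))
# ['202401', '202402', '202403']
```

Explicit periods trade the server-side convenience of relative keywords for full control over the window, e.g. when chunking large queries.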
## Analytics vs Data Values
| Aspect | Analytics | Data Values |
|---|---|---|
| Aggregation | Yes (server-side) | No (raw) |
| Indicators | Yes | No |
| Performance | Better for aggregated | Better for raw |
| Disaggregation | Optional | Always included |
| Calculated values | Yes | No |
## Enriching Results

```python
# Add names to analytics results
df = data  # DataFrame from analytics.get()

# Add data element/indicator names
df = dhis.meta.add_dx_name_column(df, "dx")

# Add org unit names
df = dhis.meta.add_org_unit_name_column(df, "ou")

# Add org unit hierarchy
df = dhis.meta.add_org_unit_parent_columns(df, "ou")
```
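If the toolbox helpers are unavailable, the same enrichment can be done from the `metaData.items` mapping returned by the analytics endpoint. A sketch with hypothetical IDs and names:

```python
import pandas as pd

# Hypothetical metaData "items" mapping, as returned alongside analytics rows
items = {
    "fbfJHSPpUQD": {"name": "ANC 1st visit"},
    "ImspTQPwCqd": {"name": "Sierra Leone"},
}

df = pd.DataFrame({"dx": ["fbfJHSPpUQD"], "ou": ["ImspTQPwCqd"], "value": ["4522"]})

# Map each ID column to its display name via the metadata items
for col in ("dx", "ou"):
    df[f"{col}_name"] = df[col].map(lambda uid: items.get(uid, {}).get("name"))

print(df[["dx_name", "ou_name"]].iloc[0].tolist())
# ['ANC 1st visit', 'Sierra Leone']
```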
## Advanced Query Examples

### Aggregate by Org Unit Level

```python
import pandas as pd

def get_aggregated_by_level(dhis, data_elements: list, level: int, periods: list) -> pd.DataFrame:
    """Get data aggregated at a specific org unit level."""
    response = dhis.api.get(
        "analytics",
        params={
            "dimension": [
                f"dx:{';'.join(data_elements)}",
                f"ou:LEVEL-{level}",
                f"pe:{';'.join(periods)}"
            ],
            "aggregationType": "SUM"
        }
    )
    headers = [h["name"] for h in response.get("headers", [])]
    return pd.DataFrame(response.get("rows", []), columns=headers)
```
### Time Series for Single Indicator

```python
def get_time_series(dhis, indicator_id: str, org_unit_id: str, periods: int = 12) -> pd.DataFrame:
    """Get a monthly time series for an indicator.

    Note: DHIS2 only defines specific relative periods (e.g. LAST_3_MONTHS,
    LAST_6_MONTHS, LAST_12_MONTHS), so `periods` must match one of them.
    """
    return dhis.analytics.get(
        indicators=[indicator_id],
        org_units=[org_unit_id],
        periods=[f"LAST_{periods}_MONTHS"]
    )
```
## Performance Tips

- **Use relative periods** - More efficient than listing individual periods
- **Aggregate at higher levels** - Query `LEVEL-2` instead of all facilities
- **Limit data dimensions** - Don't query all data elements
- **Enable caching** - Results are cached based on the query
- **Use `skipMeta=True`** - If you don't need metadata
## Large Query Handling

⚠️ For large queries, use the `dhis2-query-optimization` skill.

Queries can fail when:

- Using `children=True` with a country-level org unit
- Requesting many periods (>12 months)
- Requesting many data elements (>20)

The optimization skill provides:

- Complexity estimation
- Chunking strategies
- Timeout handling
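As a minimal fallback when the optimization skill is not in use, a long period list can be split into query-sized chunks. A simple sketch (the helper name and the 12-period limit are illustrative assumptions, not toolbox API):

```python
def chunk_periods(periods, max_per_query=12):
    """Split a long period list into chunks small enough for one query each.

    A naive fixed-size strategy; the dhis2-query-optimization skill offers
    complexity-aware chunking instead.
    """
    return [periods[i:i + max_per_query] for i in range(0, len(periods), max_per_query)]

months = [f"2023{m:02d}" for m in range(1, 13)] + [f"2024{m:02d}" for m in range(1, 7)]
print([len(c) for c in chunk_periods(months)])  # [12, 6]
```

Each chunk is then queried separately and the resulting frames concatenated.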
## Related Skills

- **Xlsx**: Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. Use when working with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) for: (1) creating new spreadsheets with formulas and formatting, (2) reading or analyzing data, (3) modifying existing spreadsheets while preserving formulas, (4) data analysis and visualization in spreadsheets, or (5) recalculating formulas.
- **Clickhouse Io**: ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.
- **Analyzing Financial Statements**: Calculates key financial ratios and metrics from financial statement data for investment analysis.
- **Data Storytelling**: Transform data into compelling narratives using visualization, context, and persuasive structure. Use when presenting analytics to stakeholders, creating data reports, or building executive presentations.
- **Kpi Dashboard Design**: Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboards, selecting metrics, or designing data visualization layouts.
- **Dbt Transformation Patterns**: Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.
- **Sql Optimization Patterns**: Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when debugging slow queries, designing database schemas, or optimizing application performance.
- **Anndata**: For annotated data matrices in Python, particularly single-cell genomics analysis, managing experimental measurements with metadata, or handling large-scale biological datasets. Use for AnnData objects, h5ad files, single-cell RNA-seq data, or integration with scanpy/scverse tools.
