DHIS2
by BLSQ
Work with DHIS2 health information systems - extract data, query metadata, run validations, manage users, or check system info. Use this skill for ANY DHIS2-related request. It routes to specialized sub-skills and provides centralized client setup and API patterns.
DHIS2
Main entry point for all DHIS2 interactions. This skill provides:
- Client setup - how to initialize the DHIS2 client
- API basics - how to use `dhis.api.get()` and other methods
- Routing - which sub-skill to use for specific tasks
- Query optimization - reference to the dhis2-query-optimization skill for large queries
Client Setup (Centralized)
All DHIS2 code uses this setup. Sub-skills assume `dhis` is already initialized.

```python
from openhexa.sdk import workspace
from openhexa.toolbox.dhis2 import DHIS2

# Get connection identifier from workspace
dhis2_connection = workspace.dhis2_connection("connection_identifier")

# Initialize client with caching (recommended)
dhis = DHIS2(dhis2_connection, cache_dir=f"{workspace.files_path}/.cache")
```
Find Available Connections
```python
from openhexa.sdk import workspace

# List all connections to find DHIS2 connection identifiers
for conn in workspace.list_connections():
    print(f"{conn.identifier}: {conn.type}")
```
API Basics
The `dhis` client provides several interfaces:
Toolbox Methods (Preferred)
High-level methods that handle pagination and return DataFrames:
```python
# Analytics - aggregated data
df = dhis.analytics.get(
    data_elements=["fbfJHSPpUQD"],
    org_units=["ImspTQPwCqd"],
    periods=["202401", "202402"],
)

# Data values - raw submitted data
df = dhis.data_value_sets.get(
    data_sets=["BfMAe6Itzgt"],
    org_units=["ImspTQPwCqd"],
    periods=["202401"],
)

# Metadata
df_de = dhis.meta.data_elements()
df_ind = dhis.meta.indicators()
df_ou = dhis.meta.organisation_units()
df_ds = dhis.meta.datasets()

# Tracker events
df = dhis.tracker.extract_event_data_values(
    programs=["IpHINAT79UW"],
    org_units=["ImspTQPwCqd"],
    start_date="2024-01-01",
    end_date="2024-12-31",
)
```
Raw API Access
For endpoints without toolbox methods, use `dhis.api.get()`:

```python
# GET request
response = dhis.api.get("endpoint", params={"key": "value"})

# Examples
system_info = dhis.api.get("system/info")
users = dhis.api.get("users", params={"fields": "id,name,email", "paging": "false"})
validation = dhis.api.get("validationRules", params={"fields": "*"})

# With path parameters
user = dhis.api.get(f"users/{user_id}", params={"fields": "id,name,organisationUnits"})

# POST request
response = dhis.api.post("endpoint", data=payload)
```
Response Handling
```python
# API responses are dictionaries
response = dhis.api.get("dataElements", params={"paging": "false"})
data_elements = response.get("dataElements", [])

# Pagination (when paging=true or default)
response = dhis.api.get("dataElements", params={"pageSize": 50, "page": 1})
pager = response.get("pager", {})
total = pager.get("total")
page_count = pager.get("pageCount")
```
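The pager fields can drive a simple loop that collects every page. This sketch takes the page-fetching call as a function so it stands alone; with a live client you would pass something like `lambda page: dhis.api.get("dataElements", params={"pageSize": 50, "page": page})` (hypothetical usage, assuming the pager shape shown above):

```python
def fetch_all_pages(get_page, item_key):
    """Collect items from every page of a paginated DHIS2 response.

    get_page: callable taking a 1-based page number and returning the
              response dict (with a "pager" and the item list under item_key).
    """
    items = []
    page = 1
    while True:
        response = get_page(page)
        items.extend(response.get(item_key, []))
        pager = response.get("pager", {})
        if page >= pager.get("pageCount", 1):
            break
        page += 1
    return items
```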
Routing Decision Tree
| User Request | Route To | Toolbox Method |
|---|---|---|
| Aggregated/indicator values | dhis2-analytics | `dhis.analytics.get()` |
| Raw submitted values | dhis2-data-values | `dhis.data_value_sets.get()` |
| Tracker/individual data | dhis2-tracker | `dhis.tracker.*` / `dhis.api.get()` |
| Validation rules/violations | dhis2-validation | `dhis.api.get()` |
| User info/permissions | dhis2-users | `dhis.api.get()` |
| System version/config | dhis2-system-info | `dhis.api.get()` |
| Data element metadata | dhis2-data-elements | `dhis.meta.data_elements()` |
| Indicator metadata | dhis2-indicators | `dhis.meta.indicators()` |
| Org unit hierarchy | dhis2-organisation-units | `dhis.meta.organisation_units()` |
| Dataset metadata | dhis2-datasets | `dhis.meta.datasets()` |
| Saved visualizations | dhis2-visualizations | `dhis.api.get()` |
Query Optimization
IMPORTANT: For large queries, see the dhis2-query-optimization skill.

Large queries can fail due to:
- URL too long - max ~1900 characters
- Server timeout - too much data requested
- Memory issues - response too large

Common triggers:
- `children=True` with a high-level org unit (e.g., country)
- Many periods (e.g., 5 years monthly = 60 periods)
- Many data elements/indicators (>20)

The toolbox handles chunking for explicit lists, but NOT for:
- `children=True` / `includeDescendants=True`
- Very large explicit lists

Always estimate query complexity before executing large queries.
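When the toolbox cannot chunk for you, splitting ID lists manually keeps each request small. A minimal sketch (the batch size of 20 and the placeholder IDs are assumptions; tune the size to your instance, and the commented-out client call shows where the real query would go):

```python
def chunked(items, size):
    """Yield successive fixed-size batches from a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Example: split a long data-element list into batches of 20,
# then query each batch separately (client call shown as a comment).
data_elements = [f"de{i:03d}" for i in range(45)]  # placeholder IDs
batches = list(chunked(data_elements, 20))
# for batch in batches:
#     df = dhis.analytics.get(data_elements=batch, org_units=[...], periods=[...])
print([len(b) for b in batches])  # [20, 20, 5]
```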
Quick Reference: Data Types
| Aspect | Analytics | Data Values | Tracker |
|---|---|---|---|
| Data type | Aggregated | Raw submitted | Individual-level |
| API endpoint | `/api/analytics` | `/api/dataValueSets` | `/api/tracker/*` |
| Indicators | Yes | No | No |
| Aggregation | Server-side | None | None |
| Use for | Reports, trends | Audits, raw exports | Case management |
Ambiguous Request Handling
"Extract data from DHIS2"
Ask: "What type of data?"
- Aggregated/calculated → dhis2-analytics
- Raw as submitted → dhis2-data-values
- Individual/patient → dhis2-tracker
"Get DHIS2 metadata"
Ask: "Which metadata?"
- Data elements → dhis2-data-elements
- Indicators → dhis2-indicators
- Org units → dhis2-organisation-units
- Datasets → dhis2-datasets
Multi-Skill Scenarios
Complete Data Export
1. dhis2-organisation-units - get hierarchy
2. dhis2-data-elements - get metadata
3. dhis2-analytics or dhis2-data-values - get data
4. dhis2-query-optimization - if large query
Data Quality Analysis
1. dhis2-validation - get violations
2. dhis2-organisation-units - add org unit names
3. dhis2-data-elements - add data element names
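The "add names" steps in this scenario amount to mapping IDs to display names. A pure-Python sketch with placeholder records (in practice the lookup would be built from `dhis.meta.organisation_units()`, and the shape of the violation record here is an assumption):

```python
# Placeholder metadata rows; real ones come from the metadata sub-skills.
org_units = [{"id": "ImspTQPwCqd", "name": "Sierra Leone"}]
# Placeholder violation records (field names assumed for illustration).
violations = [{"orgUnit": "ImspTQPwCqd", "rule": "ANC1 >= ANC2"}]

# Build an id -> name lookup, then annotate each violation.
ou_names = {ou["id"]: ou["name"] for ou in org_units}
for v in violations:
    v["orgUnitName"] = ou_names.get(v["orgUnit"], "(unknown)")

print(violations[0]["orgUnitName"])  # Sierra Leone
```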
Version Compatibility
| Feature | Min Version |
|---|---|
| New Tracker API | 2.38 |
| Enrollment analytics | 2.39 |
| Continuous analytics | 2.40 |
Check with dhis2-system-info before using version-specific features.
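A version gate can be sketched as follows. The `version` field of `system/info` is the standard DHIS2 key, but confirm it on your instance; the helper assumes purely numeric dotted versions (a suffix like `-SNAPSHOT` would need extra handling):

```python
def meets_min_version(version: str, minimum: str) -> bool:
    """Compare dotted DHIS2 version strings numerically (e.g. '2.38.1' >= '2.38')."""
    def parse(v):
        return [int(part) for part in v.split(".")]
    return parse(version) >= parse(minimum)

# With a live client (sketch):
# info = dhis.api.get("system/info")
# if meets_min_version(info["version"], "2.38"):
#     ...safe to use the new Tracker API...
print(meets_min_version("2.40.1", "2.38"))  # True
```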