---
name: network-meta-analysis
description: Teach network meta-analysis (NMA) for comparing multiple treatments simultaneously. Use when users need to compare more than two interventions, understand indirect comparisons, or create network plots and league tables.
license: Apache-2.0
compatibility: Works with any AI agent capable of statistical reasoning
metadata:
  author: meta-agent
  version: "1.0.0"
  category: statistics
  domain: evidence-synthesis
  difficulty: advanced
  estimated-time: "25 minutes"
  prerequisites:
    - meta-analysis-fundamentals
    - heterogeneity-analysis
---
# Network Meta-Analysis
This skill teaches network meta-analysis (NMA), also known as mixed treatment comparison (MTC), enabling comparison of multiple interventions simultaneously even when direct head-to-head trials don't exist.
## Overview
Network meta-analysis extends traditional pairwise meta-analysis by combining direct and indirect evidence to compare multiple treatments. It's essential for clinical decision-making when choosing among several treatment options.
## When to Use This Skill
Activate this skill when users:
- Need to compare more than 2 treatments
- Ask about indirect comparisons
- Mention "network meta-analysis" or "mixed treatment comparison"
- Want to rank treatments
- Need to create network plots or league tables
- Ask about transitivity or consistency assumptions
## Core Concepts to Teach

### 1. What is Network Meta-Analysis?
Definition: A statistical method that combines direct and indirect evidence to compare multiple treatments within a single analysis.
Key Teaching Points:
- Direct evidence: A vs B from head-to-head trials
- Indirect evidence: A vs B inferred through common comparator C
- Network geometry: How treatments are connected through trials
Socratic Questions:
- "If we have trials comparing A vs C and B vs C, can we learn something about A vs B?"
- "What assumptions must hold for indirect comparisons to be valid?"
- "Why might indirect evidence differ from direct evidence?"
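The indirect comparison behind these questions can be sketched in a few lines of base R using the Bucher adjusted indirect comparison: subtract the pooled effects through the common comparator, and add the variances. The log odds ratios and standard errors below are illustrative values, not results from any real trial.

```r
# Bucher adjusted indirect comparison (illustrative numbers)
te_ac <- -0.40; se_ac <- 0.15   # pooled A vs C: log odds ratio and SE
te_bc <- -0.10; se_bc <- 0.12   # pooled B vs C: log odds ratio and SE

# Indirect A vs B estimate: effects subtract through the common comparator C
te_ab <- te_ac - te_bc
se_ab <- sqrt(se_ac^2 + se_bc^2)   # variances add, so the indirect SE is larger

ci <- te_ab + c(-1.96, 1.96) * se_ab
cat(sprintf("A vs B (indirect): OR = %.2f [%.2f, %.2f]\n",
            exp(te_ab), exp(ci[1]), exp(ci[2])))
# prints: A vs B (indirect): OR = 0.74 [0.51, 1.08]
```

Note that the indirect standard error is larger than either input: indirect evidence is always less precise than direct evidence built from the same amount of data.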
### 2. The Transitivity Assumption
Critical Concept: For indirect comparisons to be valid, studies must be similar enough that patients could have been enrolled in any of them.
Factors to Assess:
- Patient populations (age, severity, comorbidities)
- Intervention definitions (doses, durations)
- Outcome definitions and timing
- Study design and risk of bias
Teaching Framework:

```
Transitivity Check:
┌───────────────────────────────────────┐
│ Could patients in A vs C trials have  │
│ been enrolled in B vs C trials?       │
│                                       │
│ YES → Transitivity likely holds       │
│ NO  → Indirect comparison may be      │
│       biased (effect modification)    │
└───────────────────────────────────────┘
```
### 3. Network Geometry
Network Plot Elements:
- Nodes = treatments (size ∝ sample size)
- Edges = direct comparisons (thickness ∝ number of studies)
- Closed loops = allow consistency checks
Types of Networks:
| Geometry | Description | Implications |
|---|---|---|
| Star | All comparisons to one reference | No consistency checks possible |
| Connected | Multiple paths between treatments | Can assess consistency |
| Disconnected | Separate subnetworks | Cannot compare all treatments |
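The connected/disconnected distinction in the table above can be checked mechanically from the edge list: starting from one treatment, repeatedly add every treatment that shares a direct comparison with one already reached. In practice netmeta's `netconnection()` does this for real data sets; the base-R sketch below, with illustrative treatment names, shows the idea.

```r
# Is every treatment reachable from every other through direct comparisons?
# treat1/treat2 are the two arms of each pairwise comparison in the network.
is_connected <- function(treat1, treat2) {
  nodes <- union(treat1, treat2)
  reached <- nodes[1]
  repeat {
    # add any treatment sharing an edge with an already-reached treatment
    grow <- union(treat2[treat1 %in% reached], treat1[treat2 %in% reached])
    new_reached <- union(reached, grow)
    if (length(new_reached) == length(reached)) break
    reached <- new_reached
  }
  length(reached) == length(nodes)
}

# Star network: everything compared to placebo -> connected
is_connected(c("A", "B", "C"), c("placebo", "placebo", "placebo"))  # TRUE

# Two separate subnetworks (A-B and C-D) -> disconnected
is_connected(c("A", "C"), c("B", "D"))                              # FALSE
```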
### 4. Statistical Models

Frequentist Approach (netmeta package):

```r
library(netmeta)

# Fit the network meta-analysis on contrast-level data
nma <- netmeta(
  TE = effect_size,        # treatment effect (e.g., log odds ratio)
  seTE = standard_error,   # its standard error
  treat1 = treatment1,
  treat2 = treatment2,
  studlab = study_id,
  data = mydata,
  sm = "OR",               # effect measure
  random = TRUE,           # random-effects model
  reference.group = "placebo"
)

# View results
summary(nma)
forest(nma)
netgraph(nma)
```
Bayesian Approach (gemtc/BUGSnet):

```r
library(gemtc)

# Define the network from arm-level data
network <- mtc.network(data.ab = arm_level_data)

# Fit the consistency model via MCMC
model <- mtc.model(network, type = "consistency")
results <- mtc.run(model, n.adapt = 5000, n.iter = 20000)

# Results
summary(results)
forest(relative.effect(results, t1 = "placebo"))
```
### 5. Consistency Assessment
What is Inconsistency?
- Disagreement between direct and indirect evidence
- Suggests violation of transitivity
- Must be assessed in closed loops
Methods to Assess:
- Loop-specific: Compare direct vs indirect in each loop
- Node-splitting: Separate direct and indirect for each comparison
- Global: Design-by-treatment interaction model
R Code for Node-Splitting:

```r
# Node-splitting: separates direct and indirect evidence per comparison
netsplit(nma)

# Interpretation: p < 0.05 suggests inconsistency for that comparison
```
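The loop-specific approach can also be sketched by hand: in a closed loop, compare the direct estimate of a contrast against its Bucher-style indirect estimate with a z-test. All numbers below are illustrative.

```r
# Direct A vs B estimate from head-to-head trials (illustrative)
te_dir <- -0.55; se_dir <- 0.20
# Indirect A vs B estimate via a common comparator (illustrative)
te_ind <- -0.30; se_ind <- 0.19

# Inconsistency factor: disagreement between direct and indirect evidence
incons  <- te_dir - te_ind
se_diff <- sqrt(se_dir^2 + se_ind^2)   # independent sources, variances add
z <- incons / se_diff
p <- 2 * pnorm(-abs(z))

cat(sprintf("IF = %.2f, z = %.2f, p = %.2f\n", incons, z, p))
# A large |IF| with a small p flags inconsistency in this loop
```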
### 6. Ranking Treatments
SUCRA (Surface Under Cumulative Ranking):
- Ranges from 0% to 100%
- Higher = more likely to be best
- Accounts for uncertainty
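SUCRA is simply the average of the cumulative ranking probabilities over the first k-1 ranks. Given a rank-probability vector for one treatment in a network of k treatments, it can be computed directly; the probabilities below are made up for illustration.

```r
# Rank probabilities for one treatment in a 4-treatment network:
# p_rank[j] = probability this treatment is exactly j-th best (sums to 1)
p_rank <- c(0.60, 0.25, 0.10, 0.05)

k <- length(p_rank)
# SUCRA = mean of the cumulative probabilities over ranks 1..k-1
sucra <- sum(cumsum(p_rank)[1:(k - 1)]) / (k - 1)
sucra  # 0.8 -> this treatment sits high on the ranking scale
```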
P-scores (frequentist equivalent):

```r
# Get rankings; small.values = "good" means lower outcome values are better
netrank(nma, small.values = "good")

# SUCRA-like plot
plot(netrank(nma))
```
Caution: Rankings have high uncertainty; always report them alongside the effect estimates and confidence intervals!
### 7. Presenting Results

League Table:

```r
# League table of all pairwise comparisons
netleague(nma, digits = 2)
```

Forest Plot of All Comparisons:

```r
# Forest plot vs reference
forest(nma, reference.group = "placebo")
```

Network Graph:

```r
# Network visualization
netgraph(nma,
         plastic = FALSE,
         thickness = "number.of.studies",
         multiarm = TRUE)
```
## Assessment Questions

1. Basic: "What is the difference between direct and indirect evidence?"
   - Correct: Direct evidence comes from head-to-head trials; indirect evidence is inferred through a common comparator
2. Intermediate: "What is the transitivity assumption and why is it important?"
   - Correct: Studies must be similar enough that indirect comparisons are valid
3. Advanced: "How would you interpret a significant node-splitting test?"
   - Guide: It suggests inconsistency between direct and indirect evidence for that comparison; investigate sources of heterogeneity
## Common Misconceptions

- "NMA always gives better estimates than pairwise MA"
  - Reality: Only if transitivity holds; otherwise NMA can introduce bias
- "Treatment rankings are definitive"
  - Reality: Rankings have high uncertainty; focus on the effect estimates
- "More connections = better network"
  - Reality: Quality of evidence matters more than network complexity
## Example Dialogue
User: "I have 15 trials comparing 5 antidepressants. Some are head-to-head, some vs placebo. How do I analyze this?"
Response Framework:
- Acknowledge NMA is appropriate
- Ask about outcome type and effect measure
- Discuss transitivity assessment
- Guide through network visualization
- Explain consistency checks
- Discuss ranking with appropriate caveats
## References
- Cochrane Handbook Chapter on NMA
- Salanti G. Indirect and mixed-treatment comparison. Lancet 2012
- Rücker G, Schwarzer G. netmeta package documentation
- PRISMA-NMA extension statement
## Adaptation Guidelines
Glass (the teaching agent) MUST adapt this content to the learner:
- Language Detection: Detect the user's language from their messages and respond naturally in that language
- Cultural Context: Adapt examples to local healthcare systems and research contexts when relevant
- Technical Terms: Maintain standard English terms (e.g., "network meta-analysis", "SUCRA", "transitivity") but explain them in the user's language
- Level Adaptation: Adjust complexity based on user's demonstrated knowledge level
- Socratic Method: Ask guiding questions in the detected language to promote deep understanding
- Local Examples: When possible, reference studies or guidelines familiar to the user's region
Example Adaptations:
- 🇧🇷 Portuguese: Use Brazilian clinical guideline examples (CONITEC evaluations)
- 🇪🇸 Spanish: Reference PAHO/OPS treatment recommendations
- 🇨🇳 Chinese: Include examples from Chinese NMA publications
## Related Skills

- meta-analysis-fundamentals - Basic concepts (prerequisite)
- heterogeneity-analysis - Understanding between-study variation
- bayesian-meta-analysis - Alternative modeling approach
- grade-assessment - Rating certainty of NMA evidence
