Research Orchestrator

by Tristan578

workflow

Coordinates academic research workflow - delegates analysis, correlation, writing, and review tasks to specialist agents

Skill Details

Repository Files

1 file in this skill directory


name: research-orchestrator description: Coordinates academic research workflow - delegates analysis, correlation, writing, and review tasks to specialist agents allowed-tools: [Skill, Task, Read, Write, TodoWrite]

Research Orchestrator

You manage research projects from start to finish. You delegate work to specialist agents using the Task and Skill tools, and ensure quality at each stage.

Your Workflow

Stage 1: Extract Data from Papers

Who does it: Use Skill tool to invoke academic-researcher Input: PDF files in papers/ folder Output: results/parsed_papers.json (structured data) Your job: Verify the JSON file has data for all papers

How to delegate:

Use Skill tool with command: "academic-researcher"
The skill will read PDFs and create parsed_papers.json

Stage 2: Calculate Statistics

Who does it: Use Skill tool to invoke academic-researcher again Input: results/parsed_papers.json Output: results/correlation_analysis.json Your job: Verify correlation coefficients are valid (between -1 and 1)

How to delegate:

Use Skill tool with command: "academic-researcher"
Ask it to calculate correlations from parsed data

Stage 3: Write Article Draft

Who does it: Use Skill tool to invoke technical-copywriter Input: results/correlation_analysis.json Output: results/draft_article.md Your job: Verify article has proper structure and citations

How to delegate:

Use Skill tool with command: "technical-copywriter"
The skill will read analysis results and write article

Stage 4: Review Quality

Who does it: Use Skill tool to invoke research-antagonist Input: results/draft_article.md Output: results/review_feedback.json Your job: Check if approved or needs revision

How to delegate:

Use Skill tool with command: "research-antagonist"
The skill will review the draft and provide feedback

If revision needed, go back to Stage 3 and invoke technical-copywriter again. If approved, workflow complete.

When to Escalate to Human

Stop and ask for human help if:

  • Any stage fails 3 times in a row
  • Data extraction returns empty results
  • Statistical calculations produce impossible values (r > 1, p > 1)
  • Review finds critical errors that can't be auto-fixed

Success Criteria

Project complete when:

  • All JSON files exist and have valid data
  • Draft article is complete with citations
  • Antagonist status = "APPROVED"

Related Skills

Dask

Parallel/distributed computing. Scale pandas/NumPy beyond memory, parallel DataFrames/Arrays, multi-file processing, task graphs, for larger-than-RAM datasets and parallel workflows.

workflowdata

Scikit Survival

Comprehensive toolkit for survival analysis and time-to-event modeling in Python using scikit-survival. Use this skill when working with censored survival data, performing time-to-event analysis, fitting Cox models, Random Survival Forests, Gradient Boosting models, or Survival SVMs, evaluating survival predictions with concordance index or Brier score, handling competing risks, or implementing any survival analysis workflow with the scikit-survival library.

workflowtooldata

Polars

Fast DataFrame library (Apache Arrow). Select, filter, group_by, joins, lazy evaluation, CSV/Parquet I/O, expression API, for high-performance data analysis workflows.

workflowapidata

Scikit Survival

Comprehensive toolkit for survival analysis and time-to-event modeling in Python using scikit-survival. Use this skill when working with censored survival data, performing time-to-event analysis, fitting Cox models, Random Survival Forests, Gradient Boosting models, or Survival SVMs, evaluating survival predictions with concordance index or Brier score, handling competing risks, or implementing any survival analysis workflow with the scikit-survival library.

workflowtooldata

Dask

Distributed computing for larger-than-RAM pandas/NumPy workflows. Use when you need to scale existing pandas/NumPy code beyond memory or across clusters. Best for parallel file processing, distributed ML, integration with existing pandas code. For out-of-core analytics on single machine use vaex; for in-memory speed use polars.

codeworkflow

Anndata

Data structure for annotated matrices in single-cell analysis. Use when working with .h5ad files or integrating with the scverse ecosystem. This is the data format skill—for analysis workflows use scanpy; for probabilistic models use scvi-tools; for population-scale queries use cellxgene-census.

workflowtooldata

Matplotlib

Low-level plotting library for full customization. Use when you need fine-grained control over every plot element, creating novel plot types, or integrating with specific scientific workflows. Export to PNG/PDF/SVG for publication. For quick statistical plots use seaborn; for interactive plots use plotly; for publication-ready multi-panel figures with journal styling, use scientific-visualization.

workflow

Dashboard Design

USE THIS SKILL FIRST when user wants to create and design a dashboard, ESPECIALLY Vizro dashboards. This skill enforces a 3-step workflow (requirements, layout, visualization) that must be followed before implementation. For implementation and testing, use the dashboard-build skill after completing Steps 1-3.

designtestingworkflow

Writing Effective Prompts

Structure Claude prompts for clarity and better results using roles, explicit instructions, context, positive framing, and strategic organization. Use when crafting prompts for complex tasks, long documents, tool workflows, or code generation.

codedocumentworkflow

Flowchart Creator

Create HTML flowcharts and process diagrams with decision trees, color-coded stages, arrows, and swimlanes. Use when users request flowcharts, process diagrams, workflow visualizations, or decision trees.

artcodeworkflow

Skill Information

Category:Enterprise
Allowed Tools:[Skill, Task, Read, Write, TodoWrite]
Last Updated:10/22/2025