Pipeline
by johnlindquist
Chain multiple operations together in pipelines. Use for multi-step workflows, combining research with analysis, and complex automated tasks.
Pipeline Orchestration
Chain multiple tools and operations together.
Basic Pipelines
Research → Summarize
# Research a topic then summarize
RESEARCH=$(gemini -m pro -o text -e "" "Research: [topic]. Be comprehensive.")
SUMMARY=$(echo "$RESEARCH" | gemini -m pro -o text -e "" "Summarize this research in 5 bullet points")
echo "$SUMMARY"
Code → Review → Fix
# Read code, get review, apply fixes
CODE=$(cat src/module.ts)
REVIEW=$(echo "$CODE" | gemini -m pro -o text -e "" "Review this code for issues")
# printf is used because plain echo does not expand \n in bash
FIXES=$(printf '%s\n\nReview:\n%s\n' "$CODE" "$REVIEW" | gemini -m pro -o text -e "" "Provide fixed code")
echo "$FIXES"
Multi-Agent Pipeline
# Get perspectives from multiple agents
QUESTION="Best approach for state management in React?"
CLAUDE=$(claude --print "$QUESTION" 2>/dev/null)
GEMINI=$(gemini -m pro -o text -e "" "$QUESTION")
SYNTHESIS=$(gemini -m pro -o text -e "" "Synthesize these perspectives:
Claude: $CLAUDE
Gemini: $GEMINI
Provide a unified recommendation.")
Pipeline Patterns
Transform Chain
#!/bin/bash
# transform.sh - Chain of transformations
INPUT=$1
# Step 1: Extract
EXTRACTED=$(echo "$INPUT" | gemini -m pro -o text -e "" "Extract key points")
# Step 2: Structure
STRUCTURED=$(echo "$EXTRACTED" | gemini -m pro -o text -e "" "Organize as JSON")
# Step 3: Validate
VALIDATED=$(echo "$STRUCTURED" | gemini -m pro -o text -e "" "Validate and fix any JSON issues")
echo "$VALIDATED"
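Asking a model to validate JSON is itself unreliable; the Step 3 check can be made deterministic with a local tool instead. A minimal sketch using python3's standard json module as the validator (the `STRUCTURED` value here is a stand-in for the Step 2 model output):

```shell
#!/bin/bash
# validate-json.sh - deterministic JSON check instead of another model call
STRUCTURED='{"points": ["a", "b"]}'   # stand-in for the Step 2 output

if echo "$STRUCTURED" | python3 -c 'import json,sys; json.load(sys.stdin)' 2>/dev/null; then
  echo "valid JSON"
else
  # Only fall back to the model when the JSON is actually broken
  echo "invalid JSON; send back to the model for repair" >&2
fi
```

This keeps the model in the loop only for repair, not for the yes/no check.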
Conditional Pipeline
#!/bin/bash
# conditional.sh - Branch based on analysis
INPUT=$1
# Analyze type (lowercase the answer so the case patterns match reliably)
TYPE=$(echo "$INPUT" | gemini -m pro -o text -e "" "Is this a bug report, feature request, or question? Answer with one word." | tr '[:upper:]' '[:lower:]')
case "$TYPE" in
bug*)
gemini -m pro -o text -e "" "Analyze this bug report and suggest debugging steps: $INPUT"
;;
feature*)
gemini -m pro -o text -e "" "Break down this feature request into tasks: $INPUT"
;;
question*)
gemini -m pro -o text -e "" "Answer this question: $INPUT"
;;
*)
echo "Unrecognized type: $TYPE" >&2
exit 1
;;
esac
Parallel Pipeline
#!/bin/bash
# parallel.sh - Run analysis in parallel
INPUT=$1
# Run in parallel
echo "$INPUT" | gemini -m pro -o text -e "" "Technical analysis" > /tmp/technical.txt &
echo "$INPUT" | gemini -m pro -o text -e "" "Business analysis" > /tmp/business.txt &
echo "$INPUT" | gemini -m pro -o text -e "" "Risk analysis" > /tmp/risk.txt &
wait
# Combine results
gemini -m pro -o text -e "" "Combine these analyses:
Technical:
$(cat /tmp/technical.txt)
Business:
$(cat /tmp/business.txt)
Risk:
$(cat /tmp/risk.txt)
Provide integrated recommendation."
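The fixed /tmp filenames above can collide if two pipeline runs overlap. A safer sketch of the same fan-out/fan-in shape using a per-run mktemp directory with trap-based cleanup (the echo commands are stand-ins for the real gemini calls):

```shell
#!/bin/bash
# parallel-safe.sh - fan-out/fan-in with collision-free temp files
set -euo pipefail

INPUT="some input text"

# Per-run scratch directory, removed automatically when the script exits
WORKDIR=$(mktemp -d)
trap 'rm -rf "$WORKDIR"' EXIT

# Stand-ins for the three gemini calls; replace echo with the real commands
echo "technical: $INPUT" > "$WORKDIR/technical.txt" &
echo "business: $INPUT"  > "$WORKDIR/business.txt" &
echo "risk: $INPUT"      > "$WORKDIR/risk.txt" &
wait

# Fan-in: all three results are now safe to read
cat "$WORKDIR/technical.txt" "$WORKDIR/business.txt" "$WORKDIR/risk.txt"
```

The trap guarantees cleanup even if a step fails partway through.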
Common Pipelines
Code Review Pipeline
#!/bin/bash
# code-review.sh FILE
FILE=$1
CODE=$(cat "$FILE")
# Step 1: Static analysis
echo "=== Linting ===" > /tmp/review.txt
npx eslint "$FILE" >> /tmp/review.txt 2>&1
# Step 2: Type check
echo "" >> /tmp/review.txt
echo "=== Type Check ===" >> /tmp/review.txt
npx tsc --noEmit "$FILE" >> /tmp/review.txt 2>&1
# Step 3: AI review
echo "" >> /tmp/review.txt
echo "=== AI Review ===" >> /tmp/review.txt
gemini -m pro -o text -e "" "Review this code:
$CODE
Check for:
- Bugs
- Security issues
- Performance problems
- Best practices violations" >> /tmp/review.txt
cat /tmp/review.txt
Documentation Pipeline
#!/bin/bash
# document.sh FILE
FILE=$1
CODE=$(cat "$FILE")
# Generate docs
DOCS=$(gemini -m pro -o text -e "" "Generate documentation for:
$CODE
Include:
- Overview
- Function descriptions
- Parameter docs
- Examples")
# Generate README section
README=$(echo "$DOCS" | gemini -m pro -o text -e "" "Convert to README.md format")
# Generate inline comments
COMMENTED=$(gemini -m pro -o text -e "" "Add JSDoc comments to:
$CODE")
echo "=== Documentation ==="
echo "$DOCS"
echo ""
echo "=== Commented Code ==="
echo "$COMMENTED"
Research Pipeline
#!/bin/bash
# research.sh TOPIC
TOPIC=$1
# Step 1: Initial research
echo "Researching: $TOPIC"
INITIAL=$(gemini -m pro -o text -e "" "Research: $TOPIC. Focus on practical aspects.")
# Step 2: Find gaps
GAPS=$(echo "$INITIAL" | gemini -m pro -o text -e "" "What questions remain unanswered?")
# Step 3: Fill gaps
FOLLOWUP=$(echo "$GAPS" | gemini -m pro -o text -e "" "Answer these remaining questions about $TOPIC")
# Step 4: Synthesize
gemini -m pro -o text -e "" "Create comprehensive summary:
Initial Research:
$INITIAL
Follow-up:
$FOLLOWUP
Provide:
1. Key findings
2. Recommendations
3. Next steps"
Error Handling
With Retry
#!/bin/bash
# retry-pipeline.sh
retry() {
local n=1
local max=3
local delay=2
while true; do
"$@" && return 0
if [[ $n -lt $max ]]; then
((n++))
echo "Retry $n/$max in ${delay}s..."
sleep $delay
else
return 1
fi
done
}
# Use in pipeline
retry gemini -m pro -o text -e "" "Your prompt"
With Fallback
#!/bin/bash
# fallback-pipeline.sh
# Try Claude, fallback to Gemini
result=$(claude --print "Question" 2>/dev/null) && [ -n "$result" ] || \
result=$(gemini -m pro -o text -e "" "Question")
echo "$result"
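The two-command fallback generalizes to an ordered list of backends. A sketch under stated assumptions: ask_claude and ask_gemini are hypothetical wrappers you would define around the real CLIs (here they are simulated so the shape of the loop is clear):

```shell
#!/bin/bash
# fallback-chain.sh - try each backend in order, use the first non-empty answer

# Hypothetical wrappers; in practice these would call claude / gemini
ask_claude() { return 1; }                 # simulates an unavailable backend
ask_gemini() { echo "gemini answer: $1"; } # simulates a working backend

ask() {
  local prompt=$1 backend result
  for backend in ask_claude ask_gemini; do
    # Accept the answer only if the command succeeds AND returns output
    if result=$("$backend" "$prompt" 2>/dev/null) && [ -n "$result" ]; then
      echo "$result"
      return 0
    fi
  done
  echo "all backends failed" >&2
  return 1
}

ask "Question"
```

Adding a third backend is then a one-word change to the for loop.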
Best Practices
- Save intermediate results: failed steps are much easier to debug
- Add timeouts: prevent a hung CLI call from stalling the whole pipeline
- Handle errors: check return codes after each step
- Log progress: long pipelines are hard to track otherwise
- Test incrementally: verify each step works before chaining it
- Use temp files for complex or multi-line intermediate data
- Clean up: remove temp files when done (a trap on EXIT handles this automatically)
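Several of these practices combine naturally into one step wrapper. A sketch assuming GNU coreutils `timeout` is available; the echo step at the bottom is a stand-in for a real gemini or claude call:

```shell
#!/bin/bash
# step.sh - run one pipeline step with a timeout, logging, and saved output
set -uo pipefail

WORKDIR=$(mktemp -d)
trap 'rm -rf "$WORKDIR"' EXIT   # clean up temp files on exit

run_step() {
  local name=$1; shift
  echo "[$(date +%T)] start: $name" >&2
  # 60s cap so a hung CLI call cannot stall the whole pipeline
  if timeout 60 "$@" > "$WORKDIR/$name.out" 2> "$WORKDIR/$name.err"; then
    echo "[$(date +%T)] ok: $name" >&2
  else
    echo "[$(date +%T)] FAILED: $name (see $WORKDIR/$name.err)" >&2
    return 1
  fi
}

# Stand-in step; replace echo with e.g. gemini -m pro -o text -e "" "..."
run_step extract echo "extracted points"
cat "$WORKDIR/extract.out"
```

Each step's stdout and stderr land in named temp files, so a failed pipeline leaves a trail you can inspect.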