AI Report Export
by skyasu2
```yaml
---
name: ai-report-export
description: Automated 2-AI verification result documentation and export workflow. Triggers when user requests AI verification report export, documentation of findings, or saving cross-validation results. Use after completing Codex/Gemini analysis.
version: v2.0.0
user-invocable: true
allowed-tools: Bash, Read, Write
---
```
# AI Verification Report Export

**Target Token Efficiency**: 78% reduction (450 tokens → 99 tokens)
## Purpose

Automated 2-AI verification result documentation without manual formatting or file organization.
## Trigger Keywords

- "export AI report"
- "document findings"
- "save verification results"
- "AI 검증 결과" (AI verification results)
- "2-AI 결과" (2-AI results)
## Context

- Project: OpenManager VIBE v5.85.0
- AI Tools: Codex, Gemini (2-AI cross-verification)
- Output Location: logs/ai-decisions/
- Note: Qwen removed (2026-01-07) due to an average response time of 201 s and a 13.3% failure rate
## Workflow

### 1. Identify AI Outputs

Required Information:

- Codex analysis (practical review)
- Gemini review (architecture review)
- Task/feature being verified
- Verification timestamp

Sources:

```
/tmp/codex.txt   (from codex-wrapper.sh)
/tmp/gemini.txt  (from gemini-wrapper.sh)
```
### 2. Parse AI Responses

Extract Key Data:

- Each AI's score (X/10)
- Key findings (3-5 points each)
- Consensus points (agreements)
- Divergent points (disagreements)
- Recommended actions

Automated Parsing:

```bash
# Parse scores from AI outputs
CODEX_SCORE=$(awk '/score:|점수:/ {print $NF}' /tmp/codex.txt 2>/dev/null | grep -oE '[0-9]+\.[0-9]+|[0-9]+' | head -1)
GEMINI_SCORE=$(awk '/score:|점수:/ {print $NF}' /tmp/gemini.txt 2>/dev/null | grep -oE '[0-9]+\.[0-9]+|[0-9]+' | head -1)

# Calculate average score (2-AI)
if [ -n "$CODEX_SCORE" ] && [ -n "$GEMINI_SCORE" ]; then
  AVERAGE_SCORE=$(echo "scale=1; ($CODEX_SCORE + $GEMINI_SCORE) / 2" | bc 2>/dev/null || echo "0")
else
  AVERAGE_SCORE="N/A"
  echo "⚠️ WARNING: Unable to parse all AI scores"
fi
```
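As a quick sanity check, the same extraction pipeline can be run against an inline sample; the sample text below is illustrative, not a real Codex output:

```shell
#!/usr/bin/env bash
# Sketch: the score-extraction pipeline from above, applied to a sample
# output instead of /tmp/codex.txt. The sample text is an assumption.
sample='Overall assessment of the change
score: 8.5
Key findings follow...'

SCORE=$(printf '%s\n' "$sample" \
  | awk '/score:|점수:/ {print $NF}' \
  | grep -oE '[0-9]+\.[0-9]+|[0-9]+' \
  | head -1)

echo "$SCORE"   # prints 8.5
```

The `awk` pattern grabs the last field of any line mentioning `score:` (or the Korean `점수:`), and `grep -oE` then strips any trailing punctuation so only the number survives.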
Status Determination:

```bash
# Threshold-based approval logic
if [ "$AVERAGE_SCORE" != "N/A" ]; then
  if (( $(echo "$AVERAGE_SCORE >= 9.0" | bc -l) )); then
    STATUS="✅ APPROVED"
  elif (( $(echo "$AVERAGE_SCORE >= 8.0" | bc -l) )); then
    STATUS="⚠️ CONDITIONALLY APPROVED"
  elif (( $(echo "$AVERAGE_SCORE >= 7.0" | bc -l) )); then
    STATUS="🔄 NEEDS REVISION"
  else
    STATUS="❌ REJECTED"
  fi
else
  STATUS="⚠️ INCOMPLETE"
fi
```
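The same thresholds can be wrapped in a small helper for reuse; `determine_status` is a hypothetical function name, not part of the skill:

```shell
#!/usr/bin/env bash
# Sketch: the threshold logic as a reusable function (hypothetical helper;
# requires bc, like the snippet above).
determine_status() {
  local avg="$1"
  if [ "$avg" = "N/A" ]; then
    echo "⚠️ INCOMPLETE"
    return
  fi
  if (( $(echo "$avg >= 9.0" | bc -l) )); then
    echo "✅ APPROVED"
  elif (( $(echo "$avg >= 8.0" | bc -l) )); then
    echo "⚠️ CONDITIONALLY APPROVED"
  elif (( $(echo "$avg >= 7.0" | bc -l) )); then
    echo "🔄 NEEDS REVISION"
  else
    echo "❌ REJECTED"
  fi
}

determine_status 8.5   # prints ⚠️ CONDITIONALLY APPROVED
```

Note that `bc -l` prints `1` or `0` for comparisons, which bash's `(( ... ))` then treats as true or false; floating-point comparison is not safe with plain `[ -gt ]`.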
Template Structure:

```markdown
# [Task Name] - 2-AI Verification

**Date**: YYYY-MM-DD HH:mm KST
**Status**: [APPROVED / CONDITIONALLY APPROVED / REJECTED]

## Scores

- Codex (practical): X.X/10
- Gemini (architecture): X.X/10
- **Average**: X.X/10

## Key Findings

### Codex (practical review)
- Finding 1
- Finding 2
- Finding 3

### Gemini (architecture review)
- Finding 1
- Finding 2
- Finding 3

## Recommended Actions

1. Priority 1: [Action]
2. Priority 2: [Action]
```

### 3. Generate Report File

Filename Convention:

```
logs/ai-decisions/YYYY-MM-DD-{task-slug}.md
```
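One way to produce `{task-slug}` is a lowercase-and-hyphenate rule; the slug rule and the `TASK_NAME` value below are assumptions, since the skill only fixes the filename pattern:

```shell
#!/usr/bin/env bash
# Sketch: derive {task-slug} and the report path from a task name.
# The slug rule (lowercase, runs of non-alphanumerics collapsed to "-")
# is an assumption; the skill only specifies YYYY-MM-DD-{task-slug}.md.
TASK_NAME="API Rate Limiting Fix"

TASK_SLUG=$(printf '%s' "$TASK_NAME" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-//; s/-$//')

REPORT_FILE="logs/ai-decisions/$(date +%F)-${TASK_SLUG}.md"
echo "$REPORT_FILE"   # e.g. logs/ai-decisions/2026-01-10-api-rate-limiting-fix.md
```

`date +%F` emits the ISO `YYYY-MM-DD` form the convention calls for, and `tr -cs` both replaces and squeezes every non-alphanumeric run into a single hyphen.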
### 4. Validation

```bash
# Check file existence before parsing
MISSING=""
[ ! -f /tmp/codex.txt ] && MISSING="${MISSING}codex "
[ ! -f /tmp/gemini.txt ] && MISSING="${MISSING}gemini "

if [ -n "$MISSING" ]; then
  echo "⚠️ WARNING: Missing AI outputs: $MISSING"
  exit 1
fi
```
### 5. Report Summary

```
📝 AI Verification Report Exported

📊 Summary:
├─ Task: [Task Name]
├─ Average Score: X.X/10
├─ Status: [APPROVED / CONDITIONAL / REJECTED]
└─ File: logs/ai-decisions/YYYY-MM-DD-{task-slug}.md
```

✅ Next Steps:

- Review consensus points
- Implement recommended actions
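The summary box can be printed directly from variables computed in the earlier steps; the concrete values below are illustrative placeholders:

```shell
#!/usr/bin/env bash
# Sketch: emit the summary box from previously computed variables.
# All values here are placeholders, not real verification results.
TASK_NAME="API Rate Limiting Fix"
AVERAGE_SCORE="8.5"
STATUS="⚠️ CONDITIONALLY APPROVED"
REPORT_FILE="logs/ai-decisions/2026-01-10-api-rate-limiting-fix.md"

cat <<EOF
📝 AI Verification Report Exported

📊 Summary:
├─ Task: ${TASK_NAME}
├─ Average Score: ${AVERAGE_SCORE}/10
├─ Status: ${STATUS}
└─ File: ${REPORT_FILE}
EOF
```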
## Success Criteria

- Report generated: < 2 min
- Both AI outputs included: 100%
- Markdown formatting valid: ✅
- Filename convention followed: ✅
## Changelog

- 2026-01-10: v2.0.0 - Migrated to the 2-AI system (Qwen removed)
- 2025-11-04: v1.1.0 - Initial implementation