---
name: ecir-report-analyzer
description: Analyze completed ECIR (Engineering Change Impact Report) Excel files to extract insights, identify trends, and detect patterns across single or multiple reports. Use when the user asks to analyze ECIR reports, find trends in ECIRs, compare multiple ECIRs, identify cost variance patterns, or generate insights from completed ECIR Excel files. This skill works with the output files from the ECIR Advanced Tool.
---

# ECIR Report Analyzer

by CBoser
Analyze completed ECIR Excel reports to extract insights, identify trends, and detect patterns.
## When to Use This Skill

Use this skill when the user:
- Asks to analyze an ECIR report
- Wants to understand cost impacts from a completed ECIR
- Needs to compare multiple ECIRs to find trends
- Requests insights or patterns across engineering changes
- Wants to identify high-variance items or categories
- Asks for a summary of change types in an ECIR
## Prerequisites

ECIR Excel files must be generated by the ECIR Advanced Tool and contain these sheets:
- Header (metadata and summary metrics)
- Summary (category-level rollup)
- Detail_All (item-by-item comparison)
- Detail_Changes (filtered changed items only)
## Core Capabilities

### 1. Single ECIR Analysis

Analyze one ECIR report to extract:
- Header metrics: Cost impacts, change reasons, affected models
- Status distribution: Breakdown of Added/Deleted/Modified items
- Category impacts: Which categories drove cost variance
- Outliers: Items with unusually large variances
- Insights: Human-readable key findings
Script: `scripts/analyze_ecir.py`
### 2. Multi-ECIR Trend Analysis

Compare multiple ECIRs to identify:
- Cost variance trends: Mean/median/min/max across all reports
- Change reason frequency: Most common reasons for changes
- Aggregate status distribution: Overall pattern of change types
- Supplier changes: Track vendor transitions
- High-variance ECIRs: Reports exceeding variance thresholds
- Common patterns: Recurring change types and impacts
Script: `scripts/compare_ecirs.py`
## Usage Instructions

### Analyzing a Single ECIR

When the user provides an ECIR Excel file:

1. Use the `bash_tool` to run `analyze_ecir.py`:

   ```
   python scripts/analyze_ecir.py path/to/ecir.xlsx
   ```

2. For JSON output (for programmatic use):

   ```
   python scripts/analyze_ecir.py path/to/ecir.xlsx --format json
   ```

3. To save the analysis to a file:

   ```
   python scripts/analyze_ecir.py path/to/ecir.xlsx --output analysis.txt
   ```

4. Parse the output and present key insights to the user in natural language.
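When the JSON route is taken, the invocation and parsing can be sketched as below. This assumes `--format json` prints a single JSON object to stdout whose `header_metrics` mapping includes `direct_variance_dollars` (a key used elsewhere in this skill); `direct_variance_pct` is a purely illustrative key, not confirmed against the script's real output:

```python
import json
import subprocess

def run_analysis(xlsx_path: str) -> dict:
    """Run analyze_ecir.py in JSON mode and return the parsed report."""
    proc = subprocess.run(
        ["python", "scripts/analyze_ecir.py", xlsx_path, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

def key_metrics(report: dict) -> dict:
    """Pull a few headline numbers, tolerating missing keys."""
    header = report.get("header_metrics", {})
    return {
        "variance_dollars": header.get("direct_variance_dollars"),
        "variance_pct": header.get("direct_variance_pct"),  # illustrative key
    }
```

Treat the key names as placeholders until confirmed against actual script output.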
### Comparing Multiple ECIRs

When the user wants trend analysis:

1. Use the `bash_tool` to run `compare_ecirs.py`:

   ```
   python scripts/compare_ecirs.py ecir1.xlsx ecir2.xlsx ecir3.xlsx
   ```

2. For JSON output:

   ```
   python scripts/compare_ecirs.py *.xlsx --format json
   ```

3. To save the comparison:

   ```
   python scripts/compare_ecirs.py *.xlsx --output trends.txt
   ```

4. Summarize trends and patterns for the user.
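The mean/median/min/max cost-variance trend this script reports can be approximated from already-parsed JSON reports with the standard `statistics` module; the report shape and key names below are assumptions carried over from the single-report output, not the script's actual schema:

```python
from statistics import mean, median

def variance_trend(reports: list[dict]) -> dict:
    """Summarize direct dollar variance across parsed ECIR reports."""
    values = [r["header_metrics"]["direct_variance_dollars"] for r in reports]
    return {
        "mean": mean(values),
        "median": median(values),
        "min": min(values),
        "max": max(values),
    }
```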
### Working with User-Uploaded ECIRs

When the user uploads ECIR files:

1. The files will be in `/mnt/user-data/uploads/`
2. Use the `view` tool to list available files
3. Run the analysis scripts on the uploaded files
4. Present results in a user-friendly format
## Output Interpretation

### Text Format Output

The text format provides a human-readable report with these sections:
- Insights: Key findings with emoji indicators
- Cost Summary: Before/after costs and variances
- Change Distribution: Breakdown by status type
- Top Cost Impacts: Categories with largest variances
### JSON Format Output

Use JSON format when you need to:
- Programmatically process results
- Extract specific metrics
- Combine with other data
- Generate custom visualizations
Access metrics via:

```python
import json

# `result` is the stdout captured from a --format json run
data = json.loads(result)
variance = data['header_metrics']['direct_variance_dollars']
```
## Analysis Reference

For a detailed explanation of metrics and insights, read `references/analysis_guide.md`, which covers:
- All metric definitions
- Insight trigger conditions
- Interpretation guidelines
- Warning signs vs healthy patterns
- Example workflows
Read this reference when:
- The user asks what a specific metric means
- You need to explain an insight
- You want to understand threshold values
- You are planning analysis workflows
## Common Tasks

### Task: "What are the key findings from this ECIR?"

1. Run `analyze_ecir.py` on the file
2. Extract the insights section
3. Present insights with explanations
4. Highlight any concerning variances
### Task: "Compare these 5 ECIRs and tell me what patterns you see"

1. Run `compare_ecirs.py` on all files
2. Review the trend statistics
3. Identify recurring patterns
4. Highlight high-variance ECIRs
5. Summarize findings
### Task: "Which categories have the highest cost increases?"

1. Run `analyze_ecir.py` with JSON format
2. Parse `category_impacts.top_cost_increases`
3. Present the top categories with specific dollar amounts
4. Explain the percentages
### Task: "Show me all ECIRs with variance over 5%"

1. Run `compare_ecirs.py` on all available ECIRs
2. Parse `patterns.high_variance_ecirs`
3. List the ECIRs exceeding the threshold
4. Provide context on each
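The filtering behind this task amounts to a threshold check. The sketch below assumes a simplified mapping from report name to variance percentage rather than the script's actual `patterns.high_variance_ecirs` structure:

```python
def high_variance(reports: dict[str, float], threshold_pct: float = 5.0) -> list[str]:
    """Return report names whose absolute variance exceeds the threshold."""
    return sorted(
        name for name, pct in reports.items() if abs(pct) > threshold_pct
    )
```

The absolute value matters because a large cost decrease is as noteworthy as a large increase.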
## Insight Generation

The analyzer generates automatic insights with emoji indicators:
- ⚠️ Major cost impact: Variance >5%
- ✓ Minor cost impact: Variance <5%
- 📝 High spec change rate: >30% Modified-Spec
- ➕ Scope increase: >20% Added items
- ➖ Scope reduction: >20% Deleted items
- 📊 Largest increase: Top category impact
- 🔍 Change reason: Why change occurred
Present these insights conversationally, explaining what they mean for the user.
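A sketch of the trigger logic implied by the list above. The thresholds match the document; the function signature, status-key names, and message wording are illustrative assumptions, not the analyzer's actual code:

```python
def generate_insights(variance_pct: float, status_pct: dict[str, float]) -> list[str]:
    """Return emoji-tagged insight strings for one ECIR."""
    insights = []
    # Variance >5% is the major/minor cost-impact cutoff.
    if abs(variance_pct) > 5.0:
        insights.append(f"⚠️ Major cost impact: {variance_pct:+.1f}% variance")
    else:
        insights.append(f"✓ Minor cost impact: {variance_pct:+.1f}% variance")
    # Status-distribution triggers from the thresholds listed above.
    if status_pct.get("Modified-Spec", 0.0) > 30.0:
        insights.append("📝 High spec change rate")
    if status_pct.get("Added", 0.0) > 20.0:
        insights.append("➕ Scope increase")
    if status_pct.get("Deleted", 0.0) > 20.0:
        insights.append("➖ Scope reduction")
    return insights
```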
## Integration with ECIR Tool Workflow

This analyzer is designed to work with the ECIR Advanced Tool.

Standard workflow:

1. User runs the ECIR tool to compare before/after BOMs
2. ECIR tool generates an Excel report
3. User uploads the Excel report for analysis
4. This skill extracts insights and patterns
5. User makes decisions based on the analysis
Advanced workflow (future):
- Automatic triggering after ECIR generation
- Pattern library building from analysis
- Predictive cost impact modeling
- Integration with approval workflow
## Error Handling

If analysis fails:
- Check that the file is valid ECIR Excel output
- Verify all required sheets exist (Header, Summary, Detail_All)
- Confirm the file isn't corrupted
- Check that pandas/openpyxl are available
Common errors:
- Missing sheets: File may not be from ECIR tool
- Parsing errors: Excel file may be corrupted
- Empty data: ECIR may have been generated incorrectly
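A defensive pre-check for the missing-sheets case can be factored out as below; in practice the sheet names would come from `pandas.ExcelFile(path).sheet_names`, and the required set mirrors the Prerequisites list:

```python
# Sheets an ECIR workbook must contain, per the Prerequisites section.
REQUIRED_SHEETS = {"Header", "Summary", "Detail_All", "Detail_Changes"}

def missing_sheets(sheet_names: list[str]) -> set[str]:
    """Return required sheets absent from the workbook; empty set means valid."""
    return REQUIRED_SHEETS - set(sheet_names)
```

A non-empty result is a strong hint the file was not produced by the ECIR tool.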
## Dependencies

The analysis scripts require:
- Python 3.9+
- pandas
- openpyxl (for Excel reading)
These should already be available if the ECIR tool is installed.
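A quick, standard-library-only way to confirm those imports resolve before invoking the scripts:

```python
import importlib.util

def missing_deps(packages: tuple[str, ...] = ("pandas", "openpyxl")) -> list[str]:
    """Return the packages that cannot be found on this interpreter."""
    return [p for p in packages if importlib.util.find_spec(p) is None]
```

`find_spec` checks availability without importing, so this stays cheap even when the packages are large.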
## Examples

### Example 1: Quick Analysis

User: "Analyze this ECIR report and tell me the key findings"

Response:
- Run `analyze_ecir.py`
- Present: "I've analyzed ECIR-2024-045. Key findings: ⚠️ Major cost impact with 7.3% increase ($12,456.78). 📝 High specification change rate: 42% of items had spec changes, indicating significant product substitutions. 📊 Largest cost increase in Glulams: $8,234.56 (18.7%). The change was due to Supplier_Change from Green Mountain to Pliris."
### Example 2: Trend Analysis

User: "Compare all my ECIRs from last month and show me patterns"

Response:
- Run `compare_ecirs.py` on all files
- Present: "I analyzed 8 ECIRs from last month. Average cost variance was 4.2% ($8,932.14). Top change reason: Supplier_Change (5 ECIRs). The most common change type was Modified-Spec (38% of all changes). 3 ECIRs exceeded 5% variance threshold and need executive review. There's a pattern of Glulam category driving high variance across multiple reports."
### Example 3: Category Deep Dive

User: "Which categories are causing the most cost increases?"

Response:
- Run `analyze_ecir.py` with JSON output
- Parse category data
- Present: "Top 3 categories by cost increase: 1) Glulams: $8,234.56 (18.7%), 2) Hardware: $2,156.32 (5.3%), 3) Strapping: $1,432.11 (12.4%). Glulams show the largest absolute dollar increase, while Strapping has the highest percentage increase despite lower absolute value."
## Best Practices

- Always run analysis before interpreting: don't guess at metrics; use the scripts
- Provide context: Explain what metrics mean, not just values
- Highlight concerning patterns: Draw user attention to high variances
- Compare to baselines: Use trend analysis to show if current ECIR is typical
- Suggest actions: Based on insights, recommend next steps
## Future Enhancements
As the ECIR platform evolves, this analyzer will support:
- Real-time analysis during ECIR creation
- Predictive variance modeling
- Automatic pattern library building
- Integration with approval workflows
- Dashboard visualization generation
- Machine learning for variance prediction
This skill is a stepping stone toward the learning-first platform vision.
