Fairness Bias Auditor
by AwsomeTyper
Evaluates machine learning models for demographic bias using the fairlearn library. Use this skill immediately after training any predictive model.
Fairness & Bias Auditor
Automated decision systems in education can perpetuate inequality. Run this audit after training any predictive model.
Audit Protocol
1. Metric Calculation
Use fairlearn to calculate:
- Demographic Parity Difference: Selection rates across groups
- Equalized Odds: True positive/false positive rates across groups
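The two metrics above can be sketched by hand to show what fairlearn computes. This is a minimal illustration on toy labels and predictions, not a replacement for the library itself:

```python
# Hand-rolled sketch of the two audit metrics, using toy data.
# In real audits, use fairlearn's implementations instead.

def selection_rates(y_pred, groups):
    """Selection rate (share of positive predictions) per group."""
    return {
        g: sum(p for p, grp in zip(y_pred, groups) if grp == g)
           / sum(1 for grp in groups if grp == g)
        for g in set(groups)
    }

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rates across groups."""
    rates = selection_rates(y_pred, groups).values()
    return max(rates) - min(rates)

def true_positive_rates(y_true, y_pred, groups):
    """TPR per group -- one half of the equalized-odds comparison."""
    tprs = {}
    for g in set(groups):
        pos = [p for t, p, grp in zip(y_true, y_pred, groups)
               if grp == g and t == 1]
        tprs[g] = sum(pos) / len(pos) if pos else 0.0
    return tprs

# Toy data: group "a" is selected three times as often as group "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

dpd = demographic_parity_difference(y_pred, groups)  # 0.75 - 0.25 = 0.5
```

A demographic parity difference of 0 means identical selection rates; equalized odds additionally requires the per-group TPRs (and FPRs) to match.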
2. Protected Attributes
For higher education analytics, use:
- PCTPELL (Pell Grant rate) as a socioeconomic proxy
- Racial demographics where available
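Since PCTPELL is a continuous rate, one way to turn it into a sensitive-feature column is to binarize it. The 0.5 cutoff and the example values below are illustrative assumptions, not a standard:

```python
# Derive a binary socioeconomic group from PCTPELL (share of Pell Grant
# recipients). The 0.5 threshold is an illustrative assumption.
def pell_group(pctpell, threshold=0.5):
    return "high_pell" if pctpell >= threshold else "low_pell"

# Hypothetical institution-level PCTPELL values.
sensitive_group = [pell_group(p) for p in [0.31, 0.62, 0.48, 0.75]]
```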
3. Four-Fifths Rule
If selection rate ratio between privileged and unprivileged groups < 0.8, flag as violation.
```python
from fairlearn.metrics import demographic_parity_ratio

ratio = demographic_parity_ratio(
    y_true,
    y_pred,
    sensitive_features=sensitive_group,
)
if ratio < 0.8:
    print("WARNING: Four-fifths rule violation detected")
```
Visualization Requirements
- Generate disparity plots using a colorblind-safe color palette
- Always include the audit results visibly in deliverables
- Document any mitigations applied (e.g., sample reweighting)
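The requirements above can be met with a simple per-group bar chart. This sketch assumes matplotlib is installed; the group names and rates are toy values standing in for actual audit output:

```python
# Sketch of a per-group disparity plot; assumes matplotlib is available.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

selection_rates = {"low_pell": 0.62, "high_pell": 0.41}  # hypothetical audit output

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(selection_rates.keys(), selection_rates.values(),
       color=["#0173B2", "#DE8F05"])  # colorblind-safe hues
ax.axhline(0.8 * max(selection_rates.values()), linestyle="--",
           color="gray", label="four-fifths threshold")
ax.set_ylabel("Selection rate")
ax.set_title("Selection rate by group")
ax.legend()
fig.tight_layout()
fig.savefig("disparity_plot.png")
```

Here the dashed line marks 80% of the highest group's selection rate, so any bar below it signals a four-fifths violation at a glance.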
Remediation Options
If bias detected:
- Adjust sample weights to achieve demographic parity
- Use `fairlearn.reductions.ExponentiatedGradient` for constrained optimization
- Document the trade-off between accuracy and fairness
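One concrete form of the sample-weight adjustment above is Kamiran-Calders reweighing, where each example gets weight w(g, y) = P(g)P(y) / P(g, y) so that group membership and label become independent under the weights. A sketch on toy data (in a real audit these weights would be passed to the model's fit step):

```python
# Sketch of Kamiran-Calders reweighing: w(g, y) = P(g) * P(y) / P(g, y).
# Toy data; real audits would feed these weights to the model's fit().
from collections import Counter

def reweighing_weights(y, groups):
    n = len(y)
    count_g = Counter(groups)
    count_y = Counter(y)
    count_gy = Counter(zip(groups, y))
    return [
        (count_g[g] * count_y[t]) / (n * count_gy[(g, t)])
        for g, t in zip(groups, y)
    ]

y      = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
weights = reweighing_weights(y, groups)

def weighted_positive_rate(y, groups, weights, g):
    """Weighted P(y=1 | group=g); equal across groups after reweighing."""
    num = sum(w for t, grp, w in zip(y, groups, weights) if grp == g and t == 1)
    den = sum(w for grp, w in zip(groups, weights) if grp == g)
    return num / den
```

Underrepresented (group, label) combinations are upweighted and overrepresented ones downweighted, which equalizes the weighted positive rate across groups at the cost of some accuracy on the original distribution.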
Related Skills
Attack Tree Construction
Build comprehensive attack trees to visualize threat paths. Use when mapping attack scenarios, identifying defense gaps, or communicating security risks to stakeholders.
Grafana Dashboards
Create and manage production Grafana dashboards for real-time visualization of system and application metrics. Use when building monitoring dashboards, visualizing metrics, or creating operational observability interfaces.
Matplotlib
Foundational plotting library. Create line plots, scatter, bar, histograms, heatmaps, 3D, subplots, export PNG/PDF/SVG, for scientific visualization and publication figures.
Scientific Visualization
Create publication figures with matplotlib/seaborn/plotly. Multi-panel layouts, error bars, significance markers, colorblind-safe, export PDF/EPS/TIFF, for journal-ready scientific plots.
Seaborn
Statistical visualization. Scatter, box, violin, heatmaps, pair plots, regression, correlation matrices, KDE, faceted plots, for exploratory analysis and publication figures.
Shap
Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model
Pydeseq2
Differential gene expression analysis (Python DESeq2). Identify DE genes from bulk RNA-seq counts, Wald tests, FDR correction, volcano/MA plots, for RNA-seq analysis.
Query Writing
For writing and executing SQL queries - from simple single-table queries to complex multi-table JOINs and aggregations
