Bio Proteomics Differential Abundance
by GPTomics
Statistical testing for differentially abundant proteins between conditions. Covers limma and MSstats workflows with multiple testing correction. Use when identifying proteins with significant abundance changes between experimental groups.
name: bio-proteomics-differential-abundance
description: Statistical testing for differentially abundant proteins between conditions. Covers limma and MSstats workflows with multiple testing correction. Use when identifying proteins with significant abundance changes between experimental groups.
tool_type: mixed
primary_tool: MSstats
Differential Protein Abundance
MSstats Group Comparison
library(MSstats)

# After dataProcess(); each row of the contrast matrix is one comparison,
# with +1 for the group of interest and -1 for the reference group.
# Columns must match the group order in the processed data.
comparison_matrix <- matrix(c(-1, 1,  0, 0,
                              -1, 0,  1, 0,
                               0, 1, -1, 0),
                            nrow = 3, byrow = TRUE)
rownames(comparison_matrix) <- c('Treatment1-Control', 'Treatment2-Control', 'Treatment1-Treatment2')
colnames(comparison_matrix) <- c('Control', 'Treatment1', 'Treatment2', 'Treatment3')

results <- groupComparison(contrast.matrix = comparison_matrix, data = processed)

# Significant proteins: FDR < 0.05 and |log2FC| > 1
sig_proteins <- results$ComparisonResult[results$ComparisonResult$adj.pvalue < 0.05 &
                                           abs(results$ComparisonResult$log2FC) > 1, ]
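The snippet above assumes that processed is the output of dataProcess(). A minimal sketch of that step, assuming raw is already a data frame in MSstats long format (for example from a converter such as MaxQtoMSstatsFormat()); argument defaults may differ across MSstats versions:

# Sketch: create the processed object used by groupComparison()
processed <- dataProcess(raw,
                         logTrans = 2,                      # log2-transform intensities
                         normalization = 'equalizeMedians') # equalize medians across runs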
limma for Proteomics
library(limma)
# Log2 intensities matrix (proteins x samples)
design <- model.matrix(~ 0 + condition, data = sample_info)
colnames(design) <- levels(sample_info$condition)
fit <- lmFit(protein_matrix, design)
contrast_matrix <- makeContrasts(Treatment - Control, levels = design)
fit2 <- contrasts.fit(fit, contrast_matrix)
fit2 <- eBayes(fit2)
results <- topTable(fit2, number = Inf, adjust.method = 'BH')
sig_results <- results[results$adj.P.Val < 0.05 & abs(results$logFC) > 1, ]
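limma expects a numeric matrix of log2 intensities (proteins x samples) and a sample table whose condition column is a factor. A rough preparation sketch; raw_intensities and the condition labels are placeholder names, not part of this skill:

# Sketch: build protein_matrix and sample_info for the limma workflow above
protein_matrix <- log2(as.matrix(raw_intensities))          # proteins x samples
protein_matrix <- limma::normalizeBetweenArrays(protein_matrix,
                                                method = 'quantile')  # optional normalization
sample_info <- data.frame(condition = factor(c('Control', 'Control', 'Control',
                                               'Treatment', 'Treatment', 'Treatment')))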
QFeatures/proDA (Modern Alternative)
library(QFeatures)
library(proDA)

# proDA handles missing values probabilistically (no imputation needed);
# sample annotations are passed via col_data
fit <- proDA(protein_matrix, design = ~ condition, col_data = sample_info)

# Test differential abundance for one coefficient/contrast
results <- test_diff(fit, contrast = 'conditionTreatment')
results$adj_pval <- p.adjust(results$pval, method = 'BH')
sig_results <- results[results$adj_pval < 0.05 & abs(results$diff) > 1, ]
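proDA is fit on log2-transformed intensities with NA for missing values. A minimal preparation sketch (raw_intensities is a placeholder name for a raw intensity matrix):

# Sketch: log2-transform and median-normalize before calling proDA()
log2_mat <- log2(as.matrix(raw_intensities))
log2_mat[is.infinite(log2_mat)] <- NA            # log2(0) -> -Inf, treat as missing
protein_matrix <- median_normalization(log2_mat) # proDA's median normalization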
Python: scipy/statsmodels
import pandas as pd
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_test(intensities, group1_cols, group2_cols):
    """Per-protein t-tests on a log2 intensity matrix (proteins x samples)."""
    results = []
    for protein in intensities.index:
        g1 = intensities.loc[protein, group1_cols].dropna()
        g2 = intensities.loc[protein, group2_cols].dropna()
        if len(g1) >= 2 and len(g2) >= 2:
            stat, pval = stats.ttest_ind(g1, g2)
            log2fc = g2.mean() - g1.mean()  # assumes intensities are already log2-transformed
            results.append({'protein': protein, 'log2FC': log2fc, 'pvalue': pval})
    df = pd.DataFrame(results)
    df['adj_pvalue'] = multipletests(df['pvalue'], method='fdr_bh')[1]
    return df

# Significance thresholds
results = differential_test(intensities, group1_cols, group2_cols)
sig = results[(results['adj_pvalue'] < 0.05) & (abs(results['log2FC']) > 1)]
Visualization
# Volcano plot (limma results: columns logFC and adj.P.Val)
library(ggplot2)
results$significant <- results$adj.P.Val < 0.05 & abs(results$logFC) > 1
ggplot(results, aes(x = logFC, y = -log10(adj.P.Val))) +
  geom_point(aes(color = significant), alpha = 0.6) +
  geom_hline(yintercept = -log10(0.05), linetype = 'dashed') +
  geom_vline(xintercept = c(-1, 1), linetype = 'dashed') +
  scale_color_manual(values = c('grey', 'red')) +
  theme_minimal()
Related Skills
- quantification - Prepare normalized data for testing
- differential-expression/deseq2-basics - Similar concepts for RNA-seq
- data-visualization/specialized-omics-plots - Volcano plots, MA plots
