Bio Spatial Transcriptomics Spatial Statistics
by GPTomics
---
name: bio-spatial-transcriptomics-spatial-statistics
description: Compute spatial statistics for spatial transcriptomics data using Squidpy. Calculate Moran's I, Geary's C, spatial autocorrelation, co-occurrence analysis, and neighborhood enrichment. Use when computing spatial autocorrelation or co-occurrence statistics.
tool_type: python
primary_tool: squidpy
---
# Spatial Statistics

Compute spatial statistics and identify spatially variable features.
## Required Imports

```python
import squidpy as sq
import scanpy as sc
import pandas as pd
import numpy as np
```
## Compute Spatial Autocorrelation (Moran's I)

```python
# Requires a spatial neighbors graph
sq.gr.spatial_neighbors(adata, coord_type='generic', n_neighs=6)

# Compute Moran's I for all genes (can be slow)
sq.gr.spatial_autocorr(adata, mode='moran')

# Or for specific genes
sq.gr.spatial_autocorr(adata, mode='moran', genes=['GENE1', 'GENE2', 'GENE3'])

# Results are stored in adata.uns['moranI']
moran_results = adata.uns['moranI']
print(moran_results.head(20))
```
## Interpret Moran's I

```python
# Moran's I ranges from roughly -1 to 1:
#   I > 0: positive spatial autocorrelation (similar values cluster together)
#   I ~ 0: random spatial distribution
#   I < 0: negative spatial autocorrelation (dissimilar values are adjacent)

# Get significantly spatially variable genes
svg = moran_results[moran_results['pval_norm'] < 0.05].sort_values('I', ascending=False)
print(f'Found {len(svg)} spatially variable genes (p < 0.05)')
print('\nTop 10 spatially variable genes:')
print(svg.head(10)[['I', 'pval_norm']])
```
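Under the hood, Moran's I is simple enough to compute by hand, which helps build intuition for the thresholds above. A minimal NumPy sketch on a toy 1-D grid (values and weights are illustrative, not from any dataset) shows why clustered values push I above zero and alternating values push it below:

```python
import numpy as np

def morans_i(x, w):
    """Moran's I for values x given a symmetric spatial weight matrix w."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                       # deviations from the mean
    n, W = len(x), w.sum()
    return (n / W) * (z @ w @ z) / (z @ z)

# Toy example: 6 sites on a line, adjacent sites are neighbors
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1

clustered = np.array([1, 1, 1, 5, 5, 5])    # similar values sit together
alternating = np.array([1, 5, 1, 5, 1, 5])  # dissimilar values sit together

print(morans_i(clustered, w))    # positive: spatially clustered
print(morans_i(alternating, w))  # negative: checkerboard pattern
```

Squidpy's implementation adds permutation-based p-values on top of this statistic; the sketch only covers the point estimate.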
## Compute Geary's C

```python
# Alternative spatial autocorrelation measure
sq.gr.spatial_autocorr(adata, mode='geary')

# Results in adata.uns['gearyC']
geary_results = adata.uns['gearyC']
# C < 1: positive spatial autocorrelation
# C ~ 1: random
# C > 1: negative spatial autocorrelation
```
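Geary's C is built from squared differences between neighbors rather than cross-products, which is why its scale is inverted relative to Moran's I. A toy NumPy sketch of the formula (illustrative values, same line-graph weights as above):

```python
import numpy as np

def gearys_c(x, w):
    """Geary's C for values x given a symmetric spatial weight matrix w."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n = len(x)
    diff_sq = (x[:, None] - x[None, :]) ** 2   # squared pairwise differences
    return ((n - 1) / (2 * w.sum())) * (w * diff_sq).sum() / (z @ z)

# Toy example: 6 sites on a line, adjacent sites are neighbors
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1

print(gearys_c(np.array([1, 1, 1, 5, 5, 5]), w))  # < 1: clustered
print(gearys_c(np.array([1, 5, 1, 5, 1, 5]), w))  # > 1: alternating
```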
## Co-occurrence Analysis

```python
# Analyze co-localization of cell types/clusters
# First, ensure you have cluster labels
sc.pp.neighbors(adata)
sc.tl.leiden(adata)

# Compute co-occurrence
sq.gr.co_occurrence(adata, cluster_key='leiden')
# Results in adata.uns['leiden_co_occurrence']

# Visualize co-occurrence
sq.pl.co_occurrence(adata, cluster_key='leiden')
```
Interpret Co-occurrence
co_occ = adata.uns['leiden_co_occurrence']
occ_matrix = co_occ['occ'] # Occurrence matrix
interval = co_occ['interval'] # Distance intervals
# occ_matrix[i, j, k] = occurrence of cluster j around cluster i at distance interval k
print(f'Occurrence matrix shape: {occ_matrix.shape}')
print(f'Distance intervals: {interval}')
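Conceptually, the score is a conditional-probability ratio: how often cluster B appears within a given distance of a cluster A cell, relative to B's overall frequency. A self-contained sketch of that idea on toy coordinates (this mirrors the concept, not Squidpy's exact implementation):

```python
import numpy as np

def co_occurrence_score(coords, labels, conditional, target, radius):
    """P(target cell | within `radius` of a `conditional` cell) / P(target cell)."""
    coords = np.asarray(coords, dtype=float)
    labels = np.asarray(labels)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    p_target = (labels == target).mean()
    hits = total = 0
    for i in np.where(labels == conditional)[0]:
        nb = (d[i] <= radius) & (np.arange(len(labels)) != i)  # neighbors, excluding self
        hits += np.count_nonzero(labels[nb] == target)
        total += np.count_nonzero(nb)
    return (hits / total) / p_target

# Toy data: A and B cells intermingled, C cells far away
coords = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5), (10, 10), (11, 10), (10, 11)]
labels = ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C']
print(co_occurrence_score(coords, labels, 'A', 'B', radius=2))  # > 1: B enriched near A
print(co_occurrence_score(coords, labels, 'A', 'C', radius=2))  # 0: C never near A
```

A score above 1 at short distances indicates co-localization; a score below 1 indicates avoidance.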
## Neighborhood Enrichment

```python
# Test whether clusters are enriched in each other's neighborhoods
sq.gr.nhood_enrichment(adata, cluster_key='leiden')
# Results in adata.uns['leiden_nhood_enrichment']
#   zscore > 0: clusters co-localize more than expected by chance
#   zscore < 0: clusters avoid each other

# Visualize
sq.pl.nhood_enrichment(adata, cluster_key='leiden')
```
Extract Enrichment Z-scores
enrichment = adata.uns['leiden_nhood_enrichment']
zscore = enrichment['zscore']
clusters = adata.obs['leiden'].cat.categories
# Convert to DataFrame
zscore_df = pd.DataFrame(zscore, index=clusters, columns=clusters)
print('Neighborhood enrichment z-scores:')
print(zscore_df)
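Given such a matrix, a common follow-up is to rank cluster pairs by enrichment. A small sketch with made-up z-scores (the values are illustrative, not real output):

```python
import numpy as np
import pandas as pd

clusters = ['0', '1', '2']
# Hypothetical symmetric z-score matrix
zscore = np.array([[ 8.2, -3.1,  0.4],
                   [-3.1,  5.7,  2.9],
                   [ 0.4,  2.9,  6.5]])
zdf = pd.DataFrame(zscore, index=clusters, columns=clusters)

# Mask the diagonal (a cluster is trivially enriched with itself),
# then rank off-diagonal pairs: most positive = strongest co-localization
pairs = (zdf.where(~np.eye(len(clusters), dtype=bool))
            .stack()
            .sort_values(ascending=False))
print(pairs)
```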
## Ripley's Statistics

```python
# Ripley's K/L function for point pattern analysis (single-cell resolution data)
sq.gr.ripley(adata, cluster_key='leiden', mode='L')
# Results in adata.uns['leiden_ripley_L']
sq.pl.ripley(adata, cluster_key='leiden', mode='L')
```
## Centrality Scores

```python
# Compute graph centrality measures for each cluster
sq.gr.centrality_scores(adata, cluster_key='leiden')
# Results in adata.uns['leiden_centrality_scores']
centrality = adata.uns['leiden_centrality_scores']
print(centrality)
```
## Interaction Matrix

```python
# Count edges between clusters in the spatial graph
sq.gr.interaction_matrix(adata, cluster_key='leiden')
# Results in adata.uns['leiden_interactions']
interactions = adata.uns['leiden_interactions']
print(interactions)
```
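Conceptually, the interaction matrix is simple bookkeeping: for every pair of cluster labels, sum the spatial-graph edges connecting them. A toy sketch of that computation on a hypothetical 4-node path graph (not Squidpy's internal code):

```python
import numpy as np

def interaction_counts(adj, labels):
    """Sum adjacency entries between every pair of cluster labels."""
    labels = np.asarray(labels)
    cats = [str(c) for c in np.unique(labels)]
    out = np.zeros((len(cats), len(cats)))
    for a, ca in enumerate(cats):
        for b, cb in enumerate(cats):
            out[a, b] = adj[np.ix_(labels == ca, labels == cb)].sum()
    return cats, out

# Path graph 0-1-2-3 with two labels; each undirected edge appears twice in adj
adj = np.zeros((4, 4))
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1

cats, counts = interaction_counts(adj, ['x', 'x', 'y', 'y'])
print(cats)    # ['x', 'y']
print(counts)  # within-x edges, x-y edges, etc.
```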
## Custom Spatial Statistic

```python
from scipy.sparse import issparse
from scipy.stats import pearsonr

def spatial_correlation(adata, gene1, gene2):
    '''Pearson correlation between two genes' expression across all spots.

    Note: this ignores the spatial graph; it measures co-expression,
    not spatially weighted association.
    '''
    x1 = adata[:, gene1].X
    x2 = adata[:, gene2].X
    expr1 = np.ravel(x1.toarray() if issparse(x1) else x1)
    expr2 = np.ravel(x2.toarray() if issparse(x2) else x2)
    r, p = pearsonr(expr1, expr2)
    return r, p

r, p = spatial_correlation(adata, 'GENE1', 'GENE2')
print(f'Correlation: r={r:.3f}, p={p:.2e}')
```
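To make such a statistic genuinely spatial, one option is to correlate one gene against the neighbor-averaged (spatially lagged) values of another. A minimal NumPy sketch with a toy row-normalized weight matrix (illustrative values; with real data, a row-normalized graph from `sq.gr.spatial_neighbors` would play the role of `w`):

```python
import numpy as np

def lag_correlation(x, y, w):
    """Correlate x with the spatial lag of y (w must be row-normalized,
    so w @ y is each site's mean of y over its spatial neighbors)."""
    lag_y = w @ np.asarray(y, dtype=float)
    return np.corrcoef(np.asarray(x, dtype=float), lag_y)[0, 1]

# Toy: 6 sites on a line, row-normalized rook weights
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1
w /= w.sum(axis=1, keepdims=True)

x = np.array([1, 1, 1, 5, 5, 5])
print(lag_correlation(x, x, w))  # high: each site tracks its neighborhood
```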
## Local Moran's I (LISA)

```python
from scipy.sparse import issparse
from esda.moran import Moran_Local
from libpysal.weights import KNN

# Build a row-standardized KNN weights matrix from spatial coordinates
coords = adata.obsm['spatial']
w = KNN.from_array(coords, k=6)
w.transform = 'r'

# Compute local Moran's I for a gene
x = adata[:, 'GENE1'].X
gene_expr = np.ravel(x.toarray() if issparse(x) else x)
lisa = Moran_Local(gene_expr, w)

# Store per-cell results
adata.obs['GENE1_lisa'] = lisa.Is
adata.obs['GENE1_lisa_q'] = lisa.q  # Quadrant: 1=HH, 2=LH, 3=LL, 4=HL
```
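The quadrant codes come from the Moran scatterplot: each site is classified by the sign of its own deviation versus the sign of its neighbors' average deviation. A self-contained sketch of that classification (boundary handling at exactly zero is simplified here and may differ from esda's):

```python
import numpy as np

def lisa_quadrants(x, w):
    """Moran scatterplot quadrant per site (1=HH, 2=LH, 3=LL, 4=HL).
    w must be row-normalized."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    lag = w @ z                      # neighbors' mean deviation
    quad = np.empty(len(z), dtype=int)
    quad[(z > 0) & (lag > 0)] = 1    # high value among high neighbors (hotspot)
    quad[(z <= 0) & (lag > 0)] = 2   # low value among high neighbors
    quad[(z <= 0) & (lag <= 0)] = 3  # low among low (coldspot)
    quad[(z > 0) & (lag <= 0)] = 4   # high value among low neighbors
    return quad

# Toy: 6 sites on a line, row-normalized rook weights
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1
w /= w.sum(axis=1, keepdims=True)

print(lisa_quadrants([1, 1, 1, 5, 5, 5], w))
```

In practice, quadrant labels are only meaningful for cells where the local statistic is significant (e.g. `lisa.p_sim < 0.05`).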
## Batch Spatial Statistics

```python
# Compute Moran's I for the top highly variable genes only
hvg = adata.var_names[adata.var['highly_variable']][:500]
sq.gr.spatial_autocorr(adata, mode='moran', genes=list(hvg))

results = adata.uns['moranI']
significant = results[results['pval_norm'] < 0.01]
print(f'{len(significant)} genes with significant spatial autocorrelation')
```
## Related Skills

- spatial-neighbors - Build spatial graphs (prerequisite)
- spatial-domains - Identify spatial domains
- spatial-visualization - Visualize spatial statistics
