Data Engineering
by qenex-ai
This skill should be used when the user asks to "create data pipeline", "ETL process", "data lake", "data warehouse", "Apache Spark", "Airflow DAG", "data streaming", "Kafka pipeline", "dbt models", "data quality", or needs help with data engineering and pipeline development.
---
name: Data Engineering
description: This skill should be used when the user asks to "create data pipeline", "ETL process", "data lake", "data warehouse", "Apache Spark", "Airflow DAG", "data streaming", "Kafka pipeline", "dbt models", "data quality", or needs help with data engineering and pipeline development.
version: 1.0.0
---
Data Engineering
Comprehensive guidance for data pipelines, ETL processes, and data infrastructure.
Data Pipeline Patterns
Batch Processing (Apache Spark)
```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ETL").getOrCreate()

# Extract
df = spark.read.parquet("s3://data-lake/raw/events/")

# Transform
transformed = (
    df.filter(df.event_type == "purchase")
      .groupBy("user_id", "date")
      .agg({"amount": "sum"})
)

# Load
transformed.write.mode("overwrite").parquet("s3://data-lake/processed/daily_purchases/")
```
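As a rough illustration of what the transform step computes, here is a plain-Python sketch of the same aggregation over hypothetical event dicts (this is not the Spark API, just the intended semantics):

```python
from collections import defaultdict

def daily_purchase_totals(events):
    """Sum purchase amounts per (user_id, date), mirroring the Spark groupBy/agg."""
    totals = defaultdict(float)
    for event in events:
        # Keep only purchase events, matching the filter() above
        if event["event_type"] == "purchase":
            totals[(event["user_id"], event["date"])] += event["amount"]
    return dict(totals)
```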
Stream Processing (Kafka)
```python
# Kafka consumer
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'events',
    bootstrap_servers=['kafka:9092'],
    auto_offset_reset='earliest',
    group_id='my-group',
)

for message in consumer:
    process_event(message.value)
```
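`process_event` is left undefined above; a minimal hypothetical implementation might decode the raw message bytes as JSON and validate the payload before handing it downstream:

```python
import json

def process_event(raw_value: bytes) -> dict:
    """Decode a Kafka message payload (assumed to be UTF-8 JSON) into a dict."""
    event = json.loads(raw_value.decode("utf-8"))
    # Reject malformed events early rather than failing downstream
    if "event_type" not in event:
        raise ValueError("event is missing 'event_type'")
    return event
```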
Airflow DAGs
```python
from airflow import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'data-team',
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
}

with DAG(
    'daily_etl',
    default_args=default_args,
    schedule_interval='0 6 * * *',
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id='extract',
        python_callable=extract_data,
    )
    transform = PythonOperator(
        task_id='transform',
        python_callable=transform_data,
    )
    load = PythonOperator(
        task_id='load',
        python_callable=load_data,
    )

    extract >> transform >> load
```
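The DAG above assumes `extract_data`, `transform_data`, and `load_data` already exist. A hypothetical sketch of those callables (names and logic are illustrative, not part of any Airflow API):

```python
def extract_data():
    """Pull raw rows from the source system (stubbed here with static data)."""
    return [{"user_id": "u1", "event_type": "purchase", "amount": 42.0}]

def transform_data(rows=None):
    """Keep only purchase events."""
    rows = rows if rows is not None else extract_data()
    return [r for r in rows if r["event_type"] == "purchase"]

def load_data(rows=None):
    """Write transformed rows to the target store (stubbed as a row count)."""
    rows = rows if rows is not None else transform_data()
    return len(rows)
```

In a real DAG, tasks run in separate processes, so they usually hand off small payloads via XCom or stage larger data in external storage between steps rather than calling each other directly.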
dbt Models
```sql
-- models/staging/stg_orders.sql
{{ config(materialized='view') }}

SELECT
    id AS order_id,
    user_id,
    created_at,
    total_amount
FROM {{ source('raw', 'orders') }}
WHERE created_at >= '2024-01-01'
```
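A staging model like this is typically paired with a `schema.yml` declaring tests on its columns; a minimal hypothetical companion file might look like:

```yaml
# models/staging/schema.yml (illustrative)
version: 2

models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - not_null
          - unique
```

Running `dbt test` then validates these constraints against the built model.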
Data Quality
```python
from great_expectations.core import ExpectationSuite
from great_expectations.core.expectation_configuration import ExpectationConfiguration

suite = ExpectationSuite(expectation_suite_name="orders_suite")

suite.add_expectation(
    ExpectationConfiguration(
        expectation_type="expect_column_values_to_not_be_null",
        kwargs={"column": "order_id"},
    )
)
suite.add_expectation(
    ExpectationConfiguration(
        expectation_type="expect_column_values_to_be_between",
        kwargs={"column": "amount", "min_value": 0, "max_value": 10000},
    )
)
```
Additional Resources
Reference Files
- `references/data-patterns.md` - Data pipeline patterns
- `references/dbt-guide.md` - dbt best practices
Related Skills
Xlsx
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas
Clickhouse Io
ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.
Analyzing Financial Statements
This skill calculates key financial ratios and metrics from financial statement data for investment analysis
Data Storytelling
Transform data into compelling narratives using visualization, context, and persuasive structure. Use when presenting analytics to stakeholders, creating data reports, or building executive presentations.
Kpi Dashboard Design
Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboards, selecting metrics, or designing data visualization layouts.
Dbt Transformation Patterns
Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.
Sql Optimization Patterns
Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when debugging slow queries, designing database schemas, or optimizing application performance.
Clinical Decision Support
Generate professional clinical decision support (CDS) documents for pharmaceutical and clinical research settings, including patient cohort analyses (biomarker-stratified with outcomes) and treatment recommendation reports (evidence-based guidelines with decision algorithms). Supports GRADE evidence grading, statistical analysis (hazard ratios, survival curves, waterfall plots), biomarker integration, and regulatory compliance. Outputs publication-ready LaTeX/PDF format optimized for drug development.
Anndata
This skill should be used when working with annotated data matrices in Python, particularly for single-cell genomics analysis, managing experimental measurements with metadata, or handling large-scale biological datasets. Use when tasks involve AnnData objects, h5ad files, single-cell RNA-seq data, or integration with scanpy/scverse tools.
