Csv Data Wrangler

by 404kidwiz

Tags: tool, data

Expert in high-performance CSV processing, parsing, and data cleaning using Python, DuckDB, and command-line tools. Use when working with CSV files, cleaning data, transforming datasets, or processing large tabular data files.

Skill Details

Repository Files

1 file in this skill directory


name: csv-data-wrangler
description: Expert in high-performance CSV processing, parsing, and data cleaning using Python, DuckDB, and command-line tools. Use when working with CSV files, cleaning data, transforming datasets, or processing large tabular data files.

CSV Data Wrangler

Purpose

Provides expertise in efficient CSV file processing, data cleaning, and transformation. Handles large files, encoding issues, malformed data, and performance optimization for tabular data workflows.

When to Use

  • Processing large CSV files efficiently
  • Cleaning and validating CSV data
  • Transforming and reshaping datasets
  • Handling encoding and delimiter issues
  • Merging or splitting CSV files
  • Converting between tabular formats
  • Querying CSV with SQL (DuckDB)

Quick Start

Invoke this skill when:

  • Processing large CSV files efficiently
  • Cleaning and validating CSV data
  • Transforming and reshaping datasets
  • Handling encoding and delimiter issues
  • Querying CSV with SQL

Do NOT invoke when:

  • Building Excel files with formatting (use xlsx-skill)
  • Statistical analysis of data (use data-analyst)
  • Building data pipelines (use data-engineer)
  • Database operations (use sql-pro)

Decision Framework

Tool Selection by File Size:
├── < 100MB → pandas
├── 100MB - 1GB → pandas with chunking or polars
├── 1GB - 10GB → DuckDB or polars
├── > 10GB → DuckDB, Spark, or streaming
└── Quick exploration → csvkit or xsv CLI

Processing Type:
├── SQL-like queries → DuckDB
├── Complex transforms → pandas/polars
├── Simple filtering → csvkit/xsv
└── Streaming → Python csv module
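The streaming branch of the tree above can be sketched with the stdlib csv module, which keeps memory constant regardless of file size. A minimal sketch; the in-memory buffer stands in for a real file, and the `amount` column and filter threshold are hypothetical:

```python
import csv
import io

# Stand-in for open("big.csv", newline="", encoding="utf-8").
src = io.StringIO(
    "id,amount\n"
    "1,50\n"
    "2,150\n"
    "3,99\n"
)

out = io.StringIO()
reader = csv.DictReader(src)
writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
writer.writeheader()

# Stream row by row: only one row is in memory at a time.
kept = 0
for row in reader:
    if int(row["amount"]) > 100:  # example filter threshold
        writer.writerow(row)
        kept += 1
```

The same pattern scales to multi-gigabyte files because nothing is ever fully loaded; only the filter predicate and the writer change.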

Core Workflows

1. Large CSV Processing

  1. Profile file (size, encoding, delimiter)
  2. Choose appropriate tool for scale
  3. Process in chunks if memory-constrained
  4. Handle encoding issues (UTF-8, Latin-1)
  5. Validate data types per column
  6. Write output with proper quoting

2. Data Cleaning Pipeline

  1. Load sample to understand structure
  2. Identify missing and malformed values
  3. Define cleaning rules per column
  4. Apply transformations
  5. Validate output quality
  6. Log cleaning statistics
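The cleaning pipeline above might look like this with pandas (a sketch, not a definitive implementation; the columns, the `n/a` sentinel, and the drop rule are hypothetical cleaning rules):

```python
import io
import pandas as pd

raw = io.StringIO("name,age\nAda, 36\nGrace,n/a\n,85\n")
df = pd.read_csv(raw, skipinitialspace=True)

# Rule per column: coerce age to numeric, flag missing names.
df["age"] = pd.to_numeric(df["age"], errors="coerce")
bad_ages = int(df["age"].isna().sum())
missing_names = int(df["name"].isna().sum())

# Apply transformations: drop rows that fail validation.
clean = df.dropna(subset=["age", "name"])

# Log cleaning statistics (step 6).
print(f"rows in={len(df)} out={len(clean)} "
      f"bad_ages={bad_ages} missing_names={missing_names}")
```

`errors="coerce"` turns malformed values into NaN instead of raising, which makes the malformed-value count (step 2) fall out of the same pass.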

3. CSV Query with DuckDB

  1. Point DuckDB at CSV file(s)
  2. Let DuckDB infer schema
  3. Write SQL queries directly
  4. Export results to new CSV
  5. Optionally persist as Parquet

Best Practices

  • Always specify encoding explicitly
  • Use chunked reading for large files
  • Profile before choosing tools
  • Preserve original files; write output to new files
  • Validate row counts before/after
  • Handle quoted fields and escapes properly
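Several of the practices above (explicit encoding, writing to a new file, row-count validation, proper quoting) fit in one stdlib-only sketch; the file names and sample data are hypothetical:

```python
import csv
import os
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "in.csv")   # stand-in for the original file
dst = os.path.join(tmp, "out.csv")  # always write to a NEW file

# Create a sample input containing a field with an embedded delimiter.
with open(src, "w", encoding="utf-8", newline="") as f:
    csv.writer(f, quoting=csv.QUOTE_MINIMAL).writerows(
        [["name", "note"], ["Ada", "math, logic"], ["Grace", "compilers"]]
    )

# Explicit encoding on both ends; count rows on both sides.
with open(src, encoding="utf-8", newline="") as fin, \
     open(dst, "w", encoding="utf-8", newline="") as fout:
    writer = csv.writer(fout, quoting=csv.QUOTE_MINIMAL)
    rows_in = rows_out = 0
    for row in csv.reader(fin):
        rows_in += 1
        writer.writerow(row)
        rows_out += 1

assert rows_in == rows_out  # validate before trusting the output
```

`newline=""` is required when passing file objects to the csv module so that quoted fields containing newlines round-trip correctly.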

Anti-Patterns

  • Loading all to memory: causes OOM on large files. Instead, use chunking or streaming.
  • Guessing encoding: produces corrupted characters. Instead, detect with chardet first.
  • Ignoring quoting: breaks field parsing. Instead, use a proper CSV parser.
  • No validation: allows silent data corruption. Instead, validate row/column counts.
  • Manual string splitting: breaks on edge cases. Instead, use the csv module or pandas.
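The manual-splitting anti-pattern is easy to demonstrate: a quoted field containing the delimiter breaks `str.split` but parses cleanly with the csv module.

```python
import csv
import io

line = 'Ada,"math, logic",1815\n'

naive = line.strip().split(",")                 # splits inside the quotes
proper = next(csv.reader(io.StringIO(line)))    # respects quoting

print(len(naive), len(proper))  # 4 vs 3 fields
```

Embedded newlines and escaped quotes break naive splitting in the same way, which is why a real CSV parser is the correct approach for every dialect quirk at once.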

Related Skills

Xlsx

Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas.

Tags: data

Clickhouse Io

ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.

Tags: data, cli


Analyzing Financial Statements

This skill calculates key financial ratios and metrics from financial statement data for investment analysis

Tags: data

Data Storytelling

Transform data into compelling narratives using visualization, context, and persuasive structure. Use when presenting analytics to stakeholders, creating data reports, or building executive presentations.

Tags: data

Kpi Dashboard Design

Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboards, selecting metrics, or designing data visualization layouts.

Tags: design, data

Dbt Transformation Patterns

Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.

Tags: testing, document, tool

Sql Optimization Patterns

Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when debugging slow queries, designing database schemas, or optimizing application performance.

Tags: design, data

Anndata

This skill should be used when working with annotated data matrices in Python, particularly for single-cell genomics analysis, managing experimental measurements with metadata, or handling large-scale biological datasets. Use when tasks involve AnnData objects, h5ad files, single-cell RNA-seq data, or integration with scanpy/scverse tools.

Tags: art, tool, data

Xlsx

Spreadsheet toolkit (.xlsx/.csv). Create/edit with formulas/formatting, analyze data, visualization, recalculate formulas, for spreadsheet processing and analysis.

Tags: tool, data

Skill Information

Category: Technical
Last Updated: 1/16/2026