Dataset Engineering

by doanchienthangdev


Building and processing datasets - data quality, curation, deduplication, synthesis, annotation, formatting. Use when creating training data, improving data quality, or generating synthetic data.


name: dataset-engineering
description: Building and processing datasets - data quality, curation, deduplication, synthesis, annotation, formatting. Use when creating training data, improving data quality, or generating synthetic data.

Dataset Engineering Skill

Building high-quality datasets for AI applications.

Data Quality Dimensions

Dimension     Description         Check
Accuracy      Data is correct     Validation
Completeness  No missing values   Schema check
Consistency   No contradictions   Dedup
Timeliness    Up-to-date          Timestamps
Relevance     Matches use case    Filtering
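
The checks in the table can be sketched as simple record-level validators. This is a minimal illustration; the field names (`text`, `label`, `timestamp`) are hypothetical placeholders, not part of any fixed schema.

```python
from datetime import datetime, timezone, timedelta

def check_completeness(record, required_fields):
    """Completeness: every required field is present and non-empty."""
    return all(record.get(f) not in (None, "") for f in required_fields)

def check_timeliness(record, max_age_days=365):
    """Timeliness: the record's timestamp is within the allowed window."""
    ts = datetime.fromisoformat(record["timestamp"])
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - ts < timedelta(days=max_age_days)

record = {
    "text": "example",
    "label": "positive",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
print(check_completeness(record, ["text", "label"]))  # True
print(check_timeliness(record))                       # True
```

Accuracy and relevance checks are usually domain-specific (validators, classifiers, heuristic filters) and do not reduce to a one-liner.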

Data Curation Pipeline

class DataCurationPipeline:
    def __init__(self, deduplicator, cleaner, filter, formatter):
        self.deduplicator = deduplicator
        self.cleaner = cleaner
        self.filter = filter
        self.formatter = formatter

    def run(self, raw_data):
        # 1. Inspect samples and statistics before touching anything
        self.inspect(raw_data)

        # 2. Deduplicate early, before expensive operations
        data = self.deduplicator.dedupe(raw_data)

        # 3. Clean and filter
        data = self.cleaner.clean(data)
        data = self.filter.filter(data)

        # 4. Format for the target training framework
        return self.formatter.format(data)

Deduplication

from datasketch import MinHash, MinHashLSH

class Deduplicator:
    def __init__(self, threshold=0.8):
        # LSH index for approximate Jaccard-similarity lookups
        self.lsh = MinHashLSH(threshold=threshold, num_perm=128)

    def minhash(self, text):
        m = MinHash(num_perm=128)
        for word in text.split():
            m.update(word.encode('utf8'))
        return m

    def dedupe(self, docs):
        unique = []
        for i, doc in enumerate(docs):
            mh = self.minhash(doc["text"])
            # Keep the doc only if no near-duplicate is already indexed
            if not self.lsh.query(mh):
                self.lsh.insert(f"doc_{i}", mh)
                unique.append(doc)
        return unique
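
MinHash LSH catches near-duplicates; in practice it is usually preceded by a much cheaper exact-duplicate pass. A minimal sketch using content hashing (the whitespace/case normalization here is one possible choice, not a fixed recipe):

```python
import hashlib

def exact_dedupe(docs):
    """Drop documents that are byte-identical after whitespace/case normalization."""
    seen = set()
    unique = []
    for doc in docs:
        normalized = " ".join(doc["text"].lower().split())
        key = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = [{"text": "Hello world"}, {"text": "hello   WORLD"}, {"text": "goodbye"}]
print(len(exact_dedupe(docs)))  # 2
```

Running exact dedup first shrinks the corpus so the MinHash pass has fewer documents to index.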

Data Synthesis

AI-Powered QA Generation

import json

def generate_qa(document, model, n=5):
    prompt = f"""Generate {n} QA pairs from:

{document}

Format: [{{"question": "...", "answer": "..."}}]"""

    # model.generate is assumed to return the raw completion string
    return json.loads(model.generate(prompt))
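
Model output is not guaranteed to be valid JSON, so `json.loads` on the raw completion can raise. A defensive wrapper that extracts the first JSON array from the response is a common safeguard (a sketch, not part of the pipeline above):

```python
import json
import re

def parse_qa_response(text):
    """Extract and parse the first JSON array in a model response.
    Returns [] if nothing parseable is found."""
    match = re.search(r"\[.*\]", text, re.DOTALL)
    if not match:
        return []
    try:
        pairs = json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
    # Keep only well-formed QA pairs
    return [p for p in pairs if isinstance(p, dict)
            and "question" in p and "answer" in p]

raw = 'Here you go:\n[{"question": "Q1?", "answer": "A1"}]'
print(parse_qa_response(raw))  # [{'question': 'Q1?', 'answer': 'A1'}]
```

Returning an empty list on parse failure lets the caller retry the generation instead of crashing the batch.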

Self-Instruct

import random

def self_instruct(seeds, model, n=100):
    generated = []

    for _ in range(n):
        # Mix seed tasks with recent generations to encourage variety
        samples = random.sample(seeds + generated[-20:], 5)
        examples = "\n".join(samples)
        prompt = f"Examples:\n{examples}\n\nNew task:"

        new = model.generate(prompt)
        # is_valid and is_diverse are filters supplied by the caller
        if is_valid(new) and is_diverse(new, generated):
            generated.append(new)

    return generated
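
The `is_diverse` filter is left undefined above; Self-Instruct-style pipelines commonly approximate it with n-gram overlap against everything generated so far. A minimal Jaccard-on-words sketch (the 0.7 threshold is an assumption to tune, not a standard value):

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def is_diverse(candidate, existing, threshold=0.7):
    """Reject a candidate that is too similar to anything already generated."""
    return all(jaccard(candidate, prev) < threshold for prev in existing)

print(is_diverse("summarize this article", ["summarize this article briefly"]))  # False
print(is_diverse("translate to French", ["summarize this article"]))             # True
```

Word-level Jaccard is crude; the original Self-Instruct work used ROUGE-L overlap for this filter, which is a drop-in upgrade.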

Data Augmentation

import random

def augment_text(text, model):
    # Each method is an assumed text -> text helper
    methods = [
        synonym_replace,
        back_translate,
        model.rephrase,
    ]
    return random.choice(methods)(text)
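
`synonym_replace` and `back_translate` are assumed helpers. A minimal dictionary-based `synonym_replace` might look like the following; the synonym table is purely illustrative, and real implementations typically draw on WordNet or an embedding model instead:

```python
import random

SYNONYMS = {
    "quick": ["fast", "rapid"],
    "large": ["big", "huge"],
    "happy": ["glad", "cheerful"],
}

def synonym_replace(text, p=0.5, rng=None):
    """Replace words found in SYNONYMS with a random synonym, each with probability p."""
    rng = rng or random.Random()
    words = [
        rng.choice(SYNONYMS[w]) if w in SYNONYMS and rng.random() < p else w
        for w in text.split()
    ]
    return " ".join(words)

print(synonym_replace("the quick brown fox", p=1.0, rng=random.Random(0)))
```

Accepting an explicit `rng` keeps augmentation reproducible, which matters when you need to regenerate a dataset exactly.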

Data Formatting

Instruction Format

def format_instruction(example):
    return f"""### Instruction:
{example['instruction']}

### Input:
{example.get('input', '')}

### Response:
{example['output']}"""

Chat Format

def format_chat(conversation):
    return [
        {"role": turn["role"], "content": turn["content"]}
        for turn in conversation
    ]
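
Many training stacks expect a single string per example rather than a message list. One common convention renders the messages with chat-template tags; the ChatML-style markers below are an assumption, and you should substitute your model's actual template:

```python
def render_chat(messages, eos="<|im_end|>"):
    """Render a role/content message list as a ChatML-style training string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}{eos}" for m in messages]
    return "\n".join(parts)

messages = [
    {"role": "user", "content": "What is deduplication?"},
    {"role": "assistant", "content": "Removing duplicate records from a dataset."},
]
print(render_chat(messages))
```

Keeping records in the role/content list form until the last step lets one dataset serve models with different chat templates.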

Best Practices

  1. Inspect data before processing
  2. Deduplicate before expensive operations
  3. Use multiple synthesis methods
  4. Validate synthetic data quality
  5. Track data lineage
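
Tracking data lineage (practice 5) can be as simple as stamping each record with its source and the transformations applied. A minimal sketch, where the `_lineage` field name and its shape are assumptions:

```python
import hashlib
from datetime import datetime, timezone

def with_lineage(record, source, step):
    """Attach or extend provenance metadata on a record."""
    lineage = dict(record.get("_lineage", {"source": source, "steps": []}))
    lineage["steps"] = lineage.get("steps", []) + [{
        "step": step,
        "at": datetime.now(timezone.utc).isoformat(),
    }]
    # Content hash lets you detect when a step actually changed the text
    lineage["content_hash"] = hashlib.sha256(
        record["text"].encode("utf-8")).hexdigest()[:12]
    return {**record, "_lineage": lineage}

rec = with_lineage({"text": "hello"}, source="crawl-2025-01", step="clean")
rec = with_lineage(rec, source="crawl-2025-01", step="dedupe")
print([s["step"] for s in rec["_lineage"]["steps"]])  # ['clean', 'dedupe']
```

With lineage attached, any bad example surfacing during evaluation can be traced back to the source and the step that produced it.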

Related Skills

Xlsx

Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas


Clickhouse Io

ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.


Analyzing Financial Statements

This skill calculates key financial ratios and metrics from financial statement data for investment analysis


Data Storytelling

Transform data into compelling narratives using visualization, context, and persuasive structure. Use when presenting analytics to stakeholders, creating data reports, or building executive presentations.


Kpi Dashboard Design

Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboards, selecting metrics, or designing data visualization layouts.


Dbt Transformation Patterns

Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.


Sql Optimization Patterns

Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when debugging slow queries, designing database schemas, or optimizing application performance.


Anndata

This skill should be used when working with annotated data matrices in Python, particularly for single-cell genomics analysis, managing experimental measurements with metadata, or handling large-scale biological datasets. Use when tasks involve AnnData objects, h5ad files, single-cell RNA-seq data, or integration with scanpy/scverse tools.



Skill Information

Category: Data
Last Updated: 12/30/2025