APS Doc Staging

by treasure-data


Expert documentation generation for staging transformation layers. Auto-detects SQL engine (Presto/Trino vs Hive), documents transformation rules, PII handling, deduplication strategies, and data quality rules. Use when documenting staging transformations.


---
name: aps-doc-staging
description: Expert documentation generation for staging transformation layers. Auto-detects SQL engine (Presto/Trino vs Hive), documents transformation rules, PII handling, deduplication strategies, and data quality rules. Use when documenting staging transformations.
---

APS Staging Transformation Documentation Expert

Specialized skill for generating comprehensive documentation for staging transformation layers. Automatically detects SQL engines, extracts transformation rules, documents PII handling, and analyzes deduplication strategies.
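For example, the same date-standardization rule reads differently per engine, which is why engine detection matters. A minimal sketch of the dialect contrast this skill documents, assuming a hypothetical stg_customer_events table with an event_ts string column:

```sql
-- Presto/Trino: parse a string timestamp, then truncate to day
SELECT
  date_parse(event_ts, '%Y-%m-%d %H:%i:%s')                     AS event_time,
  date_trunc('day', date_parse(event_ts, '%Y-%m-%d %H:%i:%s'))  AS event_day
FROM stg_customer_events;

-- Hive: the equivalent logic uses different built-ins
SELECT
  from_unixtime(unix_timestamp(event_ts, 'yyyy-MM-dd HH:mm:ss'))          AS event_time,
  to_date(from_unixtime(unix_timestamp(event_ts, 'yyyy-MM-dd HH:mm:ss'))) AS event_day
FROM stg_customer_events;
```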

When to Use This Skill

Use this skill when:

  • Documenting staging transformation workflows
  • Creating documentation for data cleaning and standardization logic
  • Documenting PII handling and security transformations
  • Creating documentation for deduplication strategies
  • Documenting data quality rules and validations
  • Generating documentation for Presto/Trino or Hive transformations

Example requests:

"Document the staging transformation for customer events"
"Create staging layer documentation with transformation rules"
"Document PII handling in staging transformations"
"Generate staging documentation following this template: [Confluence URL]"

🚨 MANDATORY: Codebase Access Required

WITHOUT codebase access = NO documentation. Period.

If no codebase access is provided, respond with:

I cannot create technical documentation without codebase access.

Required:
- Directory path to staging workflows
- Access to .dig, .sql, .yml files

Without access, I cannot extract real transformation SQL, PII logic, or table names.
Provide path: "Code is in /path/to/staging/"

Before proceeding:

  1. Ask for the codebase path if it was not provided
  2. Use Glob to verify that SQL files exist
  3. STOP if the files cannot be read

Documentation MUST contain:

  • Real transformation SQL from .sql files
  • Actual PII hashing/masking logic
  • Real table/column names
  • Working SQL examples from code

NO generic placeholders. Only real, extracted data.
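For orientation, the PII hashing logic to extract usually looks like a normalize-then-hash expression. A hedged sketch, with table and column names as placeholders until they are read from the actual codebase:

```sql
-- Presto/Trino: normalize the email, then take a SHA-256 hex digest
SELECT
  to_hex(sha256(to_utf8(lower(trim(email))))) AS email_hash
FROM src_customers;

-- Hive: sha2() returns the hex digest directly
SELECT
  sha2(lower(trim(email)), 256) AS email_hash
FROM src_customers;
```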

REQUIRED Documentation Template

Follow this EXACT structure (analyzed from production examples):

# Staging Transformation - {Engine} Engine

## Overview
**Engine**: {Presto/Trino or Hive}
**Architecture**: {Loop-based / Other}
**Processing Mode**: {Incremental / Full}
**Location**: {directory path}

### Key Characteristics
{List key features from actual workflow}

---

## Architecture Overview

### Directory Structure
{Actual directory tree from codebase}

### Core Components

#### 1. Main Workflow File
{Name and purpose}

**Key Features:**
- {Feature from actual .dig file}
- {Feature from actual .dig file}

**Workflow Phases:**
{Extract from actual workflow}

#### 2. Configuration File
{Name and structure from actual codebase}

**Configuration Structure:**
{Real YAML structure}

**Table Configuration Fields:**
{Document actual fields used}

#### 3. SQL Transformation Files
{Types: init, incremental, upsert - from actual codebase}

---

## Processing Flow

### Initial Load (First Run)
{Step-by-step from actual workflow}

### Incremental Load (Subsequent Runs)
{Step-by-step from actual workflow}

---

## Data Transformation Rules

{Document ACTUAL transformation rules from codebase}

### 1. Date/Timestamp Processing
{Real SQL examples from transformation files}

### 2. String Standardization
{Real SQL examples}

### 3. JSON Extraction
{Real examples if exists}

### 4. Email Processing
{Real examples if exists}

### 5. Phone Number Processing
{Real examples if exists}

### 6. Deduplication Logic
{Real ROW_NUMBER() or DISTINCT logic}

### 7. Metadata Columns
{Real source_system, load_timestamp columns}

---

## Table-Specific Transformation Rules

{If using reference table like staging_trnsfrm_rules:}

**Reference Table**: {database}.{table}
**Purpose**: {explain}

**Schema**: {real schema}

**How Used**: {explain how workflow reads these rules}

---

## Current Implementation

**Configured Tables**:
{List actual tables from config}

---

## How to Add New Source Tables

{Step-by-step with real examples}

---

## Monitoring & Troubleshooting

**Key Queries**:
{Real SQL for checking status, data quality}

**Common Issues**:
{Real issues and solutions}

---

## Best Practices

{List from actual production experience}

---

## Summary

{Brief recap of capabilities}
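The sections above are placeholders to be filled from real code. For orientation only, section 6 (Deduplication Logic) typically documents a keep-latest-row pattern like the following hedged sketch, where the partition key, ordering column, and table name are assumptions rather than extracted values:

```sql
-- Keep only the most recent record per business key
SELECT *
FROM (
  SELECT
    t.*,
    ROW_NUMBER() OVER (
      PARTITION BY customer_id   -- assumed business key
      ORDER BY time DESC         -- assumed ordering column (TD ingest time)
    ) AS rn
  FROM src_customer_events t
) ranked
WHERE rn = 1;
```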

Template Usage Notes:

  • Read actual workflows (.dig), configs (.yml), SQL files
  • Extract REAL transformation logic from SQL
  • Document REAL deduplication strategies
  • Use actual table/column names from codebase
  • Include working SQL examples
  • NO placeholders - only real extracted data
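The finished documentation must contain only extracted SQL, but it helps to know the shape of what to look for. The incremental load in the Processing Flow section commonly uses a time-watermark filter; a hedged sketch using Treasure Data's TD_TIME_RANGE UDF, where the table names and the ${last_session_time} variable (supplied by the .dig workflow) are assumptions:

```sql
-- Incremental load: pull only rows newer than the last successful run
INSERT INTO stg_customer_events
SELECT
  customer_id,
  event_name,
  TD_TIME_FORMAT(time, 'yyyy-MM-dd HH:mm:ss', 'UTC') AS event_time_utc
FROM src_customer_events
WHERE TD_TIME_RANGE(time, '${last_session_time}', NULL, 'UTC');
```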

Summary

This skill generates production-ready staging documentation by:

  • Reading actual .dig workflows, .yml configs, and .sql files
  • Following the exact template structure shown above
  • Extracting real transformation rules from SQL
  • Documenting actual deduplication logic
  • Creating comprehensive documentation with working SQL examples

Key capability: transforms a staging codebase into professional Confluence documentation with all transformation rules documented.

Skill Information

Category: Document
Last Updated: 11/28/2025