# Data Load
by shandley
Load data from files (CSV, JSON, JSONL, Parquet) or stdin for analysis and visualization with ggterm. Use when reading datasets, importing data, opening files, or when the user mentions loading, reading, or opening data.
```yaml
---
name: data-load
description: Load data from files (CSV, JSON, JSONL, Parquet) or stdin for analysis and visualization with ggterm. Use when reading datasets, importing data, opening files, or when the user mentions loading, reading, or opening data.
allowed-tools: Bash(bun:*), Bash(npx:*), Read, Write
---
```
## Data Loading for ggterm
Load data into arrays of records for use with ggterm plotting and analysis.
## Quick Patterns by Format
### CSV

```ts
import { parse } from 'csv-parse/sync'
import { readFileSync } from 'fs'

const text = readFileSync('data.csv', 'utf-8')
const data = parse(text, {
  columns: true,          // First row as headers
  cast: true,             // Auto-convert numbers
  skip_empty_lines: true
})
```
Alternative with d3-dsv (lighter weight):
```ts
import { readFileSync } from 'fs'
import { csvParse, autoType } from 'd3-dsv'

const data = csvParse(readFileSync('data.csv', 'utf-8'), autoType)
```
### JSON

```ts
import { readFileSync } from 'fs'

// JSON array
const data = JSON.parse(readFileSync('data.json', 'utf-8'))
```
### JSONL (Newline-delimited JSON)

```ts
import { readFileSync } from 'fs'

const data = readFileSync('data.jsonl', 'utf-8')
  .trim()
  .split('\n')
  .map(line => JSON.parse(line))
```
### From stdin (Piped Data)

```ts
// Bun
const input = await Bun.stdin.text()
const data = JSON.parse(input)
```

```ts
// Node.js
import { stdin } from 'process'

let input = ''
for await (const chunk of stdin) input += chunk
const data = JSON.parse(input)
```
### From URL

```ts
const response = await fetch('https://example.com/data.json')
const data = await response.json()
```
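The same pattern extends to remote CSVs. A sketch combining `fetch` with d3-dsv (the `response.ok` check and the example URL are illustrative additions):

```ts
import { csvParse, autoType } from 'd3-dsv'

// Fetch a remote CSV, then parse it exactly like a local file
const response = await fetch('https://example.com/data.csv')
if (!response.ok) throw new Error(`Fetch failed: ${response.status}`)
const data = csvParse(await response.text(), autoType)
```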
### TSV (Tab-separated)

```ts
import { readFileSync } from 'fs'
import { tsvParse, autoType } from 'd3-dsv'

const data = tsvParse(readFileSync('data.tsv', 'utf-8'), autoType)
```
## Type Coercion
ggterm expects numeric values for position aesthetics. Ensure proper typing:
```ts
const typed = data.map(row => ({
  ...row,
  // Convert date strings to timestamps
  date: new Date(row.date).getTime(),
  // Ensure numeric values
  value: Number(row.value),
  // Handle missing values
  score: row.score != null ? Number(row.score) : null
}))
```
### Common Type Issues

| Problem | Solution |
|---|---|
| Dates as strings | `new Date(str).getTime()` |
| Numbers as strings | `Number(str)` or `parseFloat(str)` |
| Empty strings | Check `str !== ''` before converting |
| "NA" or "null" | Map to `null` explicitly |
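A small sketch applying these fixes together (the `value` column and the sentinel list are illustrative):

```ts
const MISSING = new Set(['', 'NA', 'null', 'NaN'])

const cleaned = data.map(row => ({
  ...row,
  // Map sentinel strings to null; otherwise coerce to a number
  value: MISSING.has(String(row.value).trim()) ? null : Number(row.value)
}))
```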
## Verification
After loading, always verify the data structure:
```ts
console.log(`Loaded ${data.length} rows`)
console.log('Columns:', Object.keys(data[0]))
console.log('Sample row:', data[0])

// Check for type issues
const numericCols = ['value', 'count', 'score']
for (const col of numericCols) {
  const nonNumeric = data.filter(r => typeof r[col] !== 'number')
  if (nonNumeric.length > 0) {
    console.warn(`${col}: ${nonNumeric.length} non-numeric values`)
  }
}
```
## Installing Dependencies
If needed, install data loading libraries:
```sh
# For CSV parsing
bun add csv-parse
# or
bun add d3-dsv

# For Parquet (if needed)
bun add parquet-wasm
```
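Parquet loading is less standardized in JavaScript than the other formats. A minimal sketch, assuming parquet-wasm's `readParquet` plus the `apache-arrow` package (also installed) to convert the result into plain records; both APIs may differ between versions:

```ts
import { readFileSync } from 'fs'
import { readParquet } from 'parquet-wasm'
import { tableFromIPC } from 'apache-arrow'

// Decode the Parquet bytes with the wasm reader
const bytes = new Uint8Array(readFileSync('data.parquet'))
const wasmTable = readParquet(bytes)

// Convert the wasm Arrow table to a JS Arrow table, then to records
const table = tableFromIPC(wasmTable.intoIPCStream())
const data = table.toArray().map(row => row.toJSON())
```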
## Integration with ggterm
Once data is loaded, pass directly to ggterm:
```ts
import { gg, geom_point } from '@ggterm/core'

const data = loadData('measurements.csv')  // placeholder for any loader above
const plot = gg(data)
  .aes({ x: 'time', y: 'value' })
  .geom(geom_point())

console.log(plot.render({ width: 80, height: 24 }))
```
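`loadData` above is a placeholder, not a ggterm export. One possible implementation dispatches on file extension using the patterns from this skill (a sketch; adjust to the formats you actually need):

```ts
import { readFileSync } from 'fs'
import { extname } from 'path'
import { csvParse, tsvParse, autoType } from 'd3-dsv'

// Pick a loader based on the file extension
function loadData(path: string): any[] {
  const text = readFileSync(path, 'utf-8')
  switch (extname(path)) {
    case '.csv': return csvParse(text, autoType)
    case '.tsv': return tsvParse(text, autoType)
    case '.json': return JSON.parse(text)
    case '.jsonl': return text.trim().split('\n').map(l => JSON.parse(l))
    default: throw new Error(`Unsupported format: ${path}`)
  }
}
```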
## Large Files
For large files, consider streaming or sampling:
```ts
// Sample every Nth row
const sampled = data.filter((_, i) => i % 10 === 0)

// Or take first N rows for exploration
const preview = data.slice(0, 1000)
```
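To avoid materializing the whole file at all, csv-parse's streaming API yields rows incrementally. A sketch that samples while streaming (the filename and sampling rate are illustrative):

```ts
import { createReadStream } from 'fs'
import { parse } from 'csv-parse'

// Stream the CSV and keep every 10th row instead of loading everything
const sampled: any[] = []
const parser = createReadStream('big.csv').pipe(
  parse({ columns: true, cast: true, skip_empty_lines: true })
)
let i = 0
for await (const record of parser) {
  if (i++ % 10 === 0) sampled.push(record)
}
console.log(`Kept ${sampled.length} rows`)
```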
## Related Skills
- **Xlsx**: Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. Use when Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) for: (1) creating new spreadsheets with formulas and formatting, (2) reading or analyzing data, (3) modifying existing spreadsheets while preserving formulas, (4) data analysis and visualization in spreadsheets, or (5) recalculating formulas.
- **Clickhouse Io**: ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.
- **Analyzing Financial Statements**: Calculates key financial ratios and metrics from financial statement data for investment analysis.
- **Data Storytelling**: Transform data into compelling narratives using visualization, context, and persuasive structure. Use when presenting analytics to stakeholders, creating data reports, or building executive presentations.
- **Kpi Dashboard Design**: Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboards, selecting metrics, or designing data visualization layouts.
- **Dbt Transformation Patterns**: Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.
- **Sql Optimization Patterns**: Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when debugging slow queries, designing database schemas, or optimizing application performance.
- **Anndata**: Use when working with annotated data matrices in Python, particularly for single-cell genomics analysis, managing experimental measurements with metadata, or handling large-scale biological datasets. Covers AnnData objects, h5ad files, single-cell RNA-seq data, and integration with scanpy/scverse tools.
