Lumen Dashboards
by robertpelloni
Master declarative, no-code data dashboards with Lumen YAML specifications. Use this skill when building standard data exploration dashboards, connecting multiple data sources (files, databases, APIs), creating interactive filters and cross-filtering, designing responsive layouts with indicators and charts, or enabling rapid dashboard prototyping without writing code.
```yaml
name: lumen-dashboards
description: >-
  Master declarative, no-code data dashboards with Lumen YAML specifications.
  Use this skill when building standard data exploration dashboards, connecting
  multiple data sources (files, databases, APIs), creating interactive filters
  and cross-filtering, designing responsive layouts with indicators and charts,
  or enabling rapid dashboard prototyping without writing code.
version: 2025-01-07
compatibility: >-
  Requires lumen >= 0.10.0, panel >= 1.3.0, holoviews >= 1.18.0, param >= 2.0.0.
  Supports PostgreSQL, DuckDB, SQLite, CSV, Parquet, Excel, and REST API data sources.
```
Lumen Dashboards Skill
Overview
Lumen is a declarative framework for creating data dashboards through YAML specifications. Build interactive data exploration dashboards without writing code - just configuration.
What is Lumen?
Lumen provides a declarative approach to building data dashboards:
- No-code dashboards: Define everything in YAML
- Data pipelines: Sources → Filters → Transforms → Views
- Interactive exploration: Built-in filters and cross-filtering
- Component library: Reusable sources, transforms, views
- Live updates: Auto-reload and real-time data
Lumen vs Panel vs Lumen AI
| Feature | Lumen Dashboards | Panel | Lumen AI |
|---|---|---|---|
| Approach | Declarative YAML | Imperative Python | Conversational |
| Code Required | No | Yes | No |
| Use Case | Fixed dashboards | Custom apps | Ad-hoc exploration |
| Flexibility | Medium | High | High |
| Development Speed | Very fast | Medium | Very fast |
Use Lumen when:
- Building standard data exploration dashboards
- Working with non-programmers
- Prototyping rapidly through configuration
- Needing reproducible dashboard specifications
Use Panel when:
- Need fine-grained control over components
- Building custom application logic
- Creating novel interactions
Use Lumen AI when:
- Users need ad-hoc exploration
- Questions vary unpredictably
- Enabling self-service analytics
Quick Start
Installation
```bash
pip install lumen
```
Your First Dashboard
File: dashboard.yaml
```yaml
sources:
  data:
    type: file
    tables:
      penguins: https://datasets.holoviz.org/penguins/v1/penguins.csv

pipelines:
  main:
    source: data
    table: penguins
    filters:
      - type: widget
        field: species

layouts:
  - title: Penguin Explorer
    views:
      - type: hvplot
        pipeline: main
        kind: scatter
        x: bill_length_mm
        y: bill_depth_mm
        by: species
        title: Bill Dimensions
```
Launch:
```bash
lumen serve dashboard.yaml --show
```
Core Concepts
1. Sources
Data sources provide tables for your dashboard.
Supported sources:
- File: CSV, Parquet, Excel, JSON
- Database: PostgreSQL, DuckDB, SQLite
- REST API: JSON endpoints
- Intake: Data catalogs
Quick example:
```yaml
sources:
  mydata:
    type: file
    tables:
      sales: ./data/sales.csv
```
See: Data Sources Reference for comprehensive source configuration.
2. Pipelines
Pipelines define data flows: Source → Filters → Transforms → Views
Basic pipeline:
```yaml
pipelines:
  sales_pipeline:
    source: mydata
    table: sales
    filters:
      - type: widget
        field: region
    transforms:
      - type: aggregate
        by: ['category']
        aggregate:
          total_sales: {revenue: sum}
```
Components:
- Filters: Interactive widgets for user input
- Transforms: Data manipulation (filter, aggregate, sort, SQL)
- Views: Visualizations and tables
3. Filters
Add interactive controls:
```yaml
filters:
  # Dropdown select
  - type: widget
    field: category

  # Multi-select
  - type: widget
    field: region
    multiple: true

  # Date range
  - type: widget
    field: date
    widget: date_range_slider

  # Numeric slider
  - type: param
    parameter: min_revenue
    widget_type: FloatSlider
    start: 0
    end: 100000
```
4. Transforms
Process data in pipelines:
Common transforms:
- columns: Select specific columns
- query: Filter rows with a pandas query
- aggregate: Group and aggregate
- sort: Sort data
- sql: Custom SQL queries
Example:
```yaml
transforms:
  - type: columns
    columns: ['date', 'region', 'revenue']
  - type: query
    query: "revenue > 1000"
  - type: aggregate
    by: ['region']
    aggregate:
      total: {revenue: sum}
      avg: {revenue: mean}
```
See: Data Transforms Reference for all transform types.
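The query and aggregate transforms above behave like their pandas counterparts. As a rough mental model (a sketch with made-up sample data, not Lumen's actual implementation), the example pipeline corresponds to:

```python
import pandas as pd

# Hypothetical sales data matching the columns used in the YAML above.
df = pd.DataFrame({
    "date": ["2024-01-01", "2024-01-02", "2024-01-02"],
    "region": ["East", "East", "West"],
    "revenue": [1500.0, 500.0, 3000.0],
})

# Equivalent of the query transform: keep rows where revenue > 1000.
filtered = df.query("revenue > 1000")

# Equivalent of the aggregate transform: group by region,
# computing a sum and a mean of revenue.
result = filtered.groupby("region").agg(
    total=("revenue", "sum"),
    avg=("revenue", "mean"),
).reset_index()

print(result)
```

With this sample, the 500-revenue row is dropped by the query step, leaving one row per region in the aggregated frame.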
5. Views
Visualize data:
View types:
- hvplot: Interactive plots (line, scatter, bar, etc.)
- table: Data tables
- indicator: KPI metrics
- vega: Vega-Lite specifications
- altair: Altair charts
- plotly: Plotly charts
Example:
```yaml
views:
  - type: hvplot
    pipeline: main
    kind: line
    x: date
    y: revenue
    by: category
  - type: indicator
    pipeline: main
    field: total_revenue
    title: Total Sales
    format: '${value:,.0f}'
```
See: Views Reference for all view types and options.
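The indicator format strings use Python's format-spec mini-language. Assuming the metric is substituted as {value}, plain str.format shows what each spec renders:

```python
# The format specs used by indicators in this document, applied with
# str.format. Assumes the indicator substitutes the metric as {value}.
total = '${value:,.0f}'.format(value=1234567.89)  # '$1,234,568'
count = '{value:,.0f}'.format(value=8450)         # '8,450'
avg = '${value:.2f}'.format(value=72.5)           # '$72.50'

print(total, count, avg)
```

The `,` inserts thousands separators, `.0f` rounds to whole numbers, and `.2f` keeps two decimal places.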
6. Layouts
Arrange views on the page:
```yaml
layouts:
  - title: Overview
    layout: [[0, 1, 2], [3], [4, 5]]  # Grid positions
    views:
      - type: indicator
        # View 0 config...
      - type: indicator
        # View 1 config...
      - type: hvplot
        # View 2 config...
```
Layout types:
- Grid: e.g. [[0, 1], [2, 3]]
- Tabs: Multiple layouts become tabs
- Responsive: Adapts to screen size
See: Layouts Reference for advanced layout patterns.
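Since multiple layouts render as tabs, a two-tab dashboard is simply two entries under layouts (a sketch; the pipeline, field, and title names are placeholders):

```yaml
layouts:
  # Each top-level layout entry becomes its own tab.
  - title: Overview        # first tab
    views:
      - type: indicator
        pipeline: main
        field: revenue
  - title: Details         # second tab
    views:
      - type: table
        pipeline: main
        page_size: 20
```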
Common Patterns
Pattern 1: KPI Dashboard
```yaml
sources:
  metrics:
    type: file
    tables:
      data: ./metrics.csv

pipelines:
  kpis:
    source: metrics
    table: data
    transforms:
      - type: aggregate
        aggregate:
          total_revenue: {revenue: sum}
          total_orders: {orders: sum}
          avg_order_value: {revenue: mean}

layouts:
  - title: KPIs
    layout: [[0, 1, 2]]
    views:
      - type: indicator
        pipeline: kpis
        field: total_revenue
        format: '${value:,.0f}'
      - type: indicator
        pipeline: kpis
        field: total_orders
        format: '{value:,.0f}'
      - type: indicator
        pipeline: kpis
        field: avg_order_value
        format: '${value:.2f}'
```
Pattern 2: Filtered Exploration
```yaml
pipelines:
  explorer:
    source: mydata
    table: sales
    filters:
      - type: widget
        field: region
        label: Region
      - type: widget
        field: category
        label: Category
        multiple: true
      - type: widget
        field: date
        widget: date_range_slider

views:
  - type: hvplot
    kind: scatter
    x: price
    y: quantity
    by: category
  - type: table
    page_size: 20
```
Pattern 3: Multi-Source Dashboard
```yaml
sources:
  sales_db:
    type: postgres
    connection_string: postgresql://localhost/sales
    tables: [orders, customers]
  inventory_file:
    type: file
    tables:
      stock: ./inventory.csv

pipelines:
  sales_pipeline:
    source: sales_db
    table: orders
  inventory_pipeline:
    source: inventory_file
    table: stock
```
Pattern 4: Cross-Filtering
```yaml
pipelines:
  main:
    source: data
    table: sales
    filters:
      - type: widget
        field: region

layouts:
  - title: Analysis
    views:
      # Clicking a bar filters the other views
      - type: hvplot
        pipeline: main
        kind: bar
        x: category
        y: revenue
        selection_group: category_filter
      # Responds to the selection above
      - type: hvplot
        pipeline: main
        kind: scatter
        x: price
        y: quantity
        selection_group: category_filter
```
Pattern 5: SQL Transform
```yaml
transforms:
  - type: sql
    query: |
      SELECT
        region,
        category,
        SUM(revenue) AS total_revenue,
        COUNT(*) AS order_count,
        AVG(revenue) AS avg_order_value
      FROM table
      WHERE date >= '2024-01-01'
      GROUP BY region, category
      HAVING total_revenue > 10000
      ORDER BY total_revenue DESC
```
Python API
While Lumen is designed for YAML, you can also use Python:
```python
from lumen.sources import FileSource
from lumen.pipeline import Pipeline
from lumen.views import hvPlotView
from lumen.dashboard import Dashboard

# Create source
source = FileSource(tables={'sales': './data/sales.csv'})

# Create pipeline
pipeline = Pipeline(source=source, table='sales')

# Create view
view = hvPlotView(
    pipeline=pipeline,
    kind='scatter',
    x='price',
    y='quantity'
)

# Create dashboard
dashboard = Dashboard(
    pipelines={'main': pipeline},
    layouts=[view]
)

# Serve
dashboard.servable()
```
See: Python API Reference for detailed API usage.
Configuration
Global Config
```yaml
config:
  title: My Dashboard
  theme: dark  # or 'default', 'material'
  sizing_mode: stretch_width
  logo: ./logo.png
  favicon: ./favicon.ico
  layout: column  # or 'grid', 'tabs'
```
Themes
```yaml
config:
  theme: material
  theme_json:
    palette:
      primary: '#00aa41'
      secondary: '#616161'
```
Authentication
```bash
# Serve with auth
lumen serve dashboard.yaml \
  --oauth-provider=generic \
  --oauth-key=${OAUTH_KEY} \
  --oauth-secret=${OAUTH_SECRET}
```
Deployment
Development
```bash
# Local with auto-reload
lumen serve dashboard.yaml --autoreload --show

# Specific port
lumen serve dashboard.yaml --port 5007
```
Production
```bash
# Production server
panel serve dashboard.yaml \
  --port 80 \
  --num-procs 4 \
  --allow-websocket-origin=analytics.company.com
```
Docker
```dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY dashboard.yaml data/ ./

CMD ["lumen", "serve", "dashboard.yaml", "--port", "5006", "--address", "0.0.0.0"]
```
See: Deployment Guide for production deployment best practices.
Best Practices
1. Source Organization
```yaml
# ✅ Good: Descriptive names
sources:
  sales_database:
    type: postgres
    tables: [orders, customers]
  inventory_files:
    type: file
    tables:
      stock: ./inventory.csv

# ❌ Bad: Generic names
sources:
  db1:
    type: postgres
  file1:
    type: file
```
2. Pipeline Reusability
```yaml
# Define reusable pipelines
pipelines:
  base_sales:
    source: data
    table: sales
    filters:
      - type: widget
        field: region

  summary_sales:
    pipeline: base_sales  # Extends base_sales
    transforms:
      - type: aggregate
        by: ['category']
        aggregate:
          total: {revenue: sum}
```
3. Performance
```yaml
# Limit data size for large tables
sources:
  bigdata:
    type: postgres
    tables:
      events: "SELECT * FROM events WHERE date >= '2024-01-01' LIMIT 100000"
```
4. User Experience
```yaml
# Provide clear labels and formatting
filters:
  - type: widget
    field: region
    label: "Sales Region"  # Clear label

views:
  - type: indicator
    field: revenue
    title: "Total Revenue"
    format: '${value:,.0f}'  # Formatted display
```
Troubleshooting
Dashboard Won't Load
```bash
# Check YAML syntax
python -c "import yaml; yaml.safe_load(open('dashboard.yaml'))"

# Run with debug logging
lumen serve dashboard.yaml --log-level=debug
```
Data Not Showing
- Verify data source path/connection
- Check table names match YAML config
- Ensure columns referenced exist in data
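One quick way to rule these out for a file source is to load the table directly and compare its columns against the YAML (a sketch; the expected column names are placeholders, and the in-memory CSV stands in for your real file):

```python
import io
import pandas as pd

# Expected columns are placeholders; use the names referenced in your YAML.
expected = {"date", "region", "revenue"}

# In practice: df = pd.read_csv("./data/sales.csv")
# Here an in-memory sample keeps the check self-contained.
csv_text = "date,region,revenue\n2024-01-01,East,1500\n"
df = pd.read_csv(io.StringIO(csv_text))

# A non-empty `missing` set usually means a name mismatch with the YAML.
missing = expected - set(df.columns)
assert not missing, f"Missing columns: {sorted(missing)}"
assert not df.empty, "Table loaded but has no rows"
print(f"OK: {len(df)} rows, columns: {list(df.columns)}")
```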
Performance Issues
- Limit query results (use SQL WHERE clauses)
- Reduce number of rows displayed
- Use aggregation before visualization
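Combining these tips: filter at the source with a WHERE clause, then aggregate in the pipeline so views only ever render a small frame (a sketch; the table and column names are illustrative):

```yaml
sources:
  bigdata:
    type: postgres
    tables:
      # Limit rows at the source rather than in the browser.
      events: "SELECT * FROM events WHERE date >= '2024-01-01'"

pipelines:
  daily_summary:
    source: bigdata
    table: events
    transforms:
      # Aggregate before visualization: one row per date.
      - type: aggregate
        by: ['date']
        aggregate:
          total: {value: sum}
```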
See: Troubleshooting Guide for common issues.
Progressive Learning Path
Level 1: Basics
- Create simple file-based dashboard
- Add filters
- Create basic views
Resources:
- Quick Start (this doc)
- Data Sources Reference
Level 2: Transforms
- Filter and aggregate data
- Use SQL transforms
- Chain multiple transforms
Level 3: Advanced Layouts
- Multi-page dashboards
- Cross-filtering
- Custom themes
Level 4: Production
- Database integration
- Authentication
- Deployment
Additional Resources
Documentation
- Data Sources Reference - All source types and configuration
- Data Transforms Reference - Complete transform reference
- Views Reference - All visualization types
- Layouts Reference - Layout patterns and organization
- Python API Reference - Programmatic dashboard creation
- Deployment Guide - Production deployment
- Examples - Complete dashboard examples
- Troubleshooting Guide - Common issues
Use Cases
Business Intelligence
- Executive dashboards
- Sales analytics
- Financial reporting
- Operational metrics
Data Exploration
- Dataset overview
- Interactive filtering
- Drill-down analysis
- Comparative views
Real-Time Monitoring
- Live data feeds
- Alert dashboards
- System metrics
- Performance tracking
Reporting
- Scheduled reports
- Standardized views
- Shareable dashboards
- Embedded analytics
Summary
Lumen enables rapid dashboard development through declarative YAML specifications.
Strengths:
- No Python code required
- Fast development cycle
- Reproducible specifications
- Built-in interactivity
- Standard dashboard patterns
Ideal for:
- Fixed dashboard layouts
- Standard data patterns
- Non-programmer dashboard creators
- Rapid prototyping
Consider alternatives when:
- Need custom application logic → Panel Dashboards
- Need ad-hoc exploration → Lumen AI
- Building novel interactions → Panel Dashboards
Related Skills
- Lumen AI - Conversational data exploration
- Panel Dashboards - Custom Python dashboards
- Plotting Fundamentals - Quick plotting with hvPlot