Oxy Workflow Builder
by oxy-hq
Build Oxy workflows, SQL queries, and agents following best practices. Use when the user asks to create data pipelines, queries, or analysis agents. Enforces hierarchy - semantic queries first, then SQL/workflows, then agents.
---
name: oxy-workflow-builder
description: Build Oxy workflows, SQL queries, and agents following best practices. Use when the user asks to create data pipelines, queries, or analysis agents. Enforces hierarchy - semantic queries first, then SQL/workflows, then agents.
---
Oxy Workflow Builder
You are an expert at building Oxy data workflows, SQL queries, and AI agents. Your role is to help users extract insights from data using the right tool for the job, following a clear hierarchy of approaches.
The Oxy Hierarchy (CRITICAL)
When solving data analysis problems, ALWAYS follow this hierarchy:
1. Semantic Queries (PREFERRED)
Use semantic queries whenever possible - they're the most maintainable and business-friendly approach.
- When to use: The semantic layer has views/topics covering the needed data
- How: Use natural language queries against the semantic engine
- Why preferred:
  - No SQL knowledge required
  - Automatic joins across views
  - Business-friendly terminology
  - Maintained centrally in semantic layer
Before writing SQL or agents, ALWAYS check if semantic layer views exist that can answer the question.
# Start semantic engine
oxy semantic-engine --dev-mode
# Then query in natural language:
"What is the total revenue by restaurant this month?"
"Show me the top 10 customers by order count"
2. SQL Queries & Workflows (FALLBACK)
Use SQL when the semantic layer doesn't cover your needs - when you need custom logic, or the data isn't in the semantic layer yet.
- When to use:
  - Data not yet in semantic layer
  - Complex transformations or calculations
  - ETL/data pipeline operations
  - Need parameterized queries
- SQL Files (*.sql):
  - Single query execution
  - Support Jinja2 templating for parameters
  - Can be dry-run tested
- Workflow Files (*.workflow.yml):
  - Multi-step data pipelines
  - Orchestrate multiple queries
  - Transform and load data
# Run SQL query
oxy run query.sql
# Run with parameters
oxy run query.sql -v year=2024 -v month=12
# Dry run to test
oxy run query.sql --dry-run
# Run workflow
oxy run pipeline.workflow.yml
3. AI Agents (LAST RESORT)
Use agents only when you need AI reasoning - they're the most flexible but least deterministic.
- When to use:
  - Need natural language understanding
  - Complex analysis requiring reasoning
  - Exploratory data analysis
  - Dynamic query generation based on data
- Agent Files (*.agent.yml):
  - Require a question/prompt to run
  - Can access databases and tools
  - Use LLMs for reasoning
# Run agent with a question
oxy run analysis.agent.yml "What are the trends in customer behavior?"
Decision Tree
Use this decision tree when the user asks for data analysis:
Does semantic layer have the data?
├─ YES → Use semantic queries (#1)
└─ NO → Is this a deterministic query/pipeline?
├─ YES → Use SQL/Workflow (#2)
└─ NO → Need AI reasoning?
├─ YES → Use Agent (#3)
└─ NO → Build semantic layer views first, then use semantic queries
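The decision tree above can be sketched as a small helper function (hypothetical, not part of the Oxy CLI) to make the hierarchy explicit:

```python
def choose_tool(in_semantic_layer: bool, deterministic: bool, needs_reasoning: bool) -> str:
    """Mirror the Oxy hierarchy: semantic queries, then SQL/workflows, then agents."""
    if in_semantic_layer:
        return "semantic query"          # 1: preferred
    if deterministic:
        return "sql/workflow"            # 2: fallback for custom logic
    if needs_reasoning:
        return "agent"                   # 3: last resort
    return "build semantic layer views"  # then use semantic queries

print(choose_tool(True, True, False))    # semantic query
```

Note that the "build views first" branch routes back to option 1: expanding the semantic layer is preferred over reaching for a heavier tool.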
Essential Commands
# Validation
oxy validate # Validate agents and workflows
oxy build # Validate semantic layer
# Semantic queries
oxy semantic-engine --dev-mode # Start semantic engine for queries
# SQL execution
oxy run query.sql # Run SQL file
oxy run query.sql --dry-run # Test without executing
oxy run query.sql -v key=value # Run with variables
# Workflows
oxy run pipeline.workflow.yml # Run workflow
# Agents
oxy run agent.agent.yml "question" # Run agent with prompt
# Discovery
find . -name "*.sql" -not -path "*/.*"
find . -name "*.workflow.yml"
find . -name "*.agent.yml"
find semantics/views -name "*.view.yml"
find semantics/topics -name "*.topic.yml"
SQL File Structure
SQL files support Jinja2 templating for dynamic queries:
-- query.sql
-- Description: Brief description of what this query does
-- Variables:
-- - start_date: Start date for the date range
-- - end_date: End date for the date range
SELECT
date,
customer_id,
SUM(amount) as total_amount
FROM {{ databases.clickhouse.schema }}.orders
WHERE date BETWEEN '{{ start_date }}' AND '{{ end_date }}'
GROUP BY date, customer_id
ORDER BY total_amount DESC;
SQL Best Practices
- Add header comments with description and required variables
- Use Jinja2 variables for parameterization: {{ variable_name }}
- Reference databases via context: {{ databases.db_name.schema }}.table
- Test with dry-run before executing: oxy run query.sql --dry-run
- Keep queries focused - one clear purpose per file
- Name descriptively - monthly_revenue_by_restaurant.sql, not query1.sql
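Oxy performs this substitution with Jinja2 at run time. As a rough, stdlib-only sketch of what {{ variable_name }} expansion does (real Jinja2 also handles dotted names, filters, and control flow):

```python
import re

def render(sql: str, **params: str) -> str:
    """Very rough stand-in for Jinja2 variable substitution ({{ name }} only)."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(params[m.group(1)]), sql)

query = "SELECT * FROM orders WHERE date BETWEEN '{{ start_date }}' AND '{{ end_date }}'"
print(render(query, start_date="2024-01-01", end_date="2024-12-31"))
```

This is also why quoting matters in the template itself: the engine substitutes text verbatim, so date literals need their surrounding quotes in the SQL file.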
Common SQL Patterns
Date filtering with variables:
WHERE created_at >= '{{ start_date }}'
AND created_at < '{{ end_date }}'
Dynamic schema references:
FROM {{ databases.clickhouse.restaurant_analytics }}.orders
Conditional logic:
{% if include_cancelled %}
WHERE status IN ('completed', 'cancelled')
{% else %}
WHERE status = 'completed'
{% endif %}
Workflow File Structure
Workflows orchestrate multi-step data operations:
name: my_workflow
description: "What this workflow accomplishes"
steps:
- name: step_1
description: "What this step does"
sql: |
SELECT * FROM source_table
WHERE condition = true
- name: step_2
description: "Transform data"
sql: |
SELECT
column1,
SUM(column2) as total
FROM {{ steps.step_1.result }}
GROUP BY column1
- name: step_3
description: "Load to destination"
sql: |
INSERT INTO destination_table
SELECT * FROM {{ steps.step_2.result }}
Workflow Best Practices
- Name steps descriptively - clear purpose for each step
- Add descriptions - explain what each step accomplishes
- Reference previous steps - {{ steps.step_name.result }}
- Keep workflows focused - single pipeline per file
- Test incrementally - validate each step's SQL separately first
- Document dependencies - note what data/tables are required
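To support testing incrementally, one quick sanity check before running a workflow is to verify that every {{ steps.X.result }} reference points at a defined step. A stdlib-only sketch (the helper and its name are hypothetical, not an Oxy feature):

```python
import re

def undefined_step_refs(steps: list[dict]) -> list[str]:
    """Return step names referenced via {{ steps.X.result }} that are never defined."""
    defined = {s["name"] for s in steps}
    missing = []
    for s in steps:
        for ref in re.findall(r"\{\{\s*steps\.(\w+)\.result\s*\}\}", s.get("sql", "")):
            if ref not in defined:
                missing.append(ref)
    return missing

steps = [
    {"name": "extract", "sql": "SELECT * FROM t"},
    {"name": "load", "sql": "INSERT INTO d SELECT * FROM {{ steps.extrct.result }}"},  # typo
]
print(undefined_step_refs(steps))  # ['extrct']
```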
Common Workflow Patterns
ETL Pipeline:
steps:
- name: extract
description: "Extract raw data from source"
sql: SELECT * FROM source_table WHERE date = '{{ date }}'
- name: transform
description: "Clean and transform data"
sql: |
SELECT
TRIM(name) as name,
CAST(amount as DECIMAL(10,2)) as amount
FROM {{ steps.extract.result }}
- name: load
description: "Load to analytics table"
sql: |
INSERT INTO analytics.processed_data
SELECT * FROM {{ steps.transform.result }}
Aggregation Pipeline:
steps:
- name: daily_totals
description: "Calculate daily totals"
sql: |
SELECT date, SUM(amount) as total
FROM transactions
GROUP BY date
- name: moving_average
description: "Add 7-day moving average"
sql: |
SELECT
date,
total,
AVG(total) OVER (
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
) as moving_avg_7d
FROM {{ steps.daily_totals.result }}
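The window clause above computes a trailing 7-day average: each row averages itself and up to six preceding rows, so early rows average over fewer days. The equivalent logic in plain Python (sample totals invented for illustration):

```python
def trailing_avg(values: list[float], window: int = 7) -> list[float]:
    """AVG(x) OVER (ORDER BY date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW)."""
    out = []
    for i in range(len(values)):
        win = values[max(0, i - window + 1): i + 1]  # at most `window` trailing rows
        out.append(sum(win) / len(win))
    return out

totals = [10, 20, 30, 40, 50, 60, 70, 80]
print(trailing_avg(totals))  # [10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 50.0]
```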
Agent File Structure
Agents use AI for analysis requiring reasoning:
name: my_agent
description: "What this agent analyzes"
model: "claude-3-5-sonnet-20241022" # Or other supported models
system_prompt: |
You are a data analyst specializing in [domain].
Your role is to:
- Analyze the provided data
- Identify patterns and insights
- Provide actionable recommendations
Be concise and focus on business impact.
tools:
- type: database
database: clickhouse
description: "Access to transactional database"
- type: python
description: "For calculations and data manipulation"
# Optional: Pre-load context
context:
- type: sql
query: |
SELECT * FROM summary_table
WHERE date >= CURRENT_DATE - INTERVAL 30 DAY
Agent Best Practices
- Clear system prompt - define the agent's expertise and role
- Specific tools - only include tools the agent needs
- Focused purpose - one type of analysis per agent
- Pre-load context - provide relevant data upfront when possible
- Test with real questions - validate with actual use cases
- Document expected questions - add examples in description
Common Agent Patterns
Trend Analysis Agent:
name: trend_analyzer
description: "Analyzes trends and forecasts future patterns"
system_prompt: |
You are a trend analysis expert. Given time-series data:
1. Identify significant trends (growth, decline, seasonality)
2. Highlight anomalies or outliers
3. Provide forecasts when appropriate
4. Explain business implications
tools:
- type: database
database: clickhouse
- type: python
description: "For statistical analysis"
Customer Insights Agent:
name: customer_insights
description: "Analyzes customer behavior and segments"
system_prompt: |
You are a customer analytics expert. Analyze customer data to:
- Segment customers by behavior
- Identify high-value customers
- Spot churn risks
- Recommend retention strategies
tools:
- type: database
database: clickhouse
Building Process
Before Starting: Check the Hierarchy
- Check semantic layer first:
  # List available views
  find semantics/views -name "*.view.yml"
  # List available topics
  find semantics/topics -name "*.topic.yml"
  # If views exist, use semantic queries!
  oxy semantic-engine --dev-mode
- If semantic layer insufficient:
  - Missing data? Consider building views first (use oxy-semantic-layer skill)
  - Custom logic needed? Proceed to SQL/workflow
- If SQL/workflow insufficient:
  - Need AI reasoning? Proceed to agents
  - Otherwise, keep using SQL
Step 1: Understand Requirements
- What data is needed?
- Is it available in semantic layer?
- Is the query deterministic or needs reasoning?
- Are there variables/parameters?
- Is it a one-off query or repeatable pipeline?
Step 2: Choose the Right Tool
Based on the hierarchy:
- Semantic query - if data is in semantic layer
- SQL file - if it's a single deterministic query
- Workflow - if it's a multi-step pipeline
- Agent - if AI reasoning is required
Step 3: Build Incrementally
For SQL:
- Write the query with parameter placeholders
- Add header comments (description, variables)
- Test with --dry-run
- Run with actual parameters
- Verify results
For Workflows:
- Design the pipeline (extract → transform → load)
- Write SQL for each step
- Test each step's SQL separately first
- Combine into workflow file
- Run the complete workflow
- Verify final output
For Agents:
- Define the agent's purpose and expertise
- Write a clear system prompt
- Add necessary tools (database, python, etc.)
- Test with sample questions
- Refine based on results
Step 4: Validate and Test
# Validate configuration
oxy validate
# For SQL: dry-run first
oxy run query.sql --dry-run
# Then run for real
oxy run query.sql -v param=value
# For workflows: run and monitor
oxy run workflow.workflow.yml
# For agents: test with real questions
oxy run agent.agent.yml "Your test question"
Quality Guidelines
Naming Conventions
- Use snake_case for all file names
- Be descriptive and specific
- Include the purpose: monthly_revenue_report.sql, customer_etl_pipeline.workflow.yml
- Avoid generic names: query1.sql, agent.agent.yml
Documentation
- SQL files: Header comments with description and variables
- Workflows: Description for workflow and each step
- Agents: Clear description of what the agent analyzes
Testing
- Always validate first: oxy validate
- SQL files: Use --dry-run before executing
- Test incrementally: One step at a time for workflows
- Real questions: Test agents with actual use cases
Organization
- Group related files in directories
- Separate by domain: revenue/, customers/, operations/
- Keep semantic layer separate: semantics/views/, semantics/topics/
Common Patterns by Use Case
Reporting (Use Semantic Queries!)
If views exist, use semantic engine:
oxy semantic-engine --dev-mode
# Then query: "Show me total revenue by month this year"
If views don't exist, create them first (oxy-semantic-layer skill), then use semantic queries.
Data Pipeline (Use Workflow)
name: daily_aggregation_pipeline
steps:
- name: extract_daily_data
sql: SELECT * FROM raw_data WHERE date = '{{ date }}'
- name: aggregate
sql: SELECT category, SUM(amount) FROM {{ steps.extract_daily_data.result }} GROUP BY category
- name: load
sql: INSERT INTO aggregated_data SELECT * FROM {{ steps.aggregate.result }}
Parameterized Query (Use SQL)
-- monthly_report.sql
-- Variables: year, month
SELECT
customer_id,
COUNT(*) as order_count,
SUM(amount) as total_amount
FROM orders
WHERE YEAR(date) = {{ year }}
AND MONTH(date) = {{ month }}
GROUP BY customer_id;
Exploratory Analysis (Use Agent)
name: data_explorer
description: "Explores data to find insights"
system_prompt: |
You are a data explorer. Examine the data and:
1. Summarize key statistics
2. Identify interesting patterns
3. Suggest areas for deeper analysis
tools:
- type: database
database: clickhouse
- type: python
Troubleshooting
SQL File Issues
Error: Variable not defined
- Cause: Missing variable in command
- Fix: Add -v variable_name=value to the command
Error: Table not found
- Cause: Incorrect table reference
- Fix: Check the schema in the .databases/ directory
Error: SQL syntax error
- Cause: Invalid SQL
- Fix: Test with --dry-run, check SQL syntax for your database
Workflow Issues
Error: Step result not found
- Cause: Reference to non-existent step
- Fix: Check step names match exactly: {{ steps.step_name.result }}
Error: Step fails partway
- Cause: SQL error in one step
- Fix: Test each step's SQL separately first
Agent Issues
Error: Agent requires prompt
- Cause: No question provided
- Fix: Add a question: oxy run agent.agent.yml "Your question"
Error: Tool not available
- Cause: Referenced tool not configured
- Fix: Check database names in config.yml, verify tool types
Remember the Hierarchy!
When a user asks for data analysis:
- Check semantic layer first - Can we answer with semantic queries?
- Use SQL/workflow if needed - Is it deterministic logic?
- Use agents only when necessary - Does it require AI reasoning?
The best solution is the simplest solution that works. Don't use agents when SQL will do. Don't use SQL when semantic queries will do.
DeepWiki Fallback
For Oxy features not covered here, query DeepWiki with:
"I am a user of this project, not its maintainer. Please prioritize looking at the project docs, examples and json-schemas to answer my question: [your question]"
Only search the oxy-hq/oxy repository.
Documentation Links
- Oxy Documentation: https://docs.oxy.tech/
- Workflows: https://docs.oxy.tech/learn-about-oxy/automations
- SQL Queries: https://docs.oxy.tech/learn-about-oxy/queries
- Agents: https://docs.oxy.tech/learn-about-oxy/agents
- Semantic Layer: https://docs.oxy.tech/learn-about-oxy/semantic-layer
