Analyze Logfire Data

by dsfaccini


---
name: analyze-logfire-data
description: Query Logfire database and generate rich dashboards from results. Use when analyzing telemetry data, creating visualizations, or exploring Logfire records.
argument-hint: '[analysis goal]'
---


Query the Logfire database and generate interactive dashboards from telemetry data.

Prerequisites

  • LOGFIRE_READ_TOKEN environment variable must be set
  • Run queries using: uv run --with logfire python logfire_query.py
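A quick preflight check before running queries can save a confusing failure later; this sketch only inspects the environment variable named in the prerequisites above:

```python
import os

# The skill reads its credentials from LOGFIRE_READ_TOKEN (per the prerequisites).
token = os.environ.get("LOGFIRE_READ_TOKEN")
if token is None:
    print("LOGFIRE_READ_TOKEN is not set; queries will fail")
else:
    print("LOGFIRE_READ_TOKEN is set")
```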

Workflow

1. Understand the Schema

First, use the mcp__logfire__schema_reference tool to understand the database schema. Key tables and columns:

  • records table contains spans and logs
  • Important columns: message, span_name, trace_id, exception_type, exception_message, start_timestamp, service_name, attributes
  • Use -> and ->> operators for JSON fields in attributes
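For example, a query that groups on a JSON field inside attributes might look like the sketch below; the key 'http.method' is a hypothetical example, so substitute keys that actually appear in your data:

```python
# Build a query that extracts a JSON field from the attributes column.
# ->> returns the field as text; -> returns it as JSON.
# The key 'http.method' is only an example key, not guaranteed to exist.
query = """
SELECT attributes->>'http.method' AS method, count(*) AS count
FROM records
WHERE attributes->>'http.method' IS NOT NULL
GROUP BY method
ORDER BY count DESC
"""
```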

2. Query the Data

Use logfire_query.py from this skill directory to execute queries:

from logfire_query import query_sync, load_results

# Execute query and save results
query_sync(
    '''
    SELECT span_name, count(*) as count
    FROM records
    WHERE start_timestamp > now() - INTERVAL '1 hour'
    GROUP BY span_name
    ORDER BY count DESC
    LIMIT 20
    ''',
    'results.json'
)

# Load results for analysis
rows = load_results('results.json')

Alternatively, use the mcp__logfire__arbitrary_query tool directly for simpler queries.
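Loaded results are plain rows, so quick summaries need nothing beyond the standard library. A small sketch, assuming rows shaped like the span_name/count query above (the sample values are made up):

```python
# Sample rows shaped like the span_name/count query above (illustrative data).
rows = [
    {"span_name": "GET /users", "count": 120},
    {"span_name": "POST /orders", "count": 45},
    {"span_name": "GET /health", "count": 300},
]

total = sum(r["count"] for r in rows)

# Share of total span volume per span_name, highest first.
shares = {
    r["span_name"]: r["count"] / total
    for r in sorted(rows, key=lambda r: -r["count"])
}
for name, share in shares.items():
    print(f"{name}: {share:.1%}")
```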

3. Generate Dashboards

Write custom Plotly code for rich, interactive dashboards. Do NOT use predefined chart utilities; write the visualization code directly.

Dashboard Guidelines

Structure:

  • Use plotly.subplots.make_subplots() for multi-panel layouts
  • Include multiple visualization types: bar, line, scatter, heatmap, pie
  • Save as HTML for full interactivity

Styling:

  • Consistent color scheme (use plotly.express.colors palettes)
  • Clear, descriptive titles for each subplot
  • Proper axis labels with units
  • Legend placement that doesn't obscure data

Interactivity:

  • Enable hover tooltips with relevant data
  • Support zoom/pan for time series
  • Add range sliders for date filtering where appropriate

Example Dashboard Code

import plotly.graph_objects as go
from plotly.subplots import make_subplots
import plotly.express as px

# Create multi-panel dashboard
fig = make_subplots(
    rows=2, cols=2,
    subplot_titles=('Requests by Endpoint', 'Error Rate Over Time',
                    'Response Time Distribution', 'Top Error Types'),
    specs=[[{'type': 'bar'}, {'type': 'scatter'}],
           [{'type': 'histogram'}, {'type': 'pie'}]]
)

# Sample data for the first panel; replace with real query results
endpoints = ['GET /users', 'POST /orders', 'GET /health']
counts = [120, 45, 300]

# Add traces to each subplot
fig.add_trace(
    go.Bar(x=endpoints, y=counts, marker_color=px.colors.qualitative.Set2),
    row=1, col=1
)

# ... add more traces ...

fig.update_layout(
    height=800,
    showlegend=True,
    title_text='Logfire Telemetry Dashboard',
    title_x=0.5
)

fig.write_html('dashboard.html')
print('Dashboard saved to dashboard.html')

Common Query Patterns

Find Exceptions

SELECT exception_type, exception_message, count(*) as count
FROM records
WHERE exception_type IS NOT NULL
  AND start_timestamp > now() - INTERVAL '24 hours'
GROUP BY exception_type, exception_message
ORDER BY count DESC
LIMIT 20

Trace Latency Analysis

SELECT span_name,
       avg(duration) as avg_duration,
       percentile_cont(0.95) WITHIN GROUP (ORDER BY duration) as p95
FROM records
WHERE start_timestamp > now() - INTERVAL '1 hour'
GROUP BY span_name
ORDER BY avg_duration DESC
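If raw duration values are pulled instead, the same average and p95 can be approximated client-side with the standard library. A sketch with made-up sample data, assuming durations are in seconds:

```python
import statistics

# Durations in seconds for one span_name (illustrative sample data).
durations = [0.12, 0.30, 0.05, 0.47, 0.22, 0.09, 0.15, 0.41, 0.33, 0.08,
             0.19, 0.27, 0.11, 0.52, 0.24, 0.07, 0.38, 0.16, 0.29, 0.44]

avg = statistics.mean(durations)

# quantiles(n=100) returns the 1st..99th percentile cut points;
# index 94 corresponds to the 95th percentile.
p95 = statistics.quantiles(durations, n=100)[94]
print(f"avg={avg:.3f}s p95={p95:.3f}s")
```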

Service Dependencies

SELECT service_name, span_name, count(*) as calls
FROM records
WHERE start_timestamp > now() - INTERVAL '1 hour'
GROUP BY service_name, span_name
ORDER BY calls DESC
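The calls query returns flat rows; grouping them by service gives a quick per-service view of what each service calls. A sketch over sample rows shaped like that query's output:

```python
from collections import defaultdict

# Rows shaped like the service-dependency query above (illustrative data).
rows = [
    {"service_name": "api", "span_name": "GET /users", "calls": 120},
    {"service_name": "api", "span_name": "db.query", "calls": 300},
    {"service_name": "worker", "span_name": "db.query", "calls": 80},
]

# Map each service to its spans and call counts.
by_service = defaultdict(dict)
for r in rows:
    by_service[r["service_name"]][r["span_name"]] = r["calls"]

for service, spans in by_service.items():
    print(service, spans)
```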

Output

  • Save dashboards as .html files for interactivity
  • Include the file path in your response so the user can open it
  • For quick insights, print summary statistics to stdout

Skill Information

Category: Data
Last Updated: 1/21/2026