---
name: user-research-synthesis
description: Specialized skill for synthesizing qualitative user research into actionable insights. Analyzes interview transcripts, extracts patterns and themes, identifies pain points, creates affinity diagrams, and generates persona attributes from research data.
allowed-tools: Read, Grep, Write, Bash, Edit, Glob
---
# User Research Synthesis Skill
Synthesize qualitative user research data into actionable product insights with thematic analysis and evidence-based recommendations.
## Overview
This skill provides comprehensive capabilities for transforming raw user research data into structured insights. It supports interview transcript analysis, survey response synthesis, support ticket mining, and cross-source research aggregation.
## Capabilities
### Transcript Analysis
- Analyze interview transcripts for patterns and themes
- Extract key quotes and supporting evidence
- Identify user pain points, needs, and goals
- Tag and categorize research findings
- Calculate insight confidence levels based on evidence
### Thematic Analysis
- Create affinity diagrams from research data
- Build thematic maps showing relationships
- Identify emerging patterns across participants
- Cluster related findings into themes
- Prioritize themes by frequency and impact
### Persona Development
- Generate persona attributes from research data (see the sketch after this list)
- Identify user segments and archetypes
- Map behaviors, motivations, and frustrations
- Create Jobs-to-be-Done statements per persona
- Validate personas against quantitative data
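A first pass at persona attributes can be mechanical: group coded findings by participant segment and collect recurring themes as candidate frustrations and positives. The sketch below is illustrative only; it assumes the structured coding template shown under Usage Patterns, and the function and field names are not part of the skill.

```javascript
// Illustrative sketch: roll coded findings up into draft persona
// attributes per segment. Assumes transcripts coded in the structured
// template shown under "Usage Patterns" below.
function draftPersonaAttributes(codedTranscripts) {
  const bySegment = {};
  for (const t of codedTranscripts) {
    const seg = (bySegment[t.participant.segment] ??= {
      frustrations: new Set(),
      positives: new Set(),
    });
    for (const f of t.findings) {
      const bucket = f.sentiment === 'negative' ? 'frustrations' : 'positives';
      seg[bucket].add(f.theme);
    }
  }
  return bySegment; // e.g. { power_user: { frustrations: Set(...), ... } }
}
```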
### Research Aggregation
- Synthesize research across multiple sources
- Combine surveys, interviews, and support tickets
- Track sentiment trends over time
- Calculate statistical confidence in findings (see the sketch after this list)
- Generate research repository documentation
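For the statistical-confidence capability, a simple proportion estimate is often enough. A minimal sketch, assuming findings are tallied per participant; the function name and the 95% level are illustrative choices, not part of the skill.

```javascript
// Minimal sketch: 95% confidence interval for the share of participants
// who exhibit a finding, using the normal approximation (crude at the
// small n typical of interview studies, but useful as a sanity check).
function findingConfidence(participantsWithFinding, totalParticipants) {
  const p = participantsWithFinding / totalParticipants;
  const margin = 1.96 * Math.sqrt((p * (1 - p)) / totalParticipants);
  return {
    prevalence: p,
    low: Math.max(0, p - margin),
    high: Math.min(1, p + margin),
  };
}

// 9 of 12 interviewees mentioned onboarding friction:
console.log(findingConfidence(9, 12)); // prevalence 0.75, CI roughly [0.51, 1.0]
```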
## Prerequisites
### Required Tools
- Text processing and NLP capabilities
- Spreadsheet or structured data handling
- Document generation for reports
### Input Data Formats
Supported formats:
- Interview transcripts (.txt, .md, .docx)
- Survey exports (.csv, .xlsx)
- Support ticket exports (.csv, .json)
- User feedback logs (.json, .csv)
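As a rough sketch of ingestion for the plain-text formats (the directory layout and function name are assumptions; .docx, .csv, and .xlsx inputs would need format-specific parsers):

```javascript
const fs = require('fs');
const path = require('path');

// Minimal sketch: load plain-text and markdown transcripts from a directory.
function loadTranscripts(dir) {
  return fs.readdirSync(dir)
    .filter((f) => ['.txt', '.md'].includes(path.extname(f)))
    .map((f) => ({
      transcriptId: path.basename(f, path.extname(f)),
      text: fs.readFileSync(path.join(dir, f), 'utf8'),
    }));
}
```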
## Usage Patterns
### Interview Transcript Analysis
```markdown
## Analysis Framework
### Step 1: Initial Coding
For each transcript:
1. Read through completely for context
2. Highlight significant statements
3. Apply initial codes (open coding)
4. Note participant metadata
### Step 2: Pattern Recognition
Across transcripts:
1. Group similar codes
2. Identify recurring themes
3. Note frequency of mentions
4. Track contradicting evidence
### Step 3: Insight Generation
For each theme:
1. Define the insight clearly
2. List supporting evidence (3+ quotes)
3. Assess confidence level
4. Note actionable implications
```
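Step 1's open coding is analyst-driven, but the data shape it produces can be illustrated with a naive keyword tagger. This is a toy stand-in for the manual coding pass, not the skill's actual method; the codebook entries are invented for the example.

```javascript
// Naive keyword-based open coding, purely illustrative; in practice the
// codes come from an analyst (or LLM pass) reading each transcript.
const CODEBOOK = {
  onboarding_friction: ['confusing', 'no idea where to start', 'lost'],
  pricing_concern: ['expensive', 'cost', 'price'],
};

function applyInitialCodes(transcriptText) {
  const findings = [];
  const text = transcriptText.toLowerCase();
  for (const [code, keywords] of Object.entries(CODEBOOK)) {
    for (const kw of keywords) {
      if (text.includes(kw)) {
        findings.push({ code, matchedOn: kw });
        break; // one match per code is enough for a first pass
      }
    }
  }
  return findings;
}
```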
### Structured Coding Template
```json
{
  "transcript_id": "INT-001",
  "participant": {
    "id": "P001",
    "segment": "power_user",
    "tenure": "2_years"
  },
  "findings": [
    {
      "code": "onboarding_friction",
      "theme": "First-time experience",
      "quote": "I had no idea where to start...",
      "timestamp": "00:12:34",
      "sentiment": "negative",
      "intensity": "high"
    }
  ],
  "summary": {
    "key_pain_points": [],
    "unmet_needs": [],
    "positive_experiences": [],
    "feature_requests": []
  }
}
```
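Given transcripts coded in this shape, Step 2's pattern recognition reduces to grouping and counting. A minimal sketch that tallies how many distinct participants each theme appears for, then ranks themes by that frequency:

```javascript
// Sketch: aggregate coded transcripts (in the template above) into a
// ranked list of themes by participant frequency.
function rankThemes(codedTranscripts) {
  const byTheme = new Map();
  for (const t of codedTranscripts) {
    for (const f of t.findings) {
      if (!byTheme.has(f.theme)) byTheme.set(f.theme, new Set());
      byTheme.get(f.theme).add(t.participant.id);
    }
  }
  return [...byTheme.entries()]
    .map(([theme, participants]) => ({ theme, count: participants.size }))
    .sort((a, b) => b.count - a.count);
}
```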
### Affinity Diagram Generation
```markdown
## Affinity Diagram Process
### 1. Capture Observations
- One observation per note
- Include source attribution
- Maintain original language
### 2. Group Bottom-Up
- Cluster similar observations
- Name each cluster
- Create hierarchy of clusters
### 3. Output Format
# Theme: [Theme Name]
## Subtheme: [Subtheme Name]
- Observation 1 (P001, INT-001)
- Observation 2 (P003, INT-003)
- Observation 3 (P007, INT-007)
### Evidence Strength
- Strong: 5+ supporting observations
- Moderate: 3-4 supporting observations
- Emerging: 2 supporting observations
```
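The evidence-strength bands translate directly to code. A small helper; the below-threshold label is an assumption of this sketch, not part of the skill.

```javascript
// Direct encoding of the evidence-strength bands above.
function evidenceStrength(observationCount) {
  if (observationCount >= 5) return 'Strong';
  if (observationCount >= 3) return 'Moderate';
  if (observationCount === 2) return 'Emerging';
  return 'Insufficient'; // a single observation is anecdote, not evidence
}
```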
### Insight Documentation
```markdown
## Insight Template
### Insight ID: INS-001
**Statement**: [Clear, actionable insight statement]
**Theme**: [Parent theme]
**Confidence**: [High/Medium/Low]
**Evidence Count**: [Number of supporting data points]
### Supporting Evidence
| Source | Quote | Participant |
|--------|-------|-------------|
| INT-001 | "..." | P001 |
| INT-003 | "..." | P003 |
| SUR-045 | "..." | R045 |
### Implications
- Product: [Product implications]
- Design: [Design implications]
- Engineering: [Technical considerations]
### Recommendations
1. [Specific recommendation]
2. [Specific recommendation]
### Contradicting Evidence
- [Note any contradicting findings]
```
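Rendering the template from structured data keeps insight documents consistent across a study. A sketch, assuming an insight object whose fields mirror the template above (the object shape itself is an assumption):

```javascript
// Sketch: render a structured insight into the markdown template above.
function renderInsight(insight) {
  const rows = insight.evidence
    .map((e) => `| ${e.source} | "${e.quote}" | ${e.participant} |`)
    .join('\n');
  return [
    `### Insight ID: ${insight.id}`,
    `**Statement**: ${insight.statement}`,
    `**Theme**: ${insight.theme}`,
    `**Confidence**: ${insight.confidence}`,
    `**Evidence Count**: ${insight.evidence.length}`,
    '',
    '### Supporting Evidence',
    '| Source | Quote | Participant |',
    '|--------|-------|-------------|',
    rows,
  ].join('\n');
}
```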
## Integration with Babysitter SDK
### Task Definition Example
```javascript
// defineTask comes from the Babysitter SDK; inputs/outputs declare the
// task's contract, and run() returns a skill invocation descriptor.
const researchSynthesisTask = defineTask({
  name: 'research-synthesis',
  description: 'Synthesize user research into actionable insights',
  inputs: {
    transcriptPaths: { type: 'array', required: true },
    researchQuestion: { type: 'string', required: true },
    outputFormat: { type: 'string', default: 'markdown' },
    minEvidenceThreshold: { type: 'number', default: 3 }
  },
  outputs: {
    themes: { type: 'array' },
    insights: { type: 'array' },
    personas: { type: 'array' },
    recommendations: { type: 'array' }
  },
  async run(inputs, taskCtx) {
    return {
      kind: 'skill',
      title: 'Synthesize user research findings',
      skill: {
        name: 'user-research-synthesis',
        context: {
          operation: 'full_synthesis',
          transcriptPaths: inputs.transcriptPaths,
          researchQuestion: inputs.researchQuestion,
          outputFormat: inputs.outputFormat,
          minEvidenceThreshold: inputs.minEvidenceThreshold
        }
      },
      // Per-effect scratch paths: the runner writes the task input here
      // and reads the skill's result back from the output path.
      io: {
        inputJsonPath: `tasks/${taskCtx.effectId}/input.json`,
        outputJsonPath: `tasks/${taskCtx.effectId}/result.json`
      }
    };
  }
});
```
## Analysis Frameworks
### Jobs-to-be-Done (JTBD) Extraction
```markdown
## JTBD Statement Format
When [situation/context],
I want to [motivation/goal],
So I can [expected outcome].
### Extraction Process
1. Identify triggering situations in transcripts
2. Extract stated and unstated motivations
3. Map to desired outcomes
4. Categorize: Functional, Emotional, Social jobs
```
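Once the three parts are extracted, assembling the statement is mechanical. A small helper; the object shape is an assumption of this sketch.

```javascript
// Sketch: assemble a JTBD statement from its extracted parts.
function jtbdStatement({ situation, motivation, outcome, category }) {
  return (
    `When ${situation},\n` +
    `I want to ${motivation},\n` +
    `So I can ${outcome}. ` +
    `(${category} job)` // Functional, Emotional, or Social
  );
}

// jtbdStatement({
//   situation: 'I onboard a new teammate',
//   motivation: 'share a curated set of saved reports',
//   outcome: 'get them productive without a walkthrough',
//   category: 'Functional',
// });
```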
### Pain Point Severity Matrix
| Severity | Frequency | Impact | Priority |
|---|---|---|---|
| Critical | 80%+ users | Blocks core task | P0 |
| High | 50-80% users | Significant friction | P1 |
| Medium | 25-50% users | Noticeable issue | P2 |
| Low | <25% users | Minor annoyance | P3 |
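One possible encoding of the matrix as a lookup. Since the table combines frequency and impact, treat these thresholds as a starting point rather than a rule; the function name and signature are assumptions.

```javascript
// Rough encoding of the severity matrix: frequency is the share of
// users affected (0 to 1); a blocking impact at 80%+ escalates to P0.
function painPointPriority(frequency, blocksCoreTask) {
  if (blocksCoreTask && frequency >= 0.8) return 'P0';
  if (frequency >= 0.5) return 'P1';
  if (frequency >= 0.25) return 'P2';
  return 'P3';
}
```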
## Output Formats
### Research Summary Report
```markdown
# Research Synthesis Report
## Executive Summary
[2-3 sentence overview]
## Research Methodology
- **Method**: [Interviews/Surveys/etc.]
- **Participants**: [N participants]
- **Duration**: [Date range]
- **Research Questions**: [Key questions]
## Key Themes
### Theme 1: [Name]
[Description and evidence]
### Theme 2: [Name]
[Description and evidence]
## Top Insights
1. **Insight**: [Statement]
- Evidence: [Count]
- Confidence: [Level]
- Recommendation: [Action]
## Persona Implications
[How findings affect personas]
## Recommended Actions
1. [Action item]
2. [Action item]
## Appendix
- Full coding scheme
- Participant demographics
- Raw data references
```
## Best Practices
- **Maintain Participant Anonymity**: Use consistent IDs, not names (see the sketch after this list)
- **Preserve Original Language**: Quote users verbatim when possible
- **Triangulate Sources**: Seek confirmation across multiple sources
- **Note Outliers**: Document contradicting evidence; don't dismiss it
- **Quantify Where Possible**: Count the frequency of each theme
- **Separate Observation from Interpretation**: Clearly distinguish facts from analysis
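For the anonymity practice above, a stable name-to-ID mapping applied once at ingestion keeps every downstream artifact consistent. A minimal sketch:

```javascript
// Sketch: map raw participant names to stable IDs once, then use only
// the IDs in findings, affinity diagrams, and reports.
function makeAnonymizer() {
  const ids = new Map();
  return (name) => {
    if (!ids.has(name)) {
      ids.set(name, `P${String(ids.size + 1).padStart(3, '0')}`);
    }
    return ids.get(name);
  };
}

// const anon = makeAnonymizer();
// anon('Jane Doe') -> 'P001'
// anon('John Roe') -> 'P002'
// anon('Jane Doe') -> 'P001' (stable on repeat)
```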