Jupyter Notebooks
by pacphi
Comprehensive Jupyter notebook operations - create, execute, and analyze notebooks with full cell manipulation
Jupyter Notebooks Skill
This skill provides complete Jupyter notebook interaction capabilities through MCP server integration, enabling notebook-based data science and research workflows.
Capabilities
- Create and manage Jupyter notebooks (.ipynb files)
- Execute cells and entire notebooks
- Read and write cell content (code and markdown)
- Access cell outputs and execution results
- Manipulate notebook structure (add, delete, move cells)
- Cell-level operations with execution state tracking
- Support for JupyterLab and Jupyter Notebook interfaces
- Integration with Python data science stack (NumPy, Pandas, PyTorch, etc.)
When to Use This Skill
Use this skill when you need to:
- Create interactive computational notebooks
- Run data analysis workflows
- Execute machine learning experiments
- Generate reproducible research documents
- Visualize data with matplotlib/seaborn
- Prototype code interactively
- Create tutorial or educational notebooks
- Document analysis procedures with code + narrative
Prerequisites
- Jupyter installed (`jupyter` and `jupyterlab` available in /opt/venv)
- MCP server running on stdio
- Python virtual environment at /opt/venv with data science packages
Available Operations
Notebook Management
- `create_notebook` - Create new notebook with optional cells
- `list_notebooks` - List all notebooks in directory
- `get_notebook_info` - Get metadata and structure info
- `delete_notebook` - Remove notebook file
Cell Operations
- `add_cell` - Add code or markdown cell at position
- `delete_cell` - Remove cell by index
- `move_cell` - Reorder cells
- `get_cell` - Read cell content and metadata
- `update_cell` - Modify cell content
Execution
- `execute_cell` - Run specific cell and capture output
- `execute_notebook` - Run entire notebook sequentially
- `clear_outputs` - Clear all cell outputs
- `restart_kernel` - Restart notebook kernel
Content Access
- `get_all_cells` - Read all cells in notebook
- `get_output` - Access cell execution results
- `export_notebook` - Convert to HTML, PDF, or Python script
Instructions
Creating a New Notebook
To create a notebook for data analysis:
- Use `create_notebook` with a file path
- Optionally provide initial cells (imports, setup)
- The notebook is created with the nbformat 4.x schema
Example cells structure:
```json
[
  {
    "cell_type": "code",
    "source": "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt"
  },
  {
    "cell_type": "markdown",
    "source": "# Data Analysis\n\nThis notebook analyzes..."
  }
]
```
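The same structure can be assembled in plain Python before handing it to `create_notebook`. A minimal stdlib-only sketch, where the `build_notebook` helper is illustrative rather than part of the skill's API:

```python
import json

def build_notebook(cells):
    """Assemble a minimal nbformat 4.x notebook document from cell dicts."""
    return {
        "nbformat": 4,
        "nbformat_minor": 5,
        "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
        "cells": [
            {
                "cell_type": c["cell_type"],
                "source": c["source"],
                "metadata": {},
                # nbformat requires outputs and execution_count on code cells only
                **({"outputs": [], "execution_count": None}
                   if c["cell_type"] == "code" else {}),
            }
            for c in cells
        ],
    }

nb = build_notebook([
    {"cell_type": "code", "source": "import numpy as np"},
    {"cell_type": "markdown", "source": "# Data Analysis"},
])
```

Writing `json.dumps(nb)` to a `.ipynb` file yields a document that Jupyter can open directly.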
Executing Notebooks
For data processing pipelines:
- Use `execute_notebook` for a full run
- Or `execute_cell` for incremental execution
- Outputs are captured with display data, errors, and execution counts
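Conceptually, `execute_notebook` walks the code cells in order in one shared namespace and captures each cell's output. A toy stdlib sketch of that loop (the real skill runs cells in an IPython kernel, not via `exec`):

```python
import io
from contextlib import redirect_stdout

def run_cells(cells):
    """Execute code cells sequentially in a shared namespace, capturing stdout."""
    ns, outputs = {}, []
    for cell in cells:
        if cell["cell_type"] != "code":
            outputs.append(None)  # markdown cells produce no output
            continue
        buf = io.StringIO()
        with redirect_stdout(buf):
            exec(cell["source"], ns)  # shared state, like a live kernel session
        outputs.append(buf.getvalue())
    return outputs

outs = run_cells([
    {"cell_type": "code", "source": "x = 2 + 2"},
    {"cell_type": "markdown", "source": "# notes"},
    {"cell_type": "code", "source": "print(x * 10)"},
])
```

Note how the third cell sees `x` defined by the first: sequential execution with shared state is what makes cell order matter.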
PyTorch/ML Workflow
Typical machine learning notebook structure:
- Setup cell: Import torch, torchvision, datasets
- Data cell: Load and preprocess data
- Model cell: Define neural network architecture
- Training cell: Training loop with loss tracking
- Evaluation cell: Test metrics and visualizations
- Export cell: Save model weights
Integration with CUDA
For GPU-accelerated computing:
```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MyModel().to(device)
```
The skill automatically detects CUDA availability and uses GPU when present.
Environment Variables
- `JUPYTER_CONFIG_DIR` - Jupyter configuration directory
- `JUPYTER_DATA_DIR` - Data files location
- `JUPYTER_RUNTIME_DIR` - Runtime files (kernels, etc.)
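These variables override Jupyter's default per-user locations when set. A quick stdlib check of what is in effect (the fallback paths shown are Jupyter's usual Linux defaults, which may differ on other platforms):

```python
import os

def jupyter_dir(var, default):
    """Resolve a Jupyter directory env var, falling back to a default path."""
    return os.environ.get(var, os.path.expanduser(default))

config_dir = jupyter_dir("JUPYTER_CONFIG_DIR", "~/.jupyter")
data_dir = jupyter_dir("JUPYTER_DATA_DIR", "~/.local/share/jupyter")
```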
Output Formats
Notebooks can be exported to:
- HTML - Static web page with outputs
- PDF - Via LaTeX (requires texlive installation)
- Python - Pure Python script (.py file)
- Markdown - Documentation format
- Slides - Reveal.js presentation
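Under the hood these exports map onto `jupyter nbconvert` targets; note that the "Python" format is called `script` on the CLI and "Slides" produces a Reveal.js deck. A small sketch that builds the corresponding command (the `export_cmd` helper is illustrative, not the skill's implementation):

```python
def export_cmd(notebook_path, fmt):
    """Build the jupyter nbconvert command line for a given export format."""
    targets = {
        "html": "html",        # static web page with outputs
        "pdf": "pdf",          # via LaTeX, requires texlive
        "python": "script",    # pure .py script
        "markdown": "markdown",
        "slides": "slides",    # Reveal.js presentation
    }
    return ["jupyter", "nbconvert", "--to", targets[fmt], notebook_path]
```

Running the returned list with `subprocess.run` performs the conversion, assuming `jupyter` is on the PATH.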
Best Practices
- Cell Organization: Keep cells focused on single tasks
- Markdown Documentation: Use markdown cells for explanations
- Restart & Run All: Test full execution before sharing
- Version Control: Use nbdime for notebook diffs
- Clear Outputs: Clear sensitive data before committing
- Kernel Management: Restart kernel when imports change
Example Workflows
Data Science Pipeline
- Create notebook with data exploration cells
- Execute EDA (exploratory data analysis)
- Add visualization cells
- Run statistical analysis
- Export results to HTML report
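Assuming a generic `call_tool(name, **kwargs)` helper for issuing MCP requests (hypothetical; the actual invocation API depends on your MCP client), the pipeline above might be driven like this. The stub below simply records the call sequence:

```python
calls = []

def call_tool(name, **kwargs):
    """Hypothetical MCP client helper; stubbed here to record the call sequence."""
    calls.append(name)

# 1. Create a notebook seeded with a data-exploration cell
call_tool("create_notebook", path="eda.ipynb",
          cells=[{"cell_type": "code", "source": "import pandas as pd"}])
# 2-3. Add a visualization cell (source is an illustrative placeholder)
call_tool("add_cell", path="eda.ipynb", cell_type="code",
          source="df.describe().plot()")
# 4. Run the whole notebook top to bottom
call_tool("execute_notebook", path="eda.ipynb")
# 5. Export the results as an HTML report
call_tool("export_notebook", path="eda.ipynb", format="html")
```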
Machine Learning Experiment
- Set up experiment notebook
- Load training data
- Define model architecture
- Train with progress tracking
- Evaluate on test set
- Save model and metrics
Research Documentation
- Create markdown cells for methodology
- Add code cells for implementations
- Include result visualizations
- Export to PDF for publication
Error Handling
The skill provides detailed error messages for:
- Kernel execution failures
- Cell syntax errors
- Missing dependencies
- File I/O errors
- nbformat validation issues
Performance Considerations
- Notebooks execute in isolated kernels
- CUDA operations utilize GPU when available
- Large datasets may require memory management
- Long-running cells can be interrupted
- Output size limits may apply
Related Skills
- pytorch-ml - Deep learning workflows
- latex-documents - Scientific paper generation
- data-visualization - Advanced plotting
- cuda-development - GPU programming
Technical Details
- Protocol: Model Context Protocol (MCP) over stdio
- Server: Node.js-based MCP server
- Format: nbformat 4.x JSON schema
- Kernel: IPython kernel with Python 3.x
- Extensions: JupyterLab extensions supported
Troubleshooting
Kernel Not Starting
- Check that `/opt/venv/bin/python` exists
- Verify that ipykernel is installed
- Check kernel specifications: `jupyter kernelspec list`
Import Errors
- Activate the virtual environment: `source /opt/venv/bin/activate`
- Install missing packages: `pip install <package>`
- Verify the CUDA installation for GPU packages
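Before debugging the environment, a cell can probe whether a package is importable at all with a small stdlib check:

```python
import importlib.util

def is_installed(package):
    """Return True if the top-level package can be imported in this environment."""
    return importlib.util.find_spec(package) is not None

# Report which expected packages are missing from the active environment
missing = [p for p in ("numpy", "pandas", "torch") if not is_installed(p)]
```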
Cell Execution Hangs
- Interrupt kernel execution
- Restart kernel
- Check for infinite loops or blocking operations
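A kernel interrupt delivers SIGINT to the kernel process. The same idea can be sketched in-process with a stdlib alarm-based watchdog (Unix only, and illustrative rather than how the skill actually interrupts cells):

```python
import signal

def run_with_timeout(fn, seconds):
    """Run fn(), raising TimeoutError if it exceeds the wall-clock budget."""
    def handler(signum, frame):
        raise TimeoutError(f"exceeded {seconds}s")
    old = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)          # schedule SIGALRM
    try:
        return fn()
    finally:
        signal.alarm(0)            # cancel any pending alarm
        signal.signal(signal.SIGALRM, old)
```

A blocking operation interrupted this way surfaces as an exception, much as an interrupted cell surfaces `KeyboardInterrupt` in its output.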
Configuration
MCP server configuration in ~/.claude/settings.json:
```json
{
  "mcpServers": {
    "jupyter-notebooks": {
      "command": "node",
      "args": ["/home/devuser/.claude/skills/jupyter-notebooks/server.js"],
      "cwd": "/home/devuser/.claude/skills/jupyter-notebooks"
    }
  }
}
```
Notes
- Compatible with Claude Code and other MCP clients
- Supports both JupyterLab and classic Notebook interfaces
- Full compatibility with existing .ipynb files
- Execution state preserved across sessions
- Output includes rich media (images, HTML, LaTeX)