# Oracle

*by harivansh-afk*

Deep planning via the Oracle CLI (GPT-5.2 Codex). Use for complex tasks requiring extended thinking (10-60 minutes). Outputs `plan.md` for the planner to transform into specs.
Oracle bundles your prompt and codebase files into a single request to GPT-5.2 Codex. Use it when planning is complex and requires deep, extended thinking.
## When to Use Oracle
| Trigger | Why |
|---|---|
| 5+ specs needed | Complex dependency management |
| Unclear dependency graph | Needs analysis |
| Architecture decisions | Extended thinking helps |
| Migration planning | Requires careful sequencing |
| Performance optimization | Needs deep code analysis |
| Any planning >10 minutes | Offload to Codex |
## When NOT to Use Oracle
- Simple 1-2 spec tasks
- Clear, linear implementations
- Bug fixes
- Quick refactors
## Prerequisites

Oracle CLI installed:

```bash
npm install -g @steipete/oracle
```

Or use npx:

```bash
npx -y @steipete/oracle --help
```
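If you want scripts to work either way, a small helper can pick the invocation. This is a sketch: the assumption that a global install exposes an `oracle` binary on PATH is mine, not the package's documentation.

```bash
# Hypothetical helper: prefer a globally installed `oracle` binary,
# fall back to npx. The binary name `oracle` is an assumption.
oracle_cmd() {
  if command -v oracle >/dev/null 2>&1; then
    echo "oracle"
  else
    echo "npx -y @steipete/oracle"
  fi
}

# Usage: $(oracle_cmd) --help
```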
## Workflow
### Step 1: Craft the Prompt

Write to `/tmp/oracle-prompt.txt`:

```text
Create a detailed implementation plan for [TASK].

## Context
- Project: [what the project does]
- Stack: [frameworks, languages, tools]
- Location: [key directories and files]

## Requirements
[ALL requirements gathered from human]
- [Requirement 1]
- [Requirement 2]
- Features needed:
  - [Feature A]
  - [Feature B]
- NOT needed: [explicit out-of-scope]

## Plan Structure
Output as plan.md with this structure:

# Plan: [Task Name]

## Overview
[Summary + recommended approach]

## Phase N: [Phase Name]

### Task N.M: [Task Name]
- Location: [file paths]
- Description: [what to do]
- Dependencies: [task IDs this depends on]
- Complexity: [1-10]
- Acceptance Criteria: [specific, testable]

## Dependency Graph
[Which tasks run parallel vs sequential]

## Testing Strategy
[What tests prove success]

## Instructions
- Write complete plan to plan.md
- Do NOT ask clarifying questions
- Be specific and actionable
- Include file paths and code locations
```
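A quoted heredoc is a convenient way to write the template without shell expansion. The task, stack, and paths below are illustrative placeholders, not part of the skill:

```bash
# Write the prompt file with a quoted heredoc ('EOF' stops the shell
# from expanding $ or backticks inside the template). All project
# details here are placeholders.
cat > /tmp/oracle-prompt.txt <<'EOF'
Create a detailed implementation plan for adding rate limiting.

## Context
- Project: example REST API
- Stack: Node.js, Express, Redis
- Location: src/middleware/ for middleware

## Requirements
- Limit: 100 requests/minute per API key
- NOT needed: per-IP limits

## Instructions
- Write complete plan to plan.md
- Do NOT ask clarifying questions
EOF
```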
### Step 2: Preview Token Count

```bash
npx -y @steipete/oracle --dry-run summary --files-report \
  -p "$(cat /tmp/oracle-prompt.txt)" \
  --file "src/**" \
  --file "!**/*.test.*" \
  --file "!**/*.snap" \
  --file "!node_modules" \
  --file "!dist"
```

Target: under 196k tokens.
If over budget:
- Narrow file selection
- Exclude more test/build directories
- Split into focused prompts
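For a quick sanity check before the dry run, you can estimate tokens from raw character count at roughly 4 characters per token. This heuristic is an assumption of this sketch, not Oracle's actual tokenizer, so treat the result as a ballpark only:

```bash
# Rough token estimate: total characters / 4.
# Heuristic assumption; Oracle's real count may differ noticeably.
estimate_tokens() {
  local chars
  chars=$(cat "$@" | wc -c)
  echo $(( chars / 4 ))
}

# Usage: estimate_tokens /tmp/oracle-prompt.txt src/**/*.ts
```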
### Step 3: Run Oracle

```bash
npx -y @steipete/oracle \
  --engine browser \
  --model gpt-5.2-codex \
  --slug "vertical-plan-$(date +%Y%m%d-%H%M)" \
  -p "$(cat /tmp/oracle-prompt.txt)" \
  --file "src/**" \
  --file "convex/**" \
  --file "!**/*.test.*" \
  --file "!**/*.snap" \
  --file "!node_modules" \
  --file "!dist"
```
Why the browser engine:

- GPT-5.2 Codex runs take 10-60 minutes (this is normal)
- Browser mode handles long-running requests
- Sessions are stored in `~/.oracle/sessions`, so you can reattach after a timeout
### Step 4: Monitor

Tell the human:

> Oracle is running. This typically takes 10-60 minutes. I will check status periodically.

Check status:

```bash
npx -y @steipete/oracle status --hours 1
```
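Periodic checking can be wrapped in a generic polling helper. The `"completed"` marker below is an assumption about what the status output contains; adjust the pattern to whatever `oracle status` actually prints:

```bash
# Hypothetical polling loop: run a status command every INTERVAL
# seconds until its output matches "completed" or MAX checks pass.
# The "completed" marker is an assumed status string.
poll_status() {
  local cmd=$1 interval=$2 max=$3 i=0
  while [ "$i" -lt "$max" ]; do
    if $cmd | grep -q "completed"; then
      echo "done"
      return 0
    fi
    sleep "$interval"
    i=$((i + 1))
  done
  echo "timeout"
  return 1
}

# Example: poll_status "npx -y @steipete/oracle status --hours 1" 300 12
```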
### Step 5: Reattach (if timeout)

If the CLI times out, do NOT re-run. Reattach instead:

```bash
npx -y @steipete/oracle session <session-id> --render > /tmp/oracle-result.txt
```
### Step 6: Read Output

Oracle writes `plan.md` to the current directory. Read it:

```bash
cat plan.md
```
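A run can fail silently, so it is worth guarding the read. A minimal sketch of such a guard:

```bash
# Print a plan file only if it exists and is non-empty; otherwise
# complain so the caller knows to reattach to the session.
show_plan() {
  if [ -s "$1" ]; then
    cat "$1"
  else
    echo "plan file missing or empty - reattach to the session" >&2
    return 1
  fi
}

# Usage: show_plan plan.md
```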
### Step 7: Transform to Specs

Convert Oracle's phases/tasks into spec YAML files:
| Oracle Output | Spec YAML |
|---|---|
| Phase N | Group of related specs |
| Task N.M | Individual spec file |
| Dependencies | pr.base field |
| Location | building_spec.files |
| Acceptance Criteria | verification_spec |
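As a sketch, applying the mapping above to a single Oracle task might yield a spec like this. Only `pr.base`, `building_spec.files`, and `verification_spec` come from the table; every other key, and the task itself, is illustrative:

```yaml
# Hypothetical spec derived from an Oracle "Task 2.1".
# Keys outside the mapping table are assumptions about the spec schema.
name: add-session-table          # from the Task 2.1 title
pr:
  base: setup-prisma             # from Dependencies (Task 1.3)
building_spec:
  files:                         # from Location
    - prisma/schema.prisma
verification_spec:               # from Acceptance Criteria
  - "Session rows persist across server restarts"
```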
## File Attachment Patterns

Include:

```bash
--file "src/**"
--file "prisma/**"
--file "convex/**"
```

Exclude:

```bash
--file "!**/*.test.*"
--file "!**/*.spec.*"
--file "!**/*.snap"
--file "!node_modules"
--file "!dist"
--file "!build"
--file "!coverage"
--file "!.next"
```

Default ignored: `node_modules`, `dist`, `coverage`, `.git`, `.turbo`, `.next`, `build`, `tmp`

Size limit: files larger than 1 MB are rejected.
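Because of the 1 MB limit, it can help to spot oversized files before attaching a directory. A small `find` wrapper is enough (`-size +1M` means strictly larger than 1 MiB in GNU and BSD find):

```bash
# List files over 1 MB under a directory, since Oracle rejects them.
oversized_files() {
  find "$1" -type f -size +1M
}

# Usage: oversized_files src
```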
## Prompt Templates

### For Authentication

```text
Create a detailed implementation plan for adding authentication.

## Context
- Project: [app name]
- Stack: Next.js, Prisma, PostgreSQL
- Location: src/pages/api/ for API, src/components/ for UI

## Requirements
- Methods: Email/password + Google OAuth
- Roles: Admin and User
- Features: Password reset, email verification
- NOT needed: 2FA, SSO

## Plan Structure
[standard structure]
```
### For API Development

```text
Create a detailed implementation plan for building a REST API.

## Context
- Project: [app name]
- Stack: [framework]
- Location: src/api/ for routes

## Requirements
- Resources: [entities]
- Auth: [method]
- Rate limiting: [yes/no]
- NOT needed: [out of scope]

## Plan Structure
[standard structure]
```
### For Migration

```text
Create a detailed implementation plan for migrating [from] to [to].

## Context
- Current: [current state]
- Target: [target state]
- Constraints: [downtime, rollback needs]

## Requirements
- Data to migrate: [what]
- Dual-write period: [yes/no]
- Rollback strategy: [required]

## Plan Structure
[standard structure]
```
## Important Rules

- **One-shot execution** - Oracle doesn't interact; it just outputs
- **Always `gpt-5.2-codex`** - use the Codex model for coding tasks
- **File output is `plan.md`** - always written to the current directory
- **Don't re-run on timeout** - reattach to the session instead
- **Use `--force` sparingly** - only for intentional duplicate runs
## After Oracle Runs

- Read `plan.md` and review the phases and tasks
- Present the breakdown to the human for approval
- Transform into spec YAMLs
- Continue the planner workflow