
/workflow

// watch me work

The principles live at /principles and the artifacts at /builds. This page is what connects them: the repeatable loop that makes both real.

How I enter depends on how well I understand the problem. This is what vibe coding looks like after two years of iteration.

high confidence
$ /rpi "add rate limiting"

Full auto. Six phases, zero prompts. The machine runs the whole lifecycle.

medium confidence
$ /research → /plan → ...

Hand-move through each phase. Review research before planning. Steer the plan before cranking.

exploring
$ /brainstorm → /research → ...

Don't even know the shape yet. Brainstorm first, then enter the cycle when the direction is clear.

Same phases either way. The difference is how much I let the machine drive versus steering by hand.

// the methodology

Six phases. Full auto or hand-driven. Same discipline either way.

$ /rpi "add rate limiting to the API"
// that's it. everything below happens automatically.
/research → explore the codebase, gather context → .agents/research/
/plan → decompose into issues with dependencies → .agents/plans/
/pre-mortem → multi-model council validates the plan → .agents/council/
/crank → parallel workers execute in waves → commits + closes
/vibe → council validates the implementation → .agents/vibe/
/post-mortem → extract learnings into the flywheel → .agents/learnings/

Every phase writes to a persistent directory. Knowledge accumulates across sessions. Validation gates auto-retry on failure. The whole thing runs hands-free.
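The six-phase loop with auto-retrying gates can be sketched in a few lines. This is a hypothetical illustration, not the real /rpi implementation: `run_phase` is a stand-in for invoking the actual slash commands, and the gate-to-upstream mapping is my assumption from the descriptions above (a failed /pre-mortem re-plans, a failed /vibe re-cranks).

```python
# Hypothetical sketch of the /rpi lifecycle: six phases in order,
# with validation gates that jump back to their upstream phase on FAIL.
PHASES = ["research", "plan", "pre-mortem", "crank", "vibe", "post-mortem"]
GATES = {"pre-mortem": "plan", "vibe": "crank"}  # gate -> phase to redo on FAIL

def run_rpi(task, run_phase, max_retries=2):
    """Run all phases; a failed gate re-runs its upstream phase, then
    re-checks the gate. Returns a dict of retry counts per gate."""
    i, retries = 0, {}
    while i < len(PHASES):
        phase = PHASES[i]
        ok = run_phase(phase, task)
        if not ok and phase in GATES:
            retries[phase] = retries.get(phase, 0) + 1
            if retries[phase] > max_retries:
                raise RuntimeError(f"{phase} still failing after retries")
            i = PHASES.index(GATES[phase])  # jump back to the upstream phase
            continue
        i += 1
    return retries
```

The point of the shape: validation is in the loop, not bolted on after it, so a FAIL costs one extra pass through the upstream phase rather than a human intervention.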

// the execution

/crank turns a plan into parallel waves of workers. No manual dispatching.

$ /crank ps-kfi

Takes an epic ID. Spawns workers for each wave. Coordinates dependencies automatically.

Wave 1 — 3 workers in parallel, no blockers
Wave 2 — 2 workers, after Wave 1 completes
Wave 3 — 1 worker, integration + tests
Each worker picks up one issue, implements it, commits, and closes the bead. When all waves complete, /crank reports back to /rpi.
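Deriving waves from a dependency graph is standard topological layering. A minimal sketch, assuming /crank does something equivalent (the real tool reads beads issues; here `deps` is just a plain dict mapping each issue to its blockers):

```python
from collections import defaultdict

def plan_waves(deps):
    """Group issues into parallel waves: an issue lands in the first wave
    after all of its blockers. `deps` maps issue -> list of blockers.
    Hypothetical sketch of wave derivation, not the real /crank code."""
    wave_of, waves = {}, defaultdict(list)
    remaining = dict(deps)
    while remaining:
        progressed = False
        for issue, blockers in list(remaining.items()):
            if all(b in wave_of for b in blockers):
                # wave = one past the latest blocker's wave (1 if unblocked)
                wave_of[issue] = 1 + max((wave_of[b] for b in blockers), default=0)
                waves[wave_of[issue]].append(issue)
                del remaining[issue]
                progressed = True
        if not progressed:
            raise ValueError("dependency cycle detected")
    return [sorted(waves[w]) for w in sorted(waves)]
```

With three unblocked issues, two issues blocked on them, and one integration issue blocked on everything, this reproduces the 3/2/1 wave shape above.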

The 40% rule: keep context under 40% utilization. Above 60%, Claude starts forgetting instructions. Beads issues preserve state when you need to start fresh. More on this →
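The thresholds make a trivial but useful guard. A sketch under the stated numbers (40% healthy, 60% degraded); the function name and the checkpoint-to-beads recommendation wording are mine, not from any real tool:

```python
def context_status(tokens_used, context_window):
    """Classify context utilization per the 40% rule (hypothetical helper).
    Under 40% is healthy; past 60%, instruction recall degrades, so state
    should be persisted to a beads issue and the session restarted."""
    utilization = tokens_used / context_window
    if utilization < 0.40:
        return "ok"
    if utilization <= 0.60:
        return "warn: plan a checkpoint soon"
    return "restart: persist state to beads and start fresh"
```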

// the flywheel

Every session makes the next one smarter. Learnings compound automatically.

ao forge → extract learnings from session transcripts
ao pool → stage for quality review, promote to knowledge base
ao inject → load relevant learnings at session start
decay weighting → 17%/week

Recent learnings are weighted higher. Stale knowledge fades. Session 50 starts with curated knowledge from all 49 previous sessions — not because the model improved, but because the operational knowledge did.
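The 17%/week figure implies simple exponential decay. A sketch of one plausible reading (each week a learning keeps 83% of its weight; whether `ao` computes it exactly this way is an assumption):

```python
def learning_weight(weeks_old, decay_per_week=0.17):
    """Recency weight under 17%/week decay (hypothetical formula).
    A fresh learning has weight 1.0; each week it retains 83%."""
    return (1 - decay_per_week) ** weeks_old
```

At this rate a learning falls below half weight in about four weeks, which is what lets stale knowledge fade without anyone deleting it by hand.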

The insight: It's not agent orchestration — it's context orchestration. Loading the right knowledge at the right time is the whole game.

// the tracking

Three systems work together: beads for issues, .agents/ for knowledge, ao for the flywheel.

beads

Git-backed issue tracking. Survives sessions, crashes, and context resets.

bd ready → unblocked
bd show <id> → details
bd close <id> → done
.agents/

Persistent memory. Research, plans, learnings, patterns — all in git.

research/ → exploration
plans/ → specs
learnings/ → insights
ao

Go CLI — 41 commands for knowledge compounding. The memory half of AgentOps.

ao forge → extract
ao inject → load
ao status → health
$ npx @boshu2/vibe-check
Trust Pass Rate → 93%
Rework Ratio → 40%
Fix Spirals → 0
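For concreteness, here is one way a scoreboard like that could be computed from per-session records. The metric names match the report above, but the field names and the exact definitions (pass rate over sessions, rework as a commit ratio, spirals as a count) are illustrative assumptions, not the real vibe-check internals:

```python
def vibe_metrics(sessions):
    """Hypothetical scoreboard: each record is a dict with
    passed_first_try (bool), rework_commits (int), commits (int),
    fix_spiral (bool). Definitions here are illustrative only."""
    n = len(sessions)
    return {
        "trust_pass_rate": sum(s["passed_first_try"] for s in sessions) / n,
        "rework_ratio": sum(s["rework_commits"] for s in sessions)
                        / max(1, sum(s["commits"] for s in sessions)),
        "fix_spirals": sum(s["fix_spiral"] for s in sessions),
    }
```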

// the validation

Multi-model councils validate before and after implementation. Not one reviewer — a panel.

/pre-mortem

4 judges review the plan before any code is written: missing requirements, feasibility, scope, spec completeness. FAIL → re-plan automatically.

/vibe

6 judges review the code after implementation: quality, security, architecture, complexity, UX. FAIL → re-crank automatically.
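The gate itself reduces to a voting rule. A minimal sketch, assuming each judge returns a score in [0, 1] and the panel must be unanimous above a threshold (the threshold and unanimity requirement are my assumptions, not documented behavior):

```python
def council_verdict(scores, pass_threshold=0.7, required_pass_rate=1.0):
    """Hypothetical council gate: every judge scores in [0, 1]; the panel
    PASSes only if the fraction of judges clearing pass_threshold meets
    required_pass_rate (1.0 = unanimous). FAIL triggers the automatic
    re-plan or re-crank described above."""
    passed = sum(s >= pass_threshold for s in scores)
    return "PASS" if passed / len(scores) >= required_pass_rate else "FAIL"
```

Lowering `required_pass_rate` to, say, 0.75 would turn the unanimous gate into a majority-style one; the point is that the retry decision is mechanical, so the loop never stalls waiting for a human verdict.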

The ratchet: Progress is locked permanently. Code is merged. Issues are closed. Learnings are stored. You can't un-ratchet — just like you can't un-deploy a running service.

// the replays

Real commits from this repository showing the methodology in action.

b3dec22 · Feb 2026

Publish AgentOps launch article with 8 Spider-Verse images

Full /rpi cycle: research competitive landscape → plan article structure → implement with AI-generated visuals → vibe-check quality.

dbbff54 · Feb 2026

Create ULTIMATE-GO.md — unified Go training (6,379 lines)

/crank with 7 parallel workers across 2 waves. 3 source docs assembled into 1 unified training document.

1aa9c42 · Feb 2026

Unify site color scheme to monochrome green

/research found 6 color variants → /plan identified 18 files → /crank cleaned all in one wave.

Vibe coding since 2023. 95% success rate when the discipline is followed.

// the toolkit

All open source. The methodology is the value — the tools encode it.

/rpi → lifecycle
beads → tracking
ao → flywheel
/vibe → validation

brew install agentops for the CLI, npx skills for the workflow. Track issues with beads, let the flywheel compound your knowledge. The 12-Factor AgentOps methodology documents why it works.

Read the methodology →

// go deeper

// the evidence

Product Development Case Study

AgentOps framework applied to full-stack product development across 204 documented sessions.

40x → speedup
95% → success rate
204 → sessions

Complex features that took weeks now take days. Zero context collapse with the 40% rule enforced throughout. Multi-agent orchestration delivered 3x wall-clock speedup on research phases.

beads on GitHub →
12-Factor AgentOps →
vibe-check on npm →