/workflow
// watch me work
You can read the principles at /principles, see the artifacts at /builds, and then end up here: the repeatable loop that makes it all real.
How I enter depends on how well I understand the problem. This is what vibe coding looks like after two years of iteration.
Full auto. Six phases, zero prompts. The machine runs the whole lifecycle.
Step through each phase by hand. Review research before planning. Steer the plan before cranking.
Don't even know the shape yet. Brainstorm first, then enter the cycle when the direction is clear.
Same phases either way. The difference is how much I let the machine drive versus steering by hand.
// the methodology
Six phases. Full auto or hand-driven. Same discipline either way.
Every phase writes to a persistent directory. Knowledge accumulates across sessions. Validation gates auto-retry on failure. The whole thing runs hands-free.
// the execution
/crank turns a plan into parallel waves of workers. No manual dispatching.
Takes an epic ID. Spawns workers for each wave. Coordinates dependencies automatically.
The 40% rule: Keep context under 40% utilization. Above 60%, Claude starts forgetting instructions. Beads issues preserve state when you need to start fresh.
More on this →
// the flywheel
Every session makes the next one smarter. Learnings compound automatically.
Recent learnings are weighted higher. Stale knowledge fades. Session 50 starts with curated knowledge from all 49 previous sessions — not because the model improved, but because the operational knowledge did.
The insight: It's not agent orchestration — it's context orchestration. Loading the right knowledge at the right time is the whole game.
// the tracking
Three systems work together: beads for issues, .agents/ for knowledge, ao for the flywheel.
Git-backed issue tracking. Survives sessions, crashes, and context resets.
Persistent memory. Research, plans, learnings, patterns — all in git.
Go CLI — 41 commands for knowledge compounding. The memory half of AgentOps.
// the validation
Multi-model councils validate before and after implementation. Not one reviewer — a panel.
4 judges review the plan before any code is written. Missing requirements, feasibility, scope, spec completeness. FAIL → re-plan automatically.
6 judges review the code after implementation. Quality, security, architecture, complexity, UX. FAIL → re-crank automatically.
The ratchet: Progress is locked permanently. Code is merged. Issues are closed. Learnings are stored. You can't un-ratchet — just like you can't un-deploy a running service.
// the replays
Real commits from this repository showing the methodology in action.
b3dec22 · Feb 2026 · Publish AgentOps launch article with 8 Spider-Verse images
Full /rpi cycle: research competitive landscape → plan article structure → implement with AI-generated visuals → vibe-check quality.
dbbff54 · Feb 2026 · Create ULTIMATE-GO.md — unified Go training (6,379 lines)
/crank with 7 parallel workers across 2 waves. 3 source docs assembled into 1 unified training document.
1aa9c42 · Feb 2026 · Unify site color scheme to monochrome green
/research found 6 color variants → /plan identified 18 files → /crank cleaned all in one wave.
Vibe coding since 2023. 95% success rate when the discipline is followed.
// the toolkit
All open source. The methodology is the value — the tools encode it.
brew install agentops for the CLI, npx skills for the workflow. Track issues with beads, let the flywheel compound your knowledge. The 12-Factor AgentOps methodology documents why it works.
Read the methodology →
// go deeper
// the evidence
Product Development Case Study
AgentOps framework applied to full-stack product development across 204 documented sessions.
Complex features that took weeks now take days. Zero context collapse with the 40% rule enforced throughout. Multi-agent orchestration delivered 3x wall-clock speedup on research phases.