/principles

// patterns for AI reliability

// the 40% rule

Never exceed 40% of your AI's context window.

Below 40%, performance stays predictable. Above 40%, failures compound exponentially. This is the most important rule I've discovered in two years of AI development.
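The rule is easy to enforce mechanically. A minimal sketch, assuming a rough token counter and a fixed window size (both stand-ins, not any particular model's API):

```python
# Sketch of a 40% context-budget guard (names hypothetical).
# Trims the oldest messages first until the prompt fits the budget.

def trim_to_budget(messages, count_tokens, context_window, budget=0.40):
    """Drop oldest messages until total tokens fit within budget."""
    limit = int(context_window * budget)
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > limit:
        kept.pop(0)  # evict the oldest message first
    return kept

# Toy token counter: roughly 1 token per 4 characters.
count = lambda m: max(1, len(m) // 4)
history = ["a" * 400, "b" * 400, "c" * 400]  # ~100 tokens each
trimmed = trim_to_budget(history, count, context_window=400)
# Budget is 160 tokens, so only the newest message survives.
```

The point isn't the trimming strategy; it's that the budget is checked before every call, not after something breaks.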

// the flywheel

Every AI session produces byproducts—decisions, patterns, learnings. Most people let these evaporate. I capture and compound them.

Citations track what knowledge actually helped. Frequently-cited patterns get promoted. The system gets smarter with every use.
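A minimal sketch of that promotion loop, with a hypothetical threshold:

```python
# Citation flywheel sketch (the threshold and tier names are illustrative).
# Each time a stored pattern helps a session, its citation count increments;
# patterns cited often enough get promoted to a higher tier.

from collections import defaultdict

PROMOTE_AT = 5  # hypothetical promotion threshold

citations = defaultdict(int)
tier = defaultdict(lambda: "candidate")

def cite(pattern_id):
    citations[pattern_id] += 1
    if citations[pattern_id] >= PROMOTE_AT:
        tier[pattern_id] = "core"

for _ in range(5):
    cite("retry-with-backoff")
# "retry-with-backoff" now sits in the core tier; uncited patterns stay candidates.
```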

This is the moat. Models commoditize. Knowledge compounds.

// convergent evolution

I built the flywheel through 18 months of empirical optimization—trial and error, measuring what worked, throwing out what didn't. Success rate went from 35% to 95%. Then I found an academic paper describing the same dynamics I'd been building by hand.

MemRL formalized reinforcement learning on episodic memory. What I called “tier × citations × freshness” they called a Q-value. Different starting points, same destination.
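One way to read that equivalence as code, with illustrative weights and a hypothetical freshness half-life (not the author's actual parameters):

```python
# Sketch of a "tier x citations x freshness" memory score.
# Weights and the 30-day half-life are illustrative assumptions.

TIER_WEIGHT = {"candidate": 1.0, "core": 3.0}
HALF_LIFE_DAYS = 30  # hypothetical freshness half-life

def memory_score(tier, citations, last_used_ts, now):
    age_days = (now - last_used_ts) / 86400
    freshness = 0.5 ** (age_days / HALF_LIFE_DAYS)  # exponential decay
    return TIER_WEIGHT[tier] * (1 + citations) * freshness

NOW = 1_700_000_000  # fixed reference timestamp for the example
fresh = memory_score("core", 10, NOW, NOW)                # no decay: 33.0
stale = memory_score("core", 10, NOW - 30 * 86400, NOW)   # one half-life: 16.5
```

Squint and this is a value estimate over episodic memory: the same quantity MemRL treats as a Q-value.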

The execution model is the Brownian Ratchet: embrace variance (parallel agents), filter aggressively (validation gates), and lock progress permanently (merge to main). Chaos in, quality out. The ratchet only turns forward.
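The shape of that loop, with toy stand-ins for the agents and the gate:

```python
# Ratchet-step sketch: spawn variant attempts in parallel, filter through
# a validation gate, keep only the best survivor. `run_agent` and
# `validate` are stand-ins, not real agents or gates.

import random
from concurrent.futures import ThreadPoolExecutor

def run_agent(seed):
    rng = random.Random(seed)  # per-agent RNG: deterministic and thread-safe
    return rng.random()        # stand-in for an agent's output quality

def validate(result):
    return result > 0.7        # stand-in for a validation gate

def ratchet_step(n_agents=8):
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_agent, range(n_agents)))
    passed = [r for r in results if validate(r)]
    return max(passed) if passed else None  # lock in the best, or retry

best = ratchet_step()
```

Variance goes in at the top, the gate throws most of it away, and only the filtered result ever reaches main.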

Experimental physics finds what works. Theoretical physics proves why. The flywheel isn't a heuristic anymore. It's convergent with the math.

// more principles

DISPOSABLE AGENTS

The agent is throwaway. The accumulated context is the intelligence.

Infrastructure, architecture, and orchestration all say the same thing. The full story →

REMOVAL BEFORE CLARITY

You can't build until you've cleared what doesn't belong.

Proven 6x across code, writing, and docs. The constraint that forces removal is a better dead-code detector than any audit.

VISIBILITY IS STRATEGY

Value nobody can see is indistinguishable from value that doesn't exist.

The fix isn't self-promotion. It's systems where work can't be invisible.

DECISION DENSITY

Many short sessions = still deciding. Few long sessions = building.

Session volume measures cognitive load, not throughput. The highest-session days produce the fewest commits.

// the code foundry

TSMC wins semiconductors through yield optimization, not better silicon. I apply the same thinking to AI-generated code.

Yield = First-try pass rate
Process = Validation gates
Throughput = Parallel workers
Knowledge = Compounding wisdom
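Yield, in this framing, is just a first-attempt pass rate. A trivial sketch:

```python
# Yield as first-try pass rate over a batch of generated changes.
# `attempts` records whether each change passed validation on the first try.

def first_try_yield(attempts):
    """Fraction of changes that passed validation on the first attempt."""
    if not attempts:
        return 0.0
    return sum(attempts) / len(attempts)

# e.g. 19 of 20 changes pass first try -> 0.95 yield
batch_yield = first_try_yield([True] * 19 + [False])
```

Everything else on the list exists to push this one number up.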

// staying reliable

Fast doesn't mean sloppy, and ten years in infrastructure taught me that much.

Full methodology →

// the difference

GAMING
  → Hours in flow
  → Pattern recognition
  → Optimization loops
  → Coordinating with people
  → Output: leaderboards

VIBE CODING
  → Hours in flow
  → Pattern recognition
  → Optimization loops
  → Coordinating with AI
  → Output: actual things

It's the same high with a different artifact. →

// the bliss

“Follow your bliss and the universe will open doors where there were only walls.”

— Joseph Campbell

Campbell undersold it. The walls aren't opening; they're dissolving entirely. Every direction is a door now, and behind each one is a skill tree I didn't know existed.

// the grind

I'm grinding myself like a WoW character, and ten years of infrastructure gave me the base stats. Now I'm speed-running the rest: TypeScript, ML pipelines, RAG systems, and prompt engineering. Each skill unlocks three more.

Neo downloads kung fu in seconds, and I'm not that fast, but vibe coding compressed years into months. The feedback loop is so tight that learning feels like remembering.

// ---

// try it

If you game, you already have the skills.

Pattern recognition, systems thinking, and staying in flow for hours: that's literally what this needs. The only question is what you point it at.

// go deeper