/principles
// patterns for AI reliability
// start here
This page stands on its own: the principles I use to ship AI systems that stay reliable in the real world.
// the 40% rule
Never exceed 40% of your AI's context window.
Below 40%, degradation is roughly linear and predictable. Above it, failures compound. This is the most important rule I've discovered in two years of AI development.
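In practice this is just a budget check before every prompt. A minimal sketch, assuming a rough characters-per-token heuristic (the window size and `estimate_tokens` helper are illustrative, not a real tokenizer):

```python
CONTEXT_WINDOW = 200_000   # e.g. a 200k-token model
BUDGET = 0.40              # never exceed 40% of the window

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return len(text) // 4

def within_budget(context: str) -> bool:
    """True if the assembled context stays under 40% of the window."""
    return estimate_tokens(context) <= CONTEXT_WINDOW * BUDGET

print(within_budget("hello " * 1000))  # small context: True
```

Anything that fails the check gets summarized or dropped before it ever reaches the model.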
// the flywheel
Every AI session produces byproducts—decisions, patterns, learnings. Most people let these evaporate. I capture and compound them.
Citations track what knowledge actually helped. Frequently-cited patterns get promoted. The system gets smarter with every use.
This is the moat. Models commoditize. Knowledge compounds.
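A minimal sketch of the capture-and-promote loop, assuming a simple in-memory store; the field names and the promotion threshold are illustrative, not my actual schema:

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    name: str
    tier: int = 1       # higher tier = more trusted
    citations: int = 0  # times this pattern actually helped

PROMOTE_AT = 5  # cite a pattern this often and it moves up a tier

def cite(p: Pattern) -> None:
    """Record that a pattern helped; promote it when cited enough."""
    p.citations += 1
    if p.citations % PROMOTE_AT == 0:
        p.tier += 1

p = Pattern("context-budgeting")
for _ in range(5):
    cite(p)
print(p.tier, p.citations)  # → 2 5
```

The point isn't the data structure; it's that citation is automatic, so promotion reflects what actually got used rather than what I guessed would matter.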
// convergent evolution
I built the flywheel through 18 months of empirical optimization—trial and error, measuring what worked, throwing out what didn't. Success rate went from 35% to 95%. Then I found an academic paper describing the same dynamics I'd been building by hand.
MemRL formalized reinforcement learning on episodic memory. What I called “tier × citations × freshness” they called a Q-value. What I called the Brownian Ratchet—failures that teach more than some successes—they called near-miss learning. Different starting points, same destination.
Experimental physics finds what works. Theoretical physics proves why. The flywheel isn't a heuristic anymore. It's convergent with the math.
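The "tier × citations × freshness" score, the quantity MemRL would call a Q-value, can be sketched directly. The decay half-life and exact formula here are assumptions for illustration:

```python
def freshness(age_days: float, half_life: float = 30.0) -> float:
    # Exponential decay: knowledge loses half its weight every half_life days.
    return 0.5 ** (age_days / half_life)

def q_value(tier: int, citations: int, age_days: float) -> float:
    """Score a memory for retrieval: trusted, useful, and recent wins."""
    return tier * (1 + citations) * freshness(age_days)

# A tier-2 pattern cited 9 times, 30 days old:
print(q_value(2, 9, 30.0))  # 2 * 10 * 0.5 = 10.0
```

Retrieval then just ranks memories by this score, which is exactly the behavior I'd converged on empirically before reading the paper.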
// more principles
The agent is throwaway. The accumulated context is the intelligence.
Infrastructure, architecture, and orchestration all say the same thing.
You can't build until you've cleared what doesn't belong.
Proven 6x across code, writing, and docs. The constraint that forces removal is a better dead-code detector than any audit.
Value nobody can see is indistinguishable from value that doesn't exist.
The fix isn't self-promotion. It's systems where work can't be invisible.
Many short sessions = still deciding. Few long sessions = building.
Session volume measures cognitive load, not throughput. The highest-session days produce the fewest commits.
// the code foundry
TSMC wins semiconductors through yield optimization, not better silicon. I apply the same thinking to AI-generated code.
// staying reliable
Fast doesn't mean sloppy, and ten years in infrastructure taught me that much.
// the difference
Gaming:
- Hours in flow
- Pattern recognition
- Optimization loops
- Coordinating with people
- Output: leaderboards

Building with AI:
- Hours in flow
- Pattern recognition
- Optimization loops
- Coordinating with AI
- Output: actual things
// the bliss
“Follow your bliss and the universe will open doors where there were only walls.”
— Joseph Campbell
Campbell undersold it. The walls aren't opening; they're dissolving entirely. Every direction is a door now, and behind each one is a skill tree I didn't know existed.
// the grind
I'm grinding myself like a WoW character, and ten years of infrastructure gave me the base stats. Now I'm speed-running the rest: TypeScript, ML pipelines, RAG systems, and prompt engineering. Each skill unlocks three more.
Neo downloads kung fu in seconds, and I'm not that fast, but vibe coding compressed years into months. The feedback loop is so tight that learning feels like remembering.
// the before
WoW mythic+ and mythic raiding, League TFT, pushing keys until 3am and min-maxing builds. That "one more run" feeling where you look up and it's 4am. You know it.
Dark Souls and Elden Ring hit different though, because those aren't about optimization. They're about being human: you die a hundred times with no shortcut, just you and the boss and slowly getting better. Humility through repetition.
I led a guild for years, calling pulls, explaining fights, and keeping twenty people from wiping. It was actually fulfilling work.
But at the end of the day it was all leaderboards, achievements, and numbers that reset next season.
// what happened
It started when ChatGPT dropped. I spent two years on LLM copy-paste workflows, building systems and figuring out what works, though it was scattered at first.
In September I started testing every coding agent I could find, and in October I went deep. That's when it all clicked together: the copy-paste stuff, the agentic workflows, and the patterns I'd been building for two years solidified into something real.
Now when I look up at 4am, there's a thing: a system that works, a tool someone can use. It's the same flow state as gaming, but the output is real.
Claude helped me get stuff out of my head and into reality. It's not autocomplete; it's a thinking partner who can actually build the thing you're trying to describe.
Since then I've been going nonstop trying to answer one question: how do we make AI as reliable as infrastructure?
// why it matters
I read Life 3.0 in 2017, where Tegmark writes about different kinds of intelligence working together. That idea stuck with me for years.
Now I'm actually living it. Human brain and AI brain working together, each filling the gaps the other leaves.
It feels like a Gutenberg moment.
Before the printing press, books lived in monasteries, and after, anyone could read. We're hitting that inflection with intelligence itself. Building things and solving problems isn't gatekept anymore, and anyone willing to learn these tools can do it.
// where this goes
I call it Knowledge OS: an operating system for the mind. The next step is to scale it so that every mind works together, remembers together, and thinks together, achieving things none of us could alone.
That's the actual vision, not productivity hacks but what this enables for people.
I got so hooked that I'm now running internal AI adoption at work, building an agentic marketplace and trying to help others see what I see.
// try it
If you game, you already have the skills.
Pattern recognition, systems thinking, and staying in flow for hours: that's literally what this needs. The only question is what you point it at.