
Convergent Architecture: Three Domains, One Principle

February 15, 2026 · 5 min read
#ai-engineering #architecture #agentops

I arrived at the same answer from three different directions. Then I found the textbook.

Infrastructure taught me: prevention costs 1x, correction costs 100x. Software architecture taught me: passive artifacts can coordinate without a central orchestrator. Agent orchestration taught me: the agent is disposable, but the context compounds.

Three domains. One principle. The intelligence lives in accumulated context, not in any individual execution.


The Infrastructure Path

I spent years managing Kubernetes clusters for DoD and Intel environments. The lesson that stuck wasn't about containers or networking. It was about where the value actually lived.

A running cluster is disposable. You can tear it down and rebuild it in hours. The GitOps manifests, the network policies, the hardened configurations that took months of security reviews. Those are the intelligence. The cluster is the execution. The accumulated configuration is the context.

This changes how you think about failure. A node dies? Replace it. A cluster corrupts? Rebuild it from the manifests. The manifests are the system. The infrastructure is just the current instantiation.

Prevention over correction. Declare the desired state. Let the system converge. The intelligence lives in the declarations, not the running processes.
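The declare-and-converge loop is small enough to sketch. This is an illustration of the shape, not real Kubernetes machinery: `desired`, `observe_state`, and `apply` are hypothetical stand-ins for whatever declares, reads, and mutates the live system.

```python
def reconcile(desired, observe_state, apply):
    """One reconciliation pass: diff declared state against observed
    state and apply only what differs. The declaration is the system;
    the running state is just the current instantiation."""
    actual = observe_state()
    for key, want in desired.items():
        if actual.get(key) != want:
            apply(key, want)

# A dead node or a corrupted field is just drift from the declaration.
# The next pass heals it; nothing depends on the old process surviving.
```

Run the pass on a timer and failure handling falls out for free: rebuild anything, re-run the loop, converge.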


The Architecture Path

Building tooling across 31 repositories, I kept solving the same coordination problem. Agents needed to share state. The obvious answer was a central orchestrator: some service that tracks who's doing what and routes messages.

The obvious answer was wrong.

What actually worked: passive filesystem artifacts. A .beads-local/ directory in each repo. A .agents/ tree of learnings, plans, and handoffs. No central coordinator. No message bus. Just files that any agent can read and any agent can write.

.agents/
  ├── learnings/     # What we discovered
  ├── plans/         # What we decided
  ├── handoffs/      # What's in progress
  └── retros/        # What we reflected on

The filesystem is the shared memory. Each file is a context artifact that outlives the agent that created it. Agents come and go. The accumulated context stays.

This is the same pattern. The individual agent (execution) is disposable. The .agents/ tree (accumulated context) is the intelligence. New agents start smarter because prior agents left context behind.
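The mechanics are deliberately boring. A minimal sketch of passive coordination through files, where the paths and the file format are illustrative assumptions, not the actual schema:

```python
# Passive coordination: no orchestrator, no message bus, just files
# any agent can read and any agent can write.
from pathlib import Path
from datetime import datetime, timezone

AGENTS_DIR = Path(".agents")

def write_learning(topic: str, body: str) -> Path:
    """Leave a context artifact that outlives the agent writing it."""
    learnings = AGENTS_DIR / "learnings"
    learnings.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    path = learnings / f"{stamp}-{topic}.md"
    path.write_text(f"# {topic}\n\n{body}\n")
    return path

def read_learnings() -> list[str]:
    """A fresh agent starts by loading what prior agents left behind."""
    learnings = AGENTS_DIR / "learnings"
    if not learnings.exists():
        return []
    return [p.read_text() for p in sorted(learnings.glob("*.md"))]
```

The write path and the read path never meet. Coordination happens entirely through what persists on disk.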


The Agent Orchestration Path

Then I started running multi-agent workflows at scale. 159 commits across a single project. Parallel workers spawned per wave. Each worker gets fresh context, executes one issue, and terminates.

The first instinct was to make each agent smarter. Give it more tools, longer context windows, better prompts. Chase the perfect execution.

That instinct was wrong too.

The breakthrough was the opposite: treat agents as unreliable components in a reliable system. Spawn parallel attempts. Filter aggressively. Ratchet progress so it can't regress. Any individual agent can fail. The system still converges.

Wave 1: Spawn 3 workers → 2 succeed, 1 fails
         Commit successes. Ratchet forward.

Wave 2: Spawn 2 workers → Both succeed
         Commit. Ratchet.

Wave 3: Retry failed work with fresh context
         New agent, same accumulated context → succeeds

The worker is disposable. The ratchet is the intelligence. Each wave starts from a higher floor because prior waves committed their progress.
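The wave loop above can be sketched in code. This is a hedged outline, assuming a hypothetical `run_worker(issue, context)` that returns a result or `None` on failure; the shape of the loop, not the names, is the point.

```python
def run_waves(issues, run_worker, max_waves=5):
    """Spawn a fresh worker per open issue, commit successes,
    and retry the rest with new workers plus accumulated context."""
    context = []          # accumulated learnings; survives every wave
    done = {}             # the ratchet: committed results never regress
    pending = list(issues)
    for _ in range(max_waves):
        if not pending:
            break
        still_open = []
        for issue in pending:
            result = run_worker(issue, context)  # disposable agent
            if result is not None:
                done[issue] = result             # ratchet forward
            else:
                context.append(f"{issue}: attempt failed")
                still_open.append(issue)         # retry next wave
        pending = still_open
    return done
```

Note what is missing: no retries inside a worker, no attempt to save a failing agent. A failure leaves a learning and the wave moves on.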


The Convergence

Three different problems. Three different domains. The same architecture every time:

Domain            Disposable Unit      Accumulated Context
Infrastructure    Running cluster      GitOps manifests
Architecture      Individual agent     Filesystem artifacts
Orchestration     Wave worker          Ratcheted progress

This isn't a coincidence. It's convergent evolution: independent systems arriving at the same solution because the constraints are identical.

The constraint: individual executions are unreliable. Nodes crash. Agents hallucinate. Workers produce garbage. Any system that depends on perfect execution is fragile.

The solution: accumulate context that survives execution failure. Manifests survive cluster death. Filesystem artifacts survive agent termination. Ratcheted commits survive worker failure.

Biology calls this convergent evolution for a reason. Eyes evolved independently in vertebrates and octopuses. Wings evolved independently in birds, bats, and insects. Same constraints, same solution, different lineages.

Same thing happened here. Infrastructure, architecture, and agent orchestration are different lineages. Same constraints. Same solution.


The Technique: Operationalize the Pattern

Once you see the convergence, the technique becomes obvious. Build systems that accumulate context across executions.

Infrastructure: GitOps manifests that survive cluster death.
Architecture: .agents/ trees that survive session termination.
Orchestration: ratcheted commits that survive worker failure.

The specific implementation is what I call the Knowledge Flywheel: extract learnings, ratchet progress, inject into the next session. The pattern is domain-agnostic. The principle is: make context outlive execution.
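The flywheel loop itself is small. A sketch under stated assumptions: `run_session` is a hypothetical stand-in for one disposable agent session, returning a success flag and a learning.

```python
def flywheel(run_session, goals):
    """Extract learnings, ratchet progress, inject into the next
    session. The three-step shape is the pattern; the names here
    are illustrative."""
    context = []       # compounds across sessions
    completed = []     # ratcheted: only ever grows
    for goal in goals:
        outcome = run_session(goal, context)   # inject prior context
        context.append(outcome["learning"])    # extract the learning
        if outcome["success"]:
            completed.append(goal)             # ratchet the progress
    return completed, context
```

Each session sees everything the previous sessions extracted, which is why no single session needs the full picture.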

66 sessions across 18 days produced coherent output because no single session needed the full picture. The intelligence was distributed across the accumulated context.


The Story Behind the Math

Early on, most agent executions produced garbage. Spawned, hallucinated, terminated. The instinct was to make each one smarter: bigger models, better prompts, longer context.

The fix was the opposite. Cheaper models. Shorter context. More attempts. But with a ratchet.

The result: 95% of agent executions now produce usable output. Not because individual agents got smarter. Because the system got better at accumulating context and ratcheting progress. A bad execution leaves a learning. A good execution commits and ratchets. Either way, the floor goes up.


So What?

If you're building with AI agents, here's where to start Monday morning:

  1. Create a .agents/learnings/ directory. After every session, success or failure, write what you discovered.
  2. Treat git commits as ratchets, not snapshots. Each commit is a floor that can't drop.
  3. Stop optimizing individual agent runs. Start optimizing what survives between them.

Three domains taught me the same lesson independently. That's how I know it's real.