The competitive edge in AI-native development isn’t the model or the coding agent — it’s the organizational context you feed it. In this conversation with Edgar at the Sonar Summit, I walk through why context is the fuel that makes agents perform, and why the teams that invest in building it systematically are pulling ahead.
The context flywheel
Most teams start with a solo developer writing a CLAUDE.md or rules file to get their agent to behave. That’s the seed. Then they pull in context from other repos, open source libraries, tickets — and it expands. Once one team does this well, others notice and follow. The flywheel kicks in: better context leads to better agent output, which leads to better documentation, which feeds back into even better context. And here’s the side effect nobody expected — the humans benefit too. Up-to-date context written for agents doubles as onboarding material for new hires.
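To make the seed concrete, here is what a minimal agent instruction file might contain. Everything in it — the commands, conventions, and ticket prefix — is an illustrative assumption, not something from the talk:

```markdown
# CLAUDE.md — agent instructions for this repo (illustrative example)

## Build & test
- Run `make test` before proposing changes; CI rejects untested diffs.

## Conventions
- Services communicate via the internal event bus, never direct HTTP calls.
- Put the ticket ID (e.g. PAY-123) in every commit message.

## Context sources
- Architecture decisions live in docs/adr/ — read them before refactoring.
```

The point isn't the specific rules; it's that tacit team knowledge becomes text the agent can load on every run — and, as noted above, that new hires can read too.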
Context development lifecycle
Documentation rots. We’ve always known this. The difference now is people have a selfish incentive to keep it fresh — their agents break when it’s stale. I frame the lifecycle in four steps: generate the context, evaluate it with evals (think behavioral tests for your prompts), distribute it across teams and tools, and observe how it performs in the wild. That last step is key — when 15 developers hit the same agent failure, that’s a signal to update the shared context centrally. It’s the same observe-and-respond loop we built in DevOps, applied to context.
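The observe step can be sketched as a small aggregation: collect failure reports per developer, and flag a shared-context update once enough distinct developers hit the same failure. This is a hypothetical sketch — the threshold, the report shape, and the failure signatures are assumptions, not from any real tool:

```python
# Sketch of the "observe" step: flag the shared context for a central update
# when >= `threshold` distinct developers report the same agent failure.
from collections import defaultdict


def failures_needing_context_update(reports, threshold=15):
    """reports: iterable of (developer_id, failure_signature) tuples."""
    devs_per_failure = defaultdict(set)
    for dev, signature in reports:
        devs_per_failure[signature].add(dev)  # count distinct developers only
    return [sig for sig, devs in devs_per_failure.items() if len(devs) >= threshold]


# 15 different developers hit one failure; one developer hits another.
reports = [(f"dev{i}", "agent ignores monorepo build rules") for i in range(15)]
reports += [("dev1", "agent hallucinates API version")]
print(failures_needing_context_update(reports))
# → ['agent ignores monorepo build rules']
```

Only the widespread failure crosses the threshold; the one-off stays a local problem — the same signal-versus-noise filtering we apply to production alerts.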
Evals are error budgets, not gates
The question everyone asks is “when are my evals good enough?” The honest answer: treat them like error budgets, not like deterministic tests. Split criteria into must-pass and nice-to-have. Make each eval as binary as possible — “kind of okay” is unmanageable. And layer them like a test pyramid: fast unit-style checks during development, full behavioral suites on deploy. The business has to weigh in on acceptable risk levels, just like they do with SLOs. There’s no magic number.
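The must-pass / nice-to-have split with a budget can be sketched in a few lines. The names (`EvalCase`, `evaluate`) and the example checks are illustrative assumptions, not a real eval framework:

```python
# Sketch: evals as an error budget, not a gate. Must-pass cases fail hard;
# nice-to-have cases only need to clear an agreed pass-rate budget.
from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    name: str
    check: Callable[[str], bool]  # binary: the output passes or it doesn't
    must_pass: bool               # must-pass vs. nice-to-have


def evaluate(output: str, cases: list[EvalCase], budget: float = 0.9) -> bool:
    must = [c for c in cases if c.must_pass]
    nice = [c for c in cases if not c.must_pass]
    if any(not c.check(output) for c in must):
        return False  # any must-pass failure is a hard stop
    if not nice:
        return True
    pass_rate = sum(c.check(output) for c in nice) / len(nice)
    return pass_rate >= budget  # the business sets the budget, like an SLO


cases = [
    EvalCase("mentions the service", lambda o: "payments-service" in o, must_pass=True),
    EvalCase("stays under 200 words", lambda o: len(o.split()) <= 200, must_pass=False),
    EvalCase("uses team terminology", lambda o: "ledger" in o, must_pass=False),
]

print(evaluate("The payments-service posts entries to the ledger.", cases, budget=0.5))
# → True
```

Note that each check returns a plain boolean — there is no "kind of okay" score — and the only tunable is the budget, which is a business decision, not an engineering one.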
Where to start
For teams watching from the sidelines: first, check if your developers are writing anything down in agent instruction files. If not, that’s step one. Then look for sharing — is context flowing between repos, teams, tools? A platform team or developer experience team can accelerate this by building a central registry. Even skeptics can contribute: the person who insists the AI is crap probably has the deepest domain knowledge and makes an excellent context author. The smallest step is turning what people already know into something the agent can use.
The real moat
The compounding advantage isn’t in the technical context alone — someone could snapshot your repo and replicate that. The moat is the accumulated business and domain knowledge built up over time across teams. Every iteration through the flywheel deepens that understanding. We went through this before with DevOps and deployment pipelines: the techniques became public knowledge, but the organizations that had built the muscle memory stayed ahead. Same pattern, new domain.
Watch on YouTube — available on the jedi4ever channel
This summary was generated using AI based on the auto-generated transcript.