
Coding agents can write features and fix bugs. The bottleneck has shifted from how fast we can write code to how effectively we communicate what we actually want. Context is the new constraint.
Most teams store context informally: .cursorrules files, scattered .md documents, Slack threads, tribal knowledge that lives in people’s heads. None of it is versioned. None of it is tested. None of it has conflict detection. Context rots and conflicts — outdated guidance actively misleads agents without anyone noticing.
The Context Development Lifecycle (CDLC) applies software engineering discipline to the problem.
Four stages
Generate — Convert implicit organizational knowledge into structured specifications agents can act on. Technical context (coding standards, patterns), project context (timelines, priorities), and business context (customer expectations, compliance requirements). The stuff that’s currently locked in people’s heads or buried in wikis nobody reads.
Evaluate — Test context through scenarios, like test-driven development but for specifications. Define scenarios and evaluate whether the agent’s output matches your intent. If you wouldn’t ship untested code, why ship untested context?
Distribute — Treat context as versioned, published packages with supply chain security. Knowledge needs to scale across organizations the same way dependencies do — with integrity guarantees and update mechanisms.
Observe — Learn from agent behavior in production. Identify gaps where context was insufficient, spot drift where reality diverged from assumptions, and refine iteratively. Close the feedback loop.
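The Evaluate stage can be sketched as a tiny scenario harness, run in CI the way a unit-test suite would be. This is a hypothetical illustration of the idea, not tooling from the article: the `Scenario` class, the keyword check in `evaluate`, and the `run_agent` stub are all assumed names, and `run_agent` would be replaced with a real call to your agent.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One test case for a piece of context: a prompt plus expectations."""
    prompt: str
    must_mention: list  # phrases the agent's output should contain

def run_agent(context: str, prompt: str) -> str:
    # Stub standing in for a real agent call; swap in your agent client here.
    return f"{context} applied to: {prompt}"

def evaluate(context: str, scenarios: list) -> list:
    """Return (prompt, passed) pairs -- a test report for the context itself."""
    results = []
    for s in scenarios:
        output = run_agent(context, s.prompt).lower()
        passed = all(phrase.lower() in output for phrase in s.must_mention)
        results.append((s.prompt, passed))
    return results

# Usage: version the context file, run scenarios on every change,
# and fail the build when a previously passing scenario regresses.
context = "Always use snake_case for Python function names."
scenarios = [Scenario("name a helper for parsing dates", ["snake_case"])]
for prompt, ok in evaluate(context, scenarios):
    print(f"{'PASS' if ok else 'FAIL'}: {prompt}")
```

The keyword check is deliberately crude; in practice the assertion could be an LLM-graded rubric, but the shape is the same: scenarios in, pass/fail out, versioned alongside the context they test.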
The DevOps parallel
This mirrors the cultural shift DevOps brought. Before DevOps, dev and ops had misaligned incentives — dev wanted to ship fast, ops wanted stability. The breakthrough wasn't better tools; it was aligning goals.
The CDLC creates a similar alignment: better context yields better agent output, which gives everyone motivation to invest in knowledge sharing. The people closest to the domain knowledge finally have a direct path to improving what gets built.
Bigger context windows won’t save you
The temptation is to think infinite context windows solve this. They don’t. Throwing more tokens at an agent without governance is like giving a junior developer access to the entire codebase without onboarding. Volume isn’t the problem — quality, relevance, and freshness are.
Context is the new bottleneck, not code.
Community reactions
The LinkedIn discussion surfaced some sharp observations:
- The chicken-and-egg problem — Ruslan Vlasyuk questioned what comes first: the tooling or the methodology. You can’t adopt a lifecycle if the tools don’t exist yet, but nobody builds tools for a methodology no one follows.
- Documentation’s revenge — Richard Gross pointed out the irony: after 60 years of developers avoiding documentation, AI agents are forcing us to finally write it down. The difference is the incentive now — bad docs mean bad output, immediately and visibly.
- This isn’t new — Tom Klaasen connected the CDLC to Peter Naur’s 1985 paper “Programming as Theory Building.” The idea that programming is fundamentally about building shared understanding, not writing code, has been around for decades. AI just made the cost of ignoring it obvious.
- Early tooling — Sylvain Hellegouarch shared unlost, a tool for helping agents recall context through memory capsules. The ecosystem is starting to form.
Read the full article on tessl.io.