Cole Medin’s post on comprehension debt got me thinking. The term is making the rounds — it’s the gap between the code your team has shipped and the code your team actually understands. Unlike technical debt — which you can see and plan around — comprehension debt is invisible until something breaks and nobody knows why.
I’ve been calling it the think tax. Every time you accept AI-generated code without building a mental model of what it does, you’re deferring a payment. And unlike technical debt, which accrues roughly linearly, the think tax compounds exponentially: once you lose your grip on the system’s logic, every subsequent change carries more risk than the last.
The pattern
Anthropic’s own research found developers using AI assistance scored 17% lower on comprehension when learning new libraries. Debugging skills took the hardest hit. Stack Overflow’s 2025 survey reports 85% of developers now use AI tools. That’s a systemic comprehension gap forming across the industry.
The progression is predictable:
- Days 1-30 — Honeymoon. AI delivers real productivity gains. Engineers still review with full context. Risk feels manageable.
- Days 30-180 — Drift. AI-generated code outpaces human-written code in volume. Review rigor declines. “Probably won’t break” becomes the approval standard. Institutional knowledge starts eroding.
- Day 180+ — The cliff. Code surpasses comprehension capacity. Approvals are based on test results, not understanding. Debugging becomes archaeology.
Why this is different from technical debt
Technical debt is a known tradeoff — you ship something suboptimal and plan to fix it later. Comprehension debt is scarier because the code often works. Tests pass. The PR looks reasonable. But the engineer who approved it can’t explain why it works, what assumptions it makes, or how it interacts with the rest of the system.
As one Reddit commenter put it: “We’re getting correct code, but not right code.”
A freelancer reported deleting 90 of 100 files from an AI-built React app. AI-generated comments, nonsensical algorithms, inconsistent patterns. That’s not technical debt — it’s technical foreclosure.
Cognitive underload, not overload
Most people frame this as information overload — too much code, too fast, can’t keep up. But that’s the wrong lens. This is cognitive underload. The AI handles the hard thinking, so your brain never engages deeply enough to form a mental model. You’re not overwhelmed — you’re underloaded. The muscle atrophies from disuse.
It’s the same pattern everywhere. GPS navigation killed spatial awareness. Autocorrect degraded spelling. Autopilot created pilots who can’t hand-fly in emergencies. The tool removes the cognitive effort, which removes the learning.
The dangerous part is that underload feels great. You’re productive. Things are shipping. The feedback loop is all positive — right up until the moment you need to debug something and realize you have no intuition about how the system behaves. Overload gives you a headache. Underload gives you confidence you haven’t earned.
The accountability gap
“Claude wrote it” isn’t a defense. Engineers remain professionally responsible for every line they approve, regardless of origin. But comprehension debt undermines the very accountability structure we depend on.
This connects directly to the trust and accountability problem I wrote about earlier. You can’t be accountable for code you don’t understand. And you can’t build trust in a system where nobody has a mental model of what’s actually running.
Paying the tax
Don’t treat agents as a substitute for thinking.
Some practical approaches that seem to work:
- The explain test — If you can’t explain what a function does and why, you don’t understand it yet. Don’t merge it.
- Write your own comments — If you can’t comment the code yourself, the think tax is unpaid.
- Refactor immediately — The window of context closes fast. Once the code works, the motivation to understand it drops to zero. Refactor while the context is fresh.
- Track the ratio — Know how much of your codebase is AI-generated vs human-written. When AI-generated code dominates, review rigor needs to increase, not decrease.
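The ratio in that last point doesn’t need tooling to get started. Here is a minimal sketch of one way to track it, assuming your team adopts a marker-comment convention for AI-authored regions — the `# ai: begin` / `# ai: end` markers below are a hypothetical convention for illustration, not an established standard:

```python
from pathlib import Path

# Hypothetical team convention: AI-generated regions are fenced with
# these marker comments. Adjust to whatever your team actually uses.
AI_BEGIN = "# ai: begin"
AI_END = "# ai: end"

def ai_ratio(source: str) -> float:
    """Return the fraction of code lines inside marked AI-generated regions."""
    total = ai = 0
    inside = False
    for line in source.splitlines():
        stripped = line.strip()
        if stripped == AI_BEGIN:
            inside = True
            continue  # marker lines themselves aren't counted
        if stripped == AI_END:
            inside = False
            continue
        total += 1
        if inside:
            ai += 1
    return ai / total if total else 0.0

def repo_ai_ratio(root: str) -> float:
    """Aggregate the ratio across all Python files under a directory."""
    sources = [p.read_text() for p in Path(root).rglob("*.py")]
    return ai_ratio("\n".join(sources))
```

The point isn’t the script; it’s making the number visible. A ratio that climbs past some agreed threshold is the signal to tighten review, not loosen it.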
The velocity from AI coding tools is real. But velocity without comprehension isn’t velocity — it’s procrastination with extra steps.
Community discussion
Lakshmi Narasimhan framed it as “the invisible tax you pay when you vibe code.” Alok Nandan mapped out the three-stage progression from honeymoon to cliff. The pattern recognition across these independent observations is striking — everyone is arriving at the same conclusion from different angles.
The Context Development Lifecycle is one answer to this. If the context going into agents is better governed, the output is more predictable, and the think tax gets lower. But it doesn’t disappear. You still have to understand what you’re shipping.