The story starts with a developer getting a feature request, and their first instinct: npm install. We are so focused on reuse that installing dependencies is second nature. But one dependency pulls in dozens of transitive dependencies, and dependencies are everywhere – runtimes, distributions, SaaS services, build toolchains. Each one introduces risk.
When we evaluate a library, we unconsciously assess four dimensions from the Thin Book of Trust: competence (tests, documentation, issue count), reliability (release cadence, stars, articles), sincerity (changelog transparency, open build process), and care (pull request responsiveness, community engagement, language). These are human assessments of human behaviors, even though we think of them as technical decisions. Every dependency is backed by people making promises within their own delivery pipeline.
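To make that usually unconscious assessment concrete, here is a minimal sketch of what an explicit trust rubric could look like. The four dimension names come from the Thin Book of Trust as used in the talk; the signals, scores, example package, and threshold are all hypothetical.

```typescript
// Hypothetical rubric for making the four trust dimensions explicit.
// The dimensions are from the Thin Book of Trust; the scoring scale,
// example values, and threshold below are illustrative only.

type Dimension = "competence" | "reliability" | "sincerity" | "care";

interface DependencyAssessment {
  name: string;
  // Each dimension scored 0..5 by a human reviewer, based on signals such as
  // tests and docs (competence), release cadence (reliability),
  // changelog transparency (sincerity), and PR responsiveness (care).
  scores: Record<Dimension, number>;
  notes: string;
}

function trustScore(a: DependencyAssessment): number {
  const values = Object.values(a.scores);
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

const candidate: DependencyAssessment = {
  name: "example-lib",
  scores: { competence: 4, reliability: 3, sincerity: 5, care: 2 },
  notes: "Solid tests and changelog, but a single maintainer who is slow on PRs.",
};

// A team-chosen threshold: below it, look for alternatives or wrap the library.
if (trustScore(candidate) < 3.5) {
  console.log(`${candidate.name}: consider alternatives or an abstraction layer`);
}
```

The point is not the numbers themselves but that the judgment is written down, so the team can revisit it when the people behind the dependency change.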
The key difference from traditional DevOps thinking is that these external dependencies are outside our control. Promise theory frames this well: every agent makes promises, but a promise does not guarantee an outcome. We depend on others to keep our own promises, but we cannot make promises on their behalf. What we can do is maintain options – multiple libraries, abstraction layers, redundancy.
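One way "maintaining options" shows up in code is an abstraction layer you own in front of a dependency you do not. The sketch below assumes a hypothetical EmailSender interface with two interchangeable adapters; the names and stubbed bodies are illustrative, not a specific library's API.

```typescript
// Illustrative sketch: hide an external dependency behind a small interface
// you own, so swapping providers only touches one adapter.

interface EmailSender {
  send(to: string, subject: string, body: string): Promise<void>;
}

// Adapter for a hosted SaaS provider (details stubbed out).
class SaasEmailSender implements EmailSender {
  async send(to: string, subject: string, body: string): Promise<void> {
    // call the provider's HTTP API here
  }
}

// Fallback adapter, e.g. a self-hosted SMTP relay.
class SmtpEmailSender implements EmailSender {
  async send(to: string, subject: string, body: string): Promise<void> {
    // talk to the relay here
  }
}

// Application code promises only what it controls: it depends on the
// interface, not on any one vendor keeping their promises.
async function notifyUser(sender: EmailSender, user: string): Promise<void> {
  await sender.send(user, "Welcome", "Thanks for signing up");
}
```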
Technical lag, a metric from Tom Mens's research, measures how far the version you are running trails the ideal (latest) version, expressed in time, number of versions, known vulnerabilities, or bugs. The vulnerability budget concept from Pivotal applies this like an SRE error budget: track how far behind you are and set explicit bounds on it. Escape rates measure how many vulnerabilities slip past staging into production, giving the team a baseline to improve against over time.
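A rough sketch of how these metrics could be tracked as plain numbers is shown below. The data shapes, the 30-day / 2-vulnerability budget, and the example dependency are all made up; the point is only that "how far behind are we?" can be a number with a bound on it.

```typescript
// Hypothetical tracking of technical lag, a vulnerability budget, and escape rate.

interface DependencyState {
  name: string;
  currentVersion: string;
  idealVersion: string;        // latest release we could be on
  daysBehindIdeal: number;     // time dimension of technical lag
  knownVulnsInCurrent: number; // vulnerability dimension of technical lag
}

interface VulnerabilityBudget {
  maxDaysBehind: number;
  maxOpenVulns: number;
}

function overBudget(dep: DependencyState, budget: VulnerabilityBudget): boolean {
  return dep.daysBehindIdeal > budget.maxDaysBehind ||
         dep.knownVulnsInCurrent > budget.maxOpenVulns;
}

// Escape rate: of all vulnerabilities found in a period, the share that was
// only caught in production rather than in staging or earlier.
function escapeRate(foundInProduction: number, foundTotal: number): number {
  return foundTotal === 0 ? 0 : foundInProduction / foundTotal;
}

const budget: VulnerabilityBudget = { maxDaysBehind: 30, maxOpenVulns: 2 };
const dep: DependencyState = {
  name: "example-lib",
  currentVersion: "1.2.0",
  idealVersion: "1.4.3",
  daysBehindIdeal: 45,
  knownVulnsInCurrent: 1,
};

console.log(`${dep.name} over budget: ${overBudget(dep, budget)}`);
console.log(`escape rate this quarter: ${escapeRate(3, 12)}`);
```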
Detection tools keep getting better, but triage is where human judgment becomes essential. The cost-of-delay framework helps: if you do nothing and nothing is exploited, you saved the effort; if the exploit lands, the cost is massive. Not every finding carries the same urgency profile, so triage has to weigh likelihood and impact rather than treat every alert the same. Automated fix suggestions with merge advice – showing that many others have already merged this patch successfully – give humans the confidence to act. But even with all of this automation, CI pipelines build confidence, not trust. Confidence is historical probability; trust is something deeper. John Allspaw's question – how long can your system survive without human intervention? – exposes the gap. Is human trust enough? No. But in many cases it is all we have, so invest in making your humans trustworthy alongside your automation.
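Returning to the cost-of-delay triage above, a minimal sketch of the comparison it implies follows. Every field name and number here is illustrative; real triage would plug in the team's own estimates per finding.

```typescript
// Hedged sketch of cost-of-delay triage: compare the cost of patching now
// against the expected cost of waiting. All values are illustrative.

interface Finding {
  id: string;
  patchCostHours: number;     // effort to upgrade or patch now
  exploitProbability: number; // chance it is exploited while we wait (0..1)
  incidentCostHours: number;  // rough cost if the exploit lands
}

// Expected cost of doing nothing for now.
function costOfDelay(f: Finding): number {
  return f.exploitProbability * f.incidentCostHours;
}

// Triage rule: act when delaying is expected to cost more than patching.
function shouldPatchNow(f: Finding): boolean {
  return costOfDelay(f) > f.patchCostHours;
}

const finding: Finding = {
  id: "CVE-XXXX-YYYY", // placeholder identifier
  patchCostHours: 4,
  exploitProbability: 0.1,
  incidentCostHours: 200,
};

console.log(`${finding.id}: patch now? ${shouldPatchNow(finding)}`);
```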
Watch on YouTube – available on the jedi4ever channel
This summary was generated using AI based on the auto-generated transcript.