
Sandboxing AI Agents: From dclaude to ADDT

tools 2 min read

When AI coding agents can navigate your filesystem, one wrong move and they’re “helpfully” editing files in your production branch while you’re working on a feature. I built two tools to solve this — starting with a focused wrapper, then generalizing it into something any agent can use.

dclaude: Containing Claude Code

The first iteration was dclaude — a containerized wrapper for Claude Code that isolates filesystem access using Docker. It’s a drop-in replacement with identical CLI syntax: instead of claude "refactor this", you run dclaude "refactor this", and the agent only sees the directories you mount.

  • Passes through all Claude Code CLI flags
  • Auto-builds the container image on first run
  • Single-file install via curl
  • Optional SSH forwarding, GPG signing, Docker-in-Docker
  • Persistent container mode for faster iteration
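
Conceptually, the containment is just Docker with a narrow bind mount. A minimal sketch of the idea (not dclaude's actual invocation, and the image name is made up):

    # Simplified illustration of the containment model, not dclaude's real command.
    # The agent runs inside the container and can only touch the mounted project dir.
    docker run --rm -it \
        -v "$PWD":/workspace \
        -w /workspace \
        dclaude-image claude "refactor this"

Everything outside the mounted directory simply doesn't exist from the agent's point of view.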

dclaude started as a bash script; I ported it to Go when the shell complexity got out of hand.

ADDT: AI Don’t Do That

The idea quickly generalized. ADDT (AI Don’t Do That) is a universal tool for safely running any AI coding agent in isolated Docker containers: addt run <agent>.

What it adds beyond basic containerization:

  • Network firewall — restrict which domains an agent can reach
  • Resource limits — configurable CPU and memory constraints for runaway processes (sketched below)
  • Multi-agent support — extensions for Claude, Codex, Gemini, Copilot, Cursor, Amp, Kiro
  • Orchestration platforms — works with Claude Flow and multi-agent setups
  • Transparent aliasing — make it invisible so you just type claude as usual
  • Custom extensions — just a config file and a shell script
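
The resource limits, for instance, map onto standard Docker controls. A rough sketch of the equivalent raw flags (illustrative values and a made-up image name, not ADDT's actual command):

    # Illustrative only: CPU and memory caps in raw Docker terms.
    # ADDT wires up constraints like these for you; the values here are arbitrary.
    docker run --rm -it \
        --cpus 2 \
        --memory 4g \
        -v "$PWD":/workspace \
        -w /workspace \
        sandboxed-agent-image

Domain-level network filtering is not a single Docker flag, which is exactly the kind of plumbing worth delegating to a tool.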

Install via Homebrew, point it at your project, and your agents run sandboxed without knowing the difference.
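
Day to day, the transparent aliasing can be as simple as a shell alias (a sketch; ADDT's own setup may work differently):

    # Make the sandboxed run look like the normal CLI.
    alias claude='addt run claude'

    # From here on, this runs Claude Code inside the isolated container:
    claude "refactor this"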

The core insight: the isolation layer should be agent-agnostic. The safety guarantees shouldn’t depend on which AI you’re using.


Originally posted on LinkedIn
