I’m Maxwell Collins. I run Code-Rescue from Tampa, Florida. For the past few months I’ve been on an active rebuild for a private client. Six products, one shared backend. Solo. Using Claude Code as the entire engineering team.
I’m not a programmer in the classical sense. I started teaching myself to code at 12 (RuneScape private servers, Java and C++), studied economics in college, ran construction projects, sold Mercedes for a year, started and closed a construction firm, and ended up back in software after a spinal injury made physical work impossible. What I’m good at is architecting systems and verifying mechanically that the system does what the specification says. The thing I’m worst at is writing the code myself. What I’ve actually gotten good at is operating an engineering team of AI agents, with enough enforcement around them that the code they produce survives due diligence.
DRDD is what I call the way I work. The short version: if you can’t write an assertion for a specification, you don’t have a specification yet. Rules get enforced at every boundary (write-time, commit-time, run-time), not left to the discipline of whoever reads the prose. Source code has no authority. When the rule and the code disagree, the code is wrong.
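To make the litmus test concrete, here is a hypothetical illustration (not a rule from my actual rule files): the prose rule "all timestamps are UTC ISO-8601" only becomes a specification once it can fail mechanically.

```javascript
// Hypothetical example of the "no assertion, no specification" test.
// The rule is the assertion; prose that cannot be written this way
// is not yet a specification.
function assertUtcIso8601(value) {
  // Must parse as a date, and must carry an explicit UTC marker,
  // not an implicit local-time reading.
  const parsed = new Date(value);
  if (Number.isNaN(parsed.getTime()) || !/Z$/.test(value)) {
    throw new Error(`spec violation: "${value}" is not a UTC ISO-8601 timestamp`);
  }
}

assertUtcIso8601('2026-03-19T14:05:00Z');   // passes silently
// assertUtcIso8601('2026-03-19T14:05:00'); // would throw: no UTC marker
```

Once the rule exists in this form, it can be wired into any of the enforcement boundaries (write-time, commit-time, run-time) instead of living in a document nobody re-reads.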
The current rebuild runs on 42 rule files, about 3,500 rules across three layers (spec, consistency, adversarial), and a five-layer enforcement stack: 48 hookify rules, 41 ast-grep patterns, 26 lifecycle hooks, 20 custom skills, and 5 custom review agents. Every piece of that exists because a specific thing broke. The worktree disaster on March 19, 2026 cost me 20 hours of work and 20-plus commits, and is why git worktree is blocked in three different ways now. The 4,800 fake-tests weekend on March 30, 2026 is why every it() block in this codebase has to call a real procedure or the gate rejects it. The Great Purge on February 26, 2026, when 184 AI-generated architecture documents were found to contradict the actual source code, is why every claim in the rebuild now has to trace to a specific file or rule ID.
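As a flavor of what a fake-test gate looks like, here is a minimal sketch. It is not my implementation (the real gate works on the AST, not regexes, and "calls a real procedure" is a stronger check than this); the hypothetical heuristic below only flags it() blocks whose body is empty or contains nothing but a tautological expect(true) assertion.

```javascript
// Minimal sketch of a fake-test gate (illustration only, not the real gate).
// A test body that is empty, or that only asserts expect(true).toBe(true),
// exercises no real procedure and should be rejected.
const FAKE_BODY = /^\s*(expect\(true\)\.toBe\(true\);?\s*)?$/;

function findFakeTests(source) {
  const fakes = [];
  // Match it('name', () => { ...body... }) and capture the body.
  const itBlock = /it\(\s*(['"`])(.*?)\1\s*,\s*(?:async\s*)?\(\)\s*=>\s*\{([\s\S]*?)\}\s*\)/g;
  let m;
  while ((m = itBlock.exec(source)) !== null) {
    if (FAKE_BODY.test(m[3])) fakes.push(m[2]);
  }
  return fakes;
}

const sample = `
it('computes totals', () => { expect(sum([1, 2])).toBe(3); });
it('handles edge cases', () => {});
it('validates input', () => { expect(true).toBe(true); });
`;
console.log(findFakeTests(sample)); // ['handles edge cases', 'validates input']
```

Run as a commit-time hook over the test files, a check like this turns "write real tests" from a request into a boundary: the fake test never lands, so it never has to be found later.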
I’m publishing because I’ve been wrong enough times in private to want to be wrong in public instead. Vague methodology doesn’t survive being written for a reader; half my rules got sharpened the first time I tried to explain them to someone who wasn’t me. The other reason is that the AI-augmented software industry is currently chasing bigger prompts and smarter models, which is the wrong problem. Mechanical boundaries are what actually make AI-generated code safe to ship.
What you’ll find here: the manifesto, the rules as I write them (origin, text, testable assertion, enforcement, violation closed), and field notes when something I did produced a failure worth learning from. What you won’t find: case studies with client names attached, hot takes on model releases, AI hype.
Contact
max@code-rescue.com. I read everything. I respond to most of it, slowly.