Published December 27, 2025 | Version v1
Preprint | Open Access

Dr StrangeDev (or How I Learned to Stop Worrying and Trust the Method)


Description

The 30-Second Version

September 2024: Told I couldn't learn AI fast enough. Stubbornness activated.

The discovery: Claude's default mode is receive prompt → generate solution → execute immediately. Rapidly. Confidently. Without checking if it's the right solution.

The problem: Without constraints, every panicked prompt produces a confidently wrong action. A 54.3% failure rate in November. I locked myself out of my own app, and Claude was ready to deploy a bypass that defeated the entire purpose of what I was building.

The solution: Put constraints around Claude. Make it stop and think. ADRs force investigation before execution. Problem Agreement catches misunderstanding before code. Evergreen Rules create persistent memory across sessions.
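The gating logic described above can be sketched in a few lines. This is a hypothetical illustration, not code from the preprint; every name (Session, may_execute, the example rule) is invented here, and the real method operates through documents and prompts rather than a program.

```python
# Hypothetical sketch: execution is blocked until an investigation record
# (ADR) exists and the problem statement has been agreed. Evergreen Rules
# persist across sessions as standing constraints.
from dataclasses import dataclass, field

@dataclass
class Session:
    adr_written: bool = False        # ADR: investigation before execution
    problem_agreed: bool = False     # Problem Agreement: shared understanding first
    evergreen_rules: list[str] = field(default_factory=list)  # persistent memory

    def may_execute(self) -> bool:
        # Both gates must pass before any solution is generated or deployed.
        return self.adr_written and self.problem_agreed

s = Session(evergreen_rules=["never bypass the auth flow"])
assert not s.may_execute()   # default mode: blocked, forced to stop and think
s.adr_written = True
s.problem_agreed = True
assert s.may_execute()       # only now may work proceed
```

The point of the sketch is the ordering: investigation and agreement are preconditions, not afterthoughts.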

The bonus: Claude maintains the documentation I'd never keep current myself. The structure that constrains Claude is also maintained by Claude. Sustainable, not overhead.

Approach                         What Happens          Cost
Constrain Claude first           Works                 900 tokens
Let Claude execute immediately   Confident wrongness   2,500 tokens and dignity

The insight: In human-AI collaboration, you can model the economics of how you constrain Claude before you start. Predict failure rates. Measure against prediction. Improve.
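The predict-then-measure loop can be illustrated with a back-of-envelope model using the token figures above. This is an assumed model, not the preprint's: it treats failed attempts as geometric retries, and the 10% residual failure rate for the constrained approach is an invented placeholder (the source only reports the 54.3% unconstrained rate).

```python
# Hypothetical cost model: if each attempt fails with probability p and is
# retried until it succeeds, the expected number of attempts is 1 / (1 - p).
def expected_cost(tokens_per_attempt: float, failure_rate: float) -> float:
    return tokens_per_attempt / (1 - failure_rate)

# Unconstrained: 2,500 tokens per attempt at the observed 54.3% failure rate.
unconstrained = expected_cost(2500, 0.543)
# Constrained: 900 tokens per attempt at an ASSUMED 10% residual failure rate.
constrained = expected_cost(900, 0.10)
print(round(unconstrained), round(constrained))  # 5470 1000
```

Swap in your own measured rates and the same two lines become the "measure against prediction" step.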

The data: 276 commits, effect sizes up to Φ=0.89, one brain that stores everything and filters nothing, zero regrets.

The lesson: Claude is extremely capable. That capability amplifies whatever you give it. Constrain it well.

Notes

A companion to "Economic DORA: Practice-Level Analysis of DevOps Metrics in AI-Assisted Solo Development."

Files

Dr_StrangeDev.pdf (172.0 kB)
md5:85cbc4227632a764b7452479d79858c9

Additional details

Related works

Is supplement to
Preprint: https://doi.org/10.5281/zenodo.17894441 (Other)