Controlled unease with AI coding

When driving AI agents to write code, I noticed a new feeling I've been trying to put into words. I finally have a name for it: "controlled unease while vibe coding." It hits me whenever I let AI write big chunks of code.

The AI gets to work. Lines of code appear. Too fast to read everything. Seems OK, I guess. Then it generates more. "I would definitely have written that differently." It keeps going. Ugh, a pointless null check, needless complexity. But I know the code will work. The AI pours out more code. It's finally done. I run git diff. I skim the changes. It's not great, but the code looks fine. I test the application by hand. I click around: all working. I make a mental note of where the skeletons are buried. And I move on. I merge to main.

This approach is ridiculously effective. You ship features at insane speed. Obviously, I'd never do this for mission-critical software. But for cranking out features that might not even belong in the product after all? Fantastic.

The unease is always there, even if it's controlled. The aesthete in me cringes at the code. The nagging feeling never goes away: what if something catastrophic happens? Or (a bit less dramatically): what if the AI has programmed me into a corner that will be tough to get out of later? Sometimes I scrap everything the AI wrote. Other times I've fed error after error back to the AI, refusing to admit I should've just coded it myself from the start. Just one more error message; I need to see this thing run to the end before I tear it down and rebuild.

The more you use these tools, the better you tame that unease. The more you prompt, the better you learn when to trust the vibes. You learn how to structure your files. You know which tasks the AI excels at. And you get excited about shipping at this velocity. Experience transforms unease into calculated risk. It feels more controlled. And you hand the AI bigger chunks of work.

Eventually you might embrace the new rhythm: let the AI build fast, accumulate technical debt in controlled areas, then go on targeted refactoring sprees when the unease becomes unbearable. I have found, more often than not, that "terrible" AI code outlives your initial forecast for it.

Ironic postscript: I tried writing this post with AI and failed, big time. I tried o3 and ChatGPT 4.5, but the results weren't good. The AI was only helpful as a sounding board, summarizing the initial braindump that became this article.