Monday, November 03, 2025

When AI Makes Good Practices Almost Free

Since I started working with AI agents, I've had a feeling that was hard to explain. It wasn't so much that AI made work faster or easier, but something harder to pin down: the impression that good practices had become much easier to apply, that most of the friction of introducing them had disappeared, and that many things which used to require effort, planning, and discipline now happened almost on their own.

That intuition had been haunting me for weeks. Then this week, in the space of just three or four days, two very concrete examples put it right in front of me.

The Small Go Application

This week, a colleague reached out to tell me that one of the applications I had implemented in Go didn't follow the team's architecture and testing conventions. They were absolutely right: I hadn't touched Go in years and, honestly, I didn't know the libraries we were using. So I did what I could, leaning heavily on AI to get a quick first version as a proof of concept to validate an idea.
Helpfully, my colleague also sent me a link to a Confluence page documenting the architecture and testing conventions, along with a link to another Go application I could use as a reference.

A few months ago, changing the entire architecture and testing libraries would have been at least a week of work. Probably more. But in this case, with AI, I had it completely solved in just two or three hours. Almost without realizing it.

I downloaded the reference application and asked the AI to read the Confluence documentation, analyze the reference application, and generate a transformation plan for my application. Then I asked it to apply the plan; no adjustments were needed, only small interactions to decide when to commit or to approve certain operations. In just over two hours, and barely paying attention, I had the entire architecture migrated to a hexagonal design and all the tests updated to the new libraries. It felt almost effortless.
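
For anyone who hasn't seen it in Go, here is a minimal sketch of the shape that kind of refactor aims for: the ports-and-adapters structure behind the word "hexagonal". Every name below is hypothetical and purely illustrative; none of it comes from the actual application.

```go
// Minimal ports-and-adapters (hexagonal) sketch. All names are illustrative.
package main

import "fmt"

// Port: the domain declares the interface it needs from the outside world.
type OrderRepository interface {
	Save(id string) error
}

// The core service depends only on the port, never on a concrete technology.
type OrderService struct {
	repo OrderRepository
}

func (s OrderService) Place(id string) error {
	// Domain logic would live here; persistence goes through the port.
	return s.repo.Save(id)
}

// Adapter: an in-memory implementation that could be swapped for a real
// database adapter, or for a test double, without touching the domain.
type InMemoryRepository struct {
	orders map[string]bool
}

func (r *InMemoryRepository) Save(id string) error {
	r.orders[id] = true
	return nil
}

func main() {
	repo := &InMemoryRepository{orders: map[string]bool{}}
	svc := OrderService{repo: repo}
	fmt.Println(svc.Place("order-42")) // prints <nil> on success
}
```

The point of that structure is exactly what made the migration so mechanical: the domain code stays put while the adapters and the test setup around it get swapped out.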

It was a small app, maybe 2000 to 3000 lines of code and around 50 tests, but still, without AI, laziness would have won and I would have only done it if it had been absolutely essential.

The cost of keeping technical coherence across applications has dropped dramatically. What used to take serious effort now happens almost by itself.

The Testing That Stopped Hurting

A few days later, I ran into a similar situation, this time in Python. Something was nagging at me: some edge cases weren't well covered by the tests. I decided to use mutmut, a mutation testing library I'd tried years ago but usually skipped because the juice rarely seemed worth the squeeze.

This time I added the library, got it configured in minutes with the AI's help, and then basically went on autopilot: I generated the mutations and told the AI to work through them one by one, analyzing each mutation and creating or modifying whatever tests were needed to cover those cases. The process required almost no effort from me; the AI did all the heavy lifting. I just prioritized a few cases and gave the tests a quick once-over to check that they followed the style of the existing ones.
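
Getting mutmut into a project really is a small amount of configuration. As a rough, hypothetical sketch (the paths and runner command below are assumptions, not the actual project's layout, and newer versions of mutmut configure things somewhat differently), the setup can live in a short setup.cfg section, after which `mutmut run` executes the mutations and `mutmut results` lists the ones that survived:

```ini
# Hypothetical mutmut 2.x-style configuration; the paths and the runner
# command are assumptions, not the real project's layout.
[mutmut]
paths_to_mutate=src/
tests_dir=tests/
runner=python -m pytest -x -q
backup=False
```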

A couple of hours later, the difference was night and day. My confidence in the project's tests had shot up, and the effort? Practically nothing.

The Intuition That Became Visible

These two examples, almost back-to-back, confirmed the intuition I had been carrying since I started working with AI agents: the economy of effort is changing. Radically.

Refactoring, keeping things coherent, writing solid tests, documenting decisions... None of that matters less now. What has changed is the cost. And when the cost drops to nearly zero, the excuses should vanish too.

If time and effort aren't the issue anymore, why do we keep falling into the same traps? Why do we keep piling on debt and complexity we don't need?

Perhaps the problem isn't technical. Perhaps the problem is that many teams have never really seen what sustainable code looks like, have never experienced it firsthand. They've lived in the desert so long they've forgotten what a forest looks like. Or maybe they never knew in the first place.

Beth Andres-Beck and Kent Beck use the forest and desert metaphor to talk about development practice. The forest has life, diversity, balance. The desert? Just survival and scarcity.

For years I've worked in the forest. I've lived it. I know it's possible, I know it works, and I know it's the right way to develop software. But I also know that building and maintaining that forest was an expensive discipline. Very expensive. It took mature teams, time, constant investment, and a company culture that actually supported it.

Now, with AI and modern agents, building that forest costs almost the same as staying in the desert. The barrier has dropped dramatically: it's no longer effort or time, it's simply deciding to do it and knowing how.

The question I'm left with is no longer whether it's possible to build sustainable software. I've known that for years. The question is: now that the cost has disappeared, will we actually seize this opportunity? Will we see more teams moving into that forest that used to be out of reach?