Friday, April 04, 2025

Vibe coding II: when flow meets tests

A few weeks ago, I wrote about vibe coding as a light, curiosity-driven and deeply personal way to build small projects without pressure. If you haven’t read it yet, here’s the link: Vibe coding: building things from curiosity and flow.

Back then, I shared how the SimpleCalendar app and the eferro Picks site came to life without a fixed plan: just following my instincts, doing what felt good in the moment, and playing with tools like Lovable and Cursor. But once SimpleCalendar started to feel “good enough,” a natural question popped up:

What if I want to improve it later? How do I avoid breaking things that already work?

A new phase of the experiment

That’s when a second phase of the experiment started: introducing a strong testing strategy, not as part of the design, but as a safety net for the future.

I didn’t use TDD. Quite the opposite — tests came after the fact. And that changes things. They didn’t help shape the architecture or drive design decisions. The structure I ended up with was just what had emerged from the flow — with its quirks and lucky guesses. But what tests did give me was confidence. Confidence to touch code without fear. Confidence to think about new features without worrying about regressions.
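
To make that concrete, here’s the kind of after-the-fact test I mean: it doesn’t drive any design decision, it just pins down behavior that already exists so I can refactor without fear. A minimal sketch assuming Vitest and React Testing Library; the MonthGrid component and its props are hypothetical stand-ins, not SimpleCalendar’s actual code.

    // Characterization test: pin down behavior that already exists.
    // Assumes Vitest + React Testing Library; MonthGrid and its props
    // are hypothetical, not SimpleCalendar's real API.
    import { render, screen } from "@testing-library/react";
    import { describe, expect, it } from "vitest";
    import { MonthGrid } from "./MonthGrid";

    describe("MonthGrid (characterization)", () => {
      it("renders one cell per day of March 2025", () => {
        render(<MonthGrid year={2025} month={2} />); // month is 0-indexed
        expect(screen.getAllByRole("gridcell")).toHaveLength(31);
      });
    });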

What test comes next?

One of the most interesting parts of this phase was using AI again — but in a different role.

I wasn’t asking it to write code so much as to help me decide which test would give me the most confidence next.

Sometimes it worked really well. It pointed to areas I hadn’t thought of testing yet.
Other times… well. Let’s just say I found myself in a trial-and-error loop, poking at things, trying to get the test to pass. With little frontend experience, I often felt like a kid blindly swinging a stick at a piñata. Try, miss, try again… until something clicked.


And in a couple of cases, I got stuck in endless loops. The best thing I could do was revert to the last stable point. Small steps and good version control — still essential, even (especially) when working with AI.

An emerging strategy

Looking back, the test suite ended up with a pretty solid and layered structure:

  • I started with basic unit tests — validating components and utilities.
  • Then came integration and hook tests, covering state management and interactions (see the sketch after this list).
  • Later, I added coverage tools, edge cases, quarter navigation, grid behavior…
  • And finally, I polished it: test structure, readability, timezone handling, transitions…
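
As an example of that second layer, this is roughly what a hook test looked like. Again a sketch under assumptions: React Testing Library’s renderHook with Vitest, and useQuarterNavigation as a hypothetical name for the kind of quarter-navigation logic the app has, not its actual hook.

    // Hook test for quarter navigation. useQuarterNavigation is a
    // hypothetical hook, assumed to expose next() plus the current
    // year and quarter.
    import { act, renderHook } from "@testing-library/react";
    import { describe, expect, it } from "vitest";
    import { useQuarterNavigation } from "./useQuarterNavigation";

    describe("useQuarterNavigation", () => {
      it("rolls over from Q4 2025 to Q1 2026", () => {
        const { result } = renderHook(() =>
          useQuarterNavigation({ year: 2025, quarter: 4 })
        );
        act(() => {
          result.current.next();
        });
        expect(result.current.quarter).toBe(1);
        expect(result.current.year).toBe(2026);
      });
    });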

It turned into a kind of after-the-fact test pyramid. And even though it didn’t help me design better code, it gave me a real sense of safety moving forward.


[Screenshot: current test suite execution]

Assisted refactoring: the other half of the experiment

During the initial build phase, I made it a point to regularly pause and ask the AI to look for refactors, simplifications, or unused code.
That played a big role in keeping complexity under control. When the time came to introduce tests, the codebase was in a reasonably clean and manageable state — not by accident, but by design.

It turns out you can keep things tidy even in flow mode if you ask the right questions. And if you guide it well, the AI can be surprisingly helpful with that too.

Complexity is still our problem

And here’s something I really want to highlight — especially now that we can build so fast, test ideas on the fly, and move with the kind of speed that used to feel like sci-fi: complexity is still our responsibility.

Just because something works doesn’t mean it’s well built.
Just because we built it fast doesn’t mean it will survive the next change.
The temptation to say “let the AI deal with the mess” is real — but dangerous.
Complexity kills, with or without AI.

And because it’s now so easy to build things, it’s more important than ever to keep complexity in check.

When vibe works — and when it doesn’t

This AI-assisted approach worked really well for these small apps with no hard requirements — where discovering what I wanted was part of the process.

But for components or applications that need to live inside a broader, evolving ecosystem, this way of working wouldn’t be appropriate — at least not today.

We’re still learning how AI can support sustainable, scalable product development within a real team, with real constraints, and real users.

That’s a different kind of challenge. And we’re just getting started.

"We are uncovering better ways of developing software by doing it and helping others do it."
