Thursday, April 10, 2025

Vibe Coding III: Complexity Creeps—Unless You Don’t Let It

Getting Back to My Picks

Over the past few days, I revisited the development of eferro Picks. The project had been well received, and I’d gotten some interesting feedback, so I decided to give it a proper push. But this time, it wasn’t just another playful exploration. The project had grown in complexity and was starting to demand a more deliberate approach—especially if I wanted it to scale or stay maintainable.

My first step: simplify the code, strip out anything superficial, and wrap everything in tests to make the project more sustainable.

The next step was to process all the feedback I had received about the site and implement some improvements that would make the experience smoother and more intuitive. This included better navigation, clearer tooltips, and a toggle to filter talks that actually had notes. It was a great excuse to turn real user feedback into practical functionality—while continuing the vibe coding experiment in a slightly more purposeful direction.

https://eferro.github.io/eferro-picks-site/

The AI as My Pair Programming Partner (With Superpowers and Flaws)

For this session, I continued my vibe coding experiment, but with a twist. Since my experience with front-end technologies like TypeScript, React, and Tailwind CSS is limited, I focused solely on accepting or rejecting the AI's proposed changes. I didn’t review or modify the code directly—instead, I observed the AI’s behavior and guided its direction from a higher level. It was pure vibe coding: trusting the AI and seeing where it would lead.

I used the Cursor Pro IDE, following the vibe coding rules I outlined in the first post of this series (Vibe coding: building things from curiosity and flow). That meant focusing on flow, intuition, and rapid iteration—even when partnering with an AI. Those original principles shaped how I interacted with it, emphasizing exploration over rigid planning.

In this new phase, the AI stopped being just a tool and started acting more like a pair programming partner. One that—despite chaotic moments—can be surprisingly effective when guided with care and intention. I say “guide” because without boundaries, it tends to suggest overly generic or needlessly complex solutions. Almost like it's channeling the collective ego of every public repo out there.

Left unchecked, the AI naturally gravitates toward generality, abstraction, and unnecessary flexibility—just like many of us do when we’re overthinking or trying to be clever.

This approach helped me refactor with focus, prioritizing simplicity and clarity. But it wasn’t all smooth sailing—and I think it’s important to be transparent about that.

During the process:
  • Two or three times, I had to stop because the AI entered loops it couldn’t escape.
  • Once, it even pushed invalid changes directly to production (I practice trunk-based development).
  • Two or three times, I had to revert committed changes just to get back to a stable state.
  • At least twice, when a test wouldn’t stabilize, the best option was to delete it, move on, and return later with fresh eyes. That worked far better than endless poking.


These moments—while frustrating—reinforced something I already believed: the only way to build sustainably with this much raw power at my fingertips is to work in small, safe steps.


The Power of Small Safe Steps (Now More Than Ever)

One of the clearest takeaways from this session: with this much speed and assistance, working in Small Safe Steps becomes more essential than ever.

It’s valuable to know what I want to achieve, have techniques to move forward in parallel, and break tasks into manageable chunks. In practice, most of these “steps” were completed in sessions of 25 minutes or less. Each one designed to be:
  • Small: a change that takes just a few minutes.
  • Safe: unlikely to break production code or existing tests.
Also, due to my current responsibilities, I can only code in short, scattered bursts. I no longer have the luxury of regular pair or ensemble sessions. That’s why the benefits of small steps are fundamental for me—especially:
  • Interruptibility: I can pause anytime without losing the thread.
  • Safety: Each change is low-risk and easy to roll back.
  • Autonomy: I can keep moving forward, even solo, without creating chaos.


This way of working also offers continuous feedback. GeePaw Hill’s article MMMSS: The Intrinsic Benefit of Steps describes this beautifully. I highly recommend reading the full Many More Much Smaller Steps series—it sticks with you.

Small Improvements, Real Impact

In these sessions, I also tackled some of the feedback I had received.

For example, the note icon next to some talks was unclear. After digging into the data with the AI, we discovered that many of those records didn’t contain actual notes—just whitespace or line breaks. So:
  • I implemented a filter to clean those up.
  • Added a tooltip explaining what the icon meant.
  • And (because experimenting is cheap) added a toggle to filter talks with real notes.
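The notes cleanup above can be sketched as a small predicate. This is a minimal sketch, not the site’s actual code: the `Talk` shape and the `hasRealNotes` name are hypothetical.

```typescript
// Hypothetical shape of a talk record; the real site's type may differ.
interface Talk {
  title: string;
  notes?: string;
}

// A note counts as "real" only if it contains something besides whitespace.
function hasRealNotes(talk: Talk): boolean {
  return talk.notes !== undefined && talk.notes.trim().length > 0;
}

// Applying the filter: keeps only talks whose notes are more than
// whitespace or line breaks.
const talks: Talk[] = [
  { title: "Simple Made Easy", notes: "Great intro to simplicity." },
  { title: "Whitespace only", notes: "\n  \n" },
  { title: "No notes at all" },
];
const withNotes = talks.filter(hasRealNotes);
```

The same predicate can then drive both the note icon and the “has notes” toggle, so the UI never shows an icon for a whitespace-only record.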
Another piece of feedback was trickier: when users navigated to a talk and returned, they lost their filters and scroll position.

This was tough—mainly due to my lack of frontend experience. The AI kept trying random libraries and approaches, switching strategies mid-flight. It felt like pure trial-and-error with no clear rationale.
Things only improved when I stepped in with a specific suggestion:
“Can you change the approach and just use a specific key for the index (scroll_index) so we can use this key when coming back? This way we avoid the problem of overwriting the key.”
The AI immediately responded:
“Ah, that’s a much simpler approach! Instead of handling different paths and normalizing them, we can:
  • Use a specific key for the index page scroll position
  • Save it when leaving
  • Restore it when returning
  • No need to handle paths or base URLs. Let me update the code.”
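A minimal sketch of that fix, with storage abstracted so the idea stands out. The helper names are illustrative, not the site’s actual code; only the dedicated `scroll_index` key comes from the conversation above.

```typescript
// Dedicated key for the talks index page, so navigating to a talk's
// detail page can never overwrite the saved position.
const SCROLL_INDEX_KEY = "scroll_index";

// Minimal storage interface: the browser's sessionStorage satisfies it;
// a Map-backed object stands in for it here.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Save the index page's scroll offset when leaving it.
function saveIndexScroll(store: KeyValueStore, scrollY: number): void {
  store.setItem(SCROLL_INDEX_KEY, String(scrollY));
}

// Restore it when returning; default to the top if nothing was saved.
function restoreIndexScroll(store: KeyValueStore): number {
  const saved = store.getItem(SCROLL_INDEX_KEY);
  return saved !== null ? Number(saved) : 0;
}

// In-memory stand-in for sessionStorage, for illustration.
const memory = new Map<string, string>();
const store: KeyValueStore = {
  getItem: (k) => memory.get(k) ?? null,
  setItem: (k, v) => { memory.set(k, v); },
};

saveIndexScroll(store, 1280);
const restored = restoreIndexScroll(store);
```

Because the key is fixed, there is no path normalization to get wrong: detail pages simply never touch it.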
This moment captured a recurring pattern: complexity is still my responsibility.
Left unchecked, the AI naturally gravitates toward generality, abstraction, and unnecessary flexibility—just like many of us do when we’re overthinking or trying to be clever.

But if I pause, question, and steer things back toward simplicity, the AI follows—with surprising grace.

I can (and must) be the guardian of simplicity—with or without AI.

A Retrospective with the AI (and Tangible Results)

In the middle of all this, I ran a mini-retrospective. I asked the AI:
“We’ve been working together for a while. I think you already know what concerns me. Can you describe our workflow and what behaviors you've changed based on my instructions?”
The AI replied:
“Our workflow has evolved into a highly collaborative process where:
  • We work in small, focused steps
  • Each change is preceded by clear reasoning
  • You maintain control
  • I provide options and wait for your decisions
  • We verify each change before moving forward
  • Communication is direct and natural
I’ve adapted to:
  • Present one change at a time
  • Show my thought process
  • Ask for confirmation before proceeding
  • Focus on practical, simple solutions
  • Maintain test coverage
  • Keep methods small and focused
Would you like me to adjust anything? 😊”
Then I followed up:

“Several times I’ve had to revert your changes because you didn’t run the tests before committing. Sometimes even several commits at once.
  • Why do you think this happened?
  • How could we have avoided it?
  • What have we learned?
  • How should we adapt to keep complexity under control?”
The AI responded:
“To improve our process, I suggest:
New workflow steps:
  • Make one small change
  • Run tests immediately
  • Fix issues
  • Get your review
  • Only then commit
  • Never push without your explicit approval
Quality gates:
  • All tests must pass
  • Changes must be minimal
  • Each commit should be independently testable
  • No pushing without verification
Communication improvements:
  • Always report test status
  • Ask before pushing
  • Be explicit about what’s tested
  • Show test results

Would you like me to implement these changes in our workflow? 😊”
And the best part? It followed through. For the rest of the session, it ran tests before suggesting anything and asked for confirmation before every commit or push.

Closing: Being a Responsible Coder, Even When Alone

Will the AI forget all this once I close the IDE? Probably. But I’ve learned that these kinds of iterations and retrospectives have value. Therapeutic for me. Educational for the AI—at least within the session.

This experience reinforces the critical importance of working in Small Safe Steps, especially when collaborating with powerful AI. It’s not just about speed, but about maintaining control and understanding.

Continuous simplification is key. Left unchecked, complexity always grows. I have to actively fight that—and help the AI do the same.

Ultimately, it’s up to me to be the guardian of simplicity.

And given my current role, I don’t get to code as much as I’d like—certainly not in pair or ensemble settings. So this is the closest I get to coding responsibly. Because anyone who knows me knows:
 Leaving me to code alone is rarely a good idea. :)

