Saturday, October 11, 2025

Good talks/podcasts (Oct I) / AI & AI-Augmented Coding Edition!

These are the best podcasts/talks I've seen or listened to recently. As a reminder: all of these talks are interesting, even if you only listen to them.

You can now explore all the recommended talks and podcasts interactively on our new site. The new site allows you to:
  • 🏷️ Browse talks by topic
  • 👤 Filter by speaker
  • 🎤 Search by conference
  • 📅 Navigate by year
Feedback Welcome!
Your feedback and suggestions are highly appreciated to help improve the site and content. Feel free to contribute or share your thoughts!

Friday, October 10, 2025

AI and Lean Software Development: Reflections from Experimentation

Exploring how artificial intelligence might be changing the rules of the game in software development - preliminary insights from the trenches

An exploration into uncharted territory

I want to be transparent from the start: what I'm about to share are not definitive conclusions or proven principles. These are open reflections that emerge from a few months of intense personal experimentation with AI applied to software development, exploring its possibilities and trying to understand how this affects the Lean practices we typically use.

These are open reflections I'd like to keep experimenting with and discussing with others interested in the topic. I'm not speaking as someone who already has the answers, but as someone exploring fascinating questions and suspecting that we're facing a paradigm shift we're only beginning to understand.

The fundamental paradox: speed versus validation

A central idea I'm observing is that, although artificial intelligence allows us to work faster, this doesn't mean we should automatically expand the initial scope of our functionalities. My intuition tells me that we should continue delivering value in small increments, validate quickly, and decide based on real feedback rather than simply on the speed at which we can now execute tasks.

But there's an interesting nuance I've started to consider: in low-uncertainty contexts, where both the value and the implementation are clear and the team is very confident, it might make sense to advance a bit further before validating. Even so, my sense is that maintaining the discipline to avoid speculative design is fundamental, because although AI makes it easy, speculative design can jeopardize the simplicity and future flexibility of the system.

The cognitive crisis we don't see coming

Chart: While development speed with AI grows exponentially, our human cognitive capacity remains constant, creating a "danger zone" where we can create complexity faster than we can manage it.

Here I do have a conviction that becomes clearer every day: we should now be much more radical when it comes to deleting and eliminating code and functionalities that aren't generating the expected impact.

What this visualization shows me is something I feel viscerally: we have to be relentless to prevent complexity from devouring us, because no matter how much AI we have, human cognitive capacity hasn't changed, neither ours for managing technical complexity nor our users' for coping with a growing number of applications and functionalities.

We're at that critical point where the blue line (AI speed) crosses the red line (our capacity), and my intuition tells me that either we develop radical disciplines now, or we enter that red zone where we create more complexity than we can handle.

The paradox of amplified Lean

But here's the crux of the matter, and I think this table visualizes it perfectly:

Table: AI eliminates the natural constraints that kept us disciplined (at least some of us), creating the paradox that we need to artificially recreate those constraints through radical discipline.

This visualization seems to capture something fundamental that I'm observing: AI eliminates the natural constraints that kept us applying Lean principles. Before, the high cost of implementation naturally forced us to work in small batches. Now we have to recreate that discipline artificially.

For example, look at the "Small Batches" row: traditionally, development speed was the natural constraint that forced us to validate early. Now, with AI, that brake disappears and we risk unconscious scope growth. The countermeasure isn't technical, it's cultural: explicitly redefining what "small" means in terms of cognitive load, not time.

The same happens with YAGNI: before, the high cost of implementation was a natural barrier against speculative design. Now AI "suggests improvements" and makes overengineering tempting and easy. The answer is to make YAGNI even more explicit.

This is the paradox that fascinates me most: we have to become more disciplined precisely when technology makes it easier for us.

From this general intuition, I've identified several specific patterns that concern me and some opportunities that excite me. These are observations that arise from my daily experimentation, some clearer than others, but all seem relevant enough to share and continue exploring.

About scope and complexity

Change in the "default size" of work

AI facilitates the immediate development of functionalities or refactors, which can unconsciously lead us to increase their size. The risk I perceive is losing the discipline of small batch size crucial for early validation.

Ongoing exploration: My intuition suggests explicitly redefining what "small" means in an AI context, focused on cognitive size and not just implementation time. One way to achieve this is to rely on practices like BDD/ATDD/TDD to limit each cycle to a single test or externally verifiable behavior.
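
As a purely illustrative sketch of what bounding one cycle to a single externally verifiable behavior might look like (the discount rule, function, and file name are invented for the example, not taken from any real project):

```python
# one_cycle.py -- hypothetical illustration of keeping an AI-assisted cycle
# "small": one externally verifiable behavior, expressed as a test, plus the
# minimal implementation that makes it pass. Run with: pytest one_cycle.py
import pytest


def apply_discount(order_total: float) -> float:
    """Minimal implementation: exactly what this cycle's tests demand, no more."""
    if order_total > 100.0:
        return order_total * 0.9
    return order_total


def test_orders_over_100_get_a_10_percent_discount():
    assert apply_discount(order_total=120.0) == pytest.approx(108.0)


def test_orders_at_or_below_100_are_not_discounted():
    assert apply_discount(order_total=100.0) == pytest.approx(100.0)
```

The cycle ends when these tests pass; anything the AI proposes beyond this behavior waits for the next cycle.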

Amplified speculative design

On several occasions I've had to undo work done by AI because it tries to do more than necessary. I've observed that AI lacks sensitivity to object-oriented design and has no awareness of the complexity it generates, producing it very quickly until it reaches a point it can't escape from and enters a loop, fixing one thing and breaking another.

Reflection: This suggests reinforcing deliberate practices like TDD, walking skeletons, or strict feature toggles.

New type of "overengineering"

My initial experience suggests that the ease AI offers can lead to adding unnecessary functionalities. It's not the classic overengineering of the architect who designs a cathedral when you need a cabin. It's more subtle: it's adding "just one more feature" because it's easy, it's creating "just one additional abstraction" because AI can generate it quickly.

Key feeling: Reinforcing the YAGNI principle even more explicitly seems necessary.

About workflow and validations

Differentiating visible work vs. released work

My experience indicates that rapid development shouldn't confuse "ready to deploy" with "ready to release." My feeling is that keeping the separation between deployment and release clear remains fundamental.

I've also built, several times, small functionalities that then went unused. Although, to be honest, since I have deeply internalized eliminating waste and baseline cost, I simply deleted the code afterwards.

Opportunity I see: AI can accelerate development while we validate with controlled tests like A/B testing.
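
A minimal sketch of keeping deployment and release separate, assuming a simple environment-variable flag rather than any particular flag service (all names here are hypothetical):

```python
# feature_flags.py -- minimal, hypothetical sketch of separating deployment
# from release: the new code ships dark and is "released" by flipping a flag.
import os


def is_enabled(flag_name: str, default: bool = False) -> bool:
    """Read a flag from the environment; a real team would likely use a flag service."""
    raw = os.getenv(f"FEATURE_{flag_name.upper()}", str(default))
    return raw.strip().lower() in {"1", "true", "yes", "on"}


def legacy_checkout(cart: list[float]) -> float:
    return sum(cart)


def new_checkout(cart: list[float]) -> float:
    # Hypothetical new behavior still under validation (e.g. via an A/B test).
    return round(sum(cart) * 0.95, 2)


def checkout(cart: list[float]) -> float:
    # The deployed code chooses the path at runtime; releasing is just the flag flip.
    if is_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)


if __name__ == "__main__":
    print(checkout([10.0, 25.5]))  # legacy path unless FEATURE_NEW_CHECKOUT is set
```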

More work in progress, but with limits

Although AI can allow more parallel work, my intuition tells me this can fragment the team's attention and complicate integration. It's tempting to have three or four features "in development" simultaneously because AI makes them progress quickly.

My current preference: Use AI to reduce cycle time per story, prioritizing fast feedback, instead of parallelizing more work.

Change in the type of mistakes we make

My observations suggest that with AI, errors can propagate quickly, generating unnecessary complexity or superficial decisions. A superficial decision or a misunderstanding of the problem can materialize into functional code before I've had time to reflect on whether it's the right direction.

Exploration: My intuition points toward reinforcing cultural and technical guardrails (tests, decision review, minimum viable solution principle).

About culture and learning

Impact on culture and learning

I feel there's a risk of over-relying on AI, which could reduce collective reflection. Human cognitive capacity hasn't changed, and we're still better at focusing on few things at a time.

Working intuition: AI-assisted pair programming, ownership rotations, and explicit reviews of product decisions could counteract this effect.

Ideas I'm exploring to manage these risks

After identifying these patterns, the natural question is: what can we do about it? The following are ideas I'm exploring, some I've already tried with mixed results, others are working hypotheses I'd like to test. To be honest, we're in a very embryonic phase of understanding all this.

Discipline in Radical Elimination: My intuition suggests introducing periodic "Deletion Reviews" to actively eliminate code without real impact. Specific sessions where the main objective is to identify and delete what isn't generating value.

"Sunset by Default" for experiments The feeling is that we might need an explicit automatic expiration policy for unvalidated experiments. If they don't demonstrate value in X time, they're automatically eliminated, no exceptions.

More rigorous Impact Tracking: My experience leads me to think about defining explicit impact criteria before writing code and ruthlessly eliminating what doesn't meet expectations in the established time.

Fostering a "Disposable Software" Mentality: My feeling is that explicitly labeling functionalities as "disposable" from the start could psychologically facilitate elimination if they don't meet expectations.

Continuous reduction of "AI-generated Legacy": I feel that regular sessions to review automatically generated code and eliminate unnecessary complexities that AI introduced without us noticing could be valuable.

Radically Reinforcing the "YAGNI" Principle: My intuition tells me we should explicitly integrate critical questions in reviews to avoid speculative design: "Do we really need this now? What evidence do we have that it will be useful?"

Greater rigor in AI-Assisted Pair Programming: My initial experience suggests promoting "hybrid Pair Programming" to ensure sufficient reflection and structural quality. Never let AI make architectural decisions alone.

A fascinating opportunity: Cross Cutting Concerns and reinforced YAGNI

Beyond managing risks, I've started to notice something promising: AI also seems to open new possibilities for architectural and functional decisions that traditionally had to be anticipated from the beginning.

I'm referring specifically to elements like:

  • Internationalization (i18n): Do we really need to design for multiple languages from day one?
  • Observability and monitoring: Can we start simple and add instrumentation later?
  • Compliance: Is it possible to build first and adapt regulations later?
  • Horizontal scalability and adaptation to distributed architectures: Can we defer these decisions until we have real evidence of need?

My feeling is that these decisions can be deliberately postponed and introduced later thanks to the automatic refactoring capabilities that AI seems to provide. This could further strengthen our ability to apply YAGNI and defer commitment.
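
As one hedged illustration of what "postpone and introduce later" could look like, a cross-cutting concern like i18n can be hidden behind a thin seam so it can be swapped in later without touching call sites (the `t()` helper and the messages are hypothetical, not a recommendation of any specific library):

```python
# text.py -- hypothetical sketch of deferring i18n behind a thin seam:
# today it returns the string untouched; later it can delegate to gettext
# or any i18n library without changing the call sites.
def t(message: str, **kwargs: object) -> str:
    """Translation seam: a no-op for now, swappable when i18n is really needed."""
    return message.format(**kwargs)


def greeting(user_name: str) -> str:
    return t("Welcome back, {name}!", name=user_name)


if __name__ == "__main__":
    print(greeting("Eva"))  # "Welcome back, Eva!" until a real locale layer arrives
```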

The guardrails I believe are necessary

For this to work, I feel we need to maintain certain technical guardrails:

  • Clear separation of responsibilities: So that later changes don't break everything
  • Solid automated tests: To refactor with confidence
  • Explicit documentation of deferred decisions: So we don't forget what we deferred
  • Use of specialized AI for architectural spikes: To explore options when the time comes

But I insist: these are just intuitions I'd like to validate collectively.

Working hypotheses I'd love to test

After these months of experimentation, these are the hypotheses that have emerged and that I'd love to discuss and test collectively:

1. Speed ≠ Scope

Hypothesis: We should use AI's speed to validate faster, not to build bigger.

2. Radical YAGNI

Hypothesis: If YAGNI was important before, now it could be critical. Ease of implementation shouldn't justify additional complexity.

3. Elimination as a central discipline

Hypothesis: Treat code elimination as a first-class development practice, not as a maintenance activity.

4. Hybrid Pair Programming

Hypothesis: Combining AI's speed with human reflection could be key. Never let AI make architectural decisions alone.

5. Reinforced deployment/release separation

Hypothesis: Keep this separation clearer than ever. Ease of implementation could create mirages of "finished product."

6. Deferred cross-cutting concerns

Hypothesis: We can postpone more architectural decisions than before, leveraging AI's refactoring capabilities.

An honest invitation to collective learning

Ultimately, these are initial ideas and reflections, open to discussion, experimentation, and learning. My intuition tells me that artificial intelligence is radically changing the way we develop products and software, enhancing our capabilities, but suggesting the need for even greater discipline in validation, elimination, and radical code simplification.

My strongest hypothesis is this: AI amplifies both our good and bad practices. If we have the discipline to maintain small batches, validate quickly, and eliminate waste, AI could make us extraordinarily effective. If we don't have it, it could help us create disasters faster than ever.

But this is just a hunch that needs validation.

What experiences have you had? Have you noticed these same patterns, or completely different ones? What practices are you trying in your teams? What feelings does integrating AI into your Lean processes generate for you?

We're in the early stages of understanding all this. We need perspectives from the entire community to navigate this change that I sense could be paradigmatic, but that I still don't fully understand.

Let's continue the conversation. The only way forward is exploring together.

Do these reflections resonate with you? I'd love to hear about your experience and continue learning together in this fascinating and still unexplored territory.

Sunday, October 05, 2025

Lean + XP + Product Thinking – Three Pillars for Sustainable Software Development

When we talk about developing software sustainably, we're not just talking about taking care of the code or going at a reasonable pace. We're referring to the ability to build useful products, with technical quality, in a continuous flow, without burning out the team and without making the system collapse with every change. It's a difficult balance. However, in recent years I've been seeing how certain approaches have helped us time and again to maintain it.

These aren't new or particularly exotic ideas. But when combined well, they can make all the difference. I'm referring to three concrete pillars: Extreme Programming (XP) practices, Lean thinking applied to software development, and a product mindset that drives us to build with purpose and understand the "why" behind every decision.

Curiously, although sometimes presented as different approaches, XP and Lean Software Development have very similar roots and objectives. In fact, many of Lean's principles—such as eliminating waste, optimizing flow, or fostering continuous learning—are deeply present in XP's way of working. This is no coincidence: Kent Beck, creator of XP, was one of the first to apply Lean thinking to software development, even before it became popular under that name. As he himself wrote:

"If you eliminate enough waste, soon you go faster than the people who are just trying to go fast." — Kent Beck, Extreme Programming Explained (2nd ed.), Chapter 19: Toyota Production System

This quote from Kent Beck encapsulates the essence of efficiency through the elimination of the superfluous.

I don't intend to say this is the only valid way to work, nor that every team has to function this way. But I do want to share that, in my experience, when these three pillars are present and balanced, it's much easier to maintain a sustainable pace, adapt to change, and create something that truly adds value. And when one is missing, it usually shows.

This article isn't a recipe, but rather a reflection on what we've been learning as teams while building real products, with long lifecycles, under business pressure and with the need to maintain technical control—a concrete way that has worked for us to do "the right thing, the right way… and without waste."

Doing the right thing, the right way… and without waste

There's a phrase I really like that well summarizes the type of balance we seek: doing the right thing, the right way. This phrase has been attributed to Kent Beck and has also been used by Martin Fowler in some contexts. In our experience, this phrase falls short if we don't add a third dimension: doing it without waste, efficiently and smoothly. Because you can be doing the right thing, doing it well, and still doing it at a cost or speed that makes it unsustainable.



Over the years, we've seen how working this way—doing the right thing, the right way and without waste—requires three pillars:

  • Doing the right thing implies understanding what problem needs to be solved, for whom, and why. And this cannot be delegated outside the technical team. It requires that those who design and develop software also think about product, impact, and business. This is what in many contexts has been called Product Mindset: seeing ourselves as a product team, where each person acts from their discipline, but always with a product perspective.
  • Doing it the right way means building solutions that are maintainable, testable, that give us confidence to evolve without fear and that do so at a sustainable pace, respecting people. This is where Extreme Programming practices come into full play.
  • And doing it without waste leads us to optimize workflow, eliminate everything that doesn't add value, postpone decisions that aren't urgent, and reduce the baseline cost of the system. Again, much of Lean thinking helps us here.

These three dimensions aren't independent. They reinforce each other. When one fails, the others usually suffer. And when we manage to have all three present, even at a basic level, that's when the team starts to function smoothly and with real impact.

The three pillars

Over time, we've been seeing that when a team has these three pillars present—XP, Lean Thinking, and Product Engineering—and keeps them balanced, the result is a working system that not only functions, but endures. It endures the passage of time, strategy changes, pressure peaks, and difficult decisions.

1. XP: evolving without breaking

Extreme Programming practices are what allow us to build software that can be changed. Automated tests, continuous integration, TDD, simple design, frequent refactoring… all of this serves a very simple idea: if we want to evolve, we need very short feedback cycles that allow us to gain confidence quickly.

With XP, quality isn't a separate goal. It's the foundation upon which everything else rests. Being able to deploy every day, run experiments, try new things, reduce the cost of making mistakes… all of that depends on the system not falling apart every time we touch something.

"The whole organization is a quality organization." — Kent Beck, Extreme Programming Explained (2nd ed.), Chapter 19: Toyota Production System

I remember, at Alea, changing the core of a product (fiber router provisioning system) in less than a week, going from working synchronously to asynchronously. We relied on the main tests with business logic and gradually changed all the component's entry points, test by test. Or at The Motion, where we changed in parallel the entire architecture of the component that calculated the state and result of the video batches we generated, so it could scale to what the business needed.

Making these kinds of changes in a system that hadn't used good modern engineering practices (XP/CD) would have been a nightmare, or would have been ruled out entirely, opting instead for patch upon patch until having to declare technical bankruptcy and rebuild the system from scratch.

However, for us, thanks to XP, it was simply normal work: achieving scalability improvements or adapting a component to manufacturer changes. Nothing exceptional.

None of this would be possible without teams that can maintain a constant pace over time, because XP doesn't just seek to build flexible and robust systems, but also to care for the people who develop them.

XP not only drives the product's technical sustainability, but also a sustainable work pace, which includes productive slack to be creative, learn, and innovate. It avoids "death marches" and heroic efforts that exhaust and reduce quality. Kent Beck's 40-hour work week rule reflects a key idea: quality isn't sustained with exhausted teams; excessive hours reduce productivity and increase errors.

2. Lean Thinking: focus on value and efficiency

Lean thinking gives us tools to prioritize, simplify, and eliminate the unnecessary. It reminds us that doing more isn't always better, and that every line of code we write has a maintenance cost. Often, the most valuable thing we can do is build nothing at all.

We apply principles like eliminating waste, postponing decisions until the last responsible moment (defer commitment), measuring flow instead of utilization, or systematically applying YAGNI. This has allowed us to avoid premature complexities and reduce unnecessary work.

In all the teams I've worked with, we've simplified processes: eliminating ceremonies, working in small and solid steps, dispensing with estimates and orienting ourselves to continuous flow. Likewise, we've reused "boring" technology before introducing new tools, and always sought to minimize the baseline cost of each solution, eliminating unused functionalities when possible.

I remember, at Alea, that during the first months of the fiber router provisioning system we stored everything in a couple of text files, without a database. This allowed us to launch quickly and migrate to something more complex only when necessary. Or at Clarity AI, where our operations bot avoided maintaining state by leveraging what the systems it operates (like AWS) already store and dispensed with its own authentication and authorization system, using what Slack, its main interface, already offers.
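
As a purely hypothetical sketch of that kind of deliberately boring persistence (this is not the actual Alea code; the file name and record shape are invented), a single append-only text file can stand in for a database until real needs prove otherwise:

```python
# provision_store.py -- hypothetical sketch of "a couple of text files instead
# of a database": good enough to launch, and trivial to replace once real
# scaling or querying needs appear.
import json
from pathlib import Path

STORE = Path("provisioned_routers.jsonl")  # append-only log of provisioning records


def save_record(record: dict) -> None:
    # One JSON object per line keeps writes simple and the file diff-able.
    with STORE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def load_records() -> list[dict]:
    if not STORE.exists():
        return []
    with STORE.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


if __name__ == "__main__":
    save_record({"router_id": "rt-0001", "status": "provisioned"})
    print(load_records())
```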

These approaches have helped us focus on the essential, reduce costs, and maintain the flexibility to adapt when needs really require it.

3. Product Mindset: understanding the problem, not just building the solution

And finally, the pillar that's most often forgotten or mentally outsourced: understanding the problem.
As a product team, we can't limit ourselves to executing tasks from our disciplines; we need to get involved in the impact of what we build, in the user experience, and in the why of each decision.

When the team assumes this mindset, the way of working changes completely. The dividing line between "business" and "technology" disappears, and we start thinking as a whole. It doesn't mean everyone does everything, but we do share responsibility for the final result.

In practice, this implies prioritizing problems before solutions, discarding functionalities that don't add value even if they're already planned, and keeping technical options open until we have real data and feedback. Work is organized in small, functional vertical increments, delivering improvements almost daily to validate hypotheses with users and avoid large deliveries full of uncertainty. Thanks to this, we've been able to adapt critical processes to changes in requirements or context in just a few hours, without compromising stability or user experience.

Not all pillars appear at once

One of the things I've learned over time is that teams don't start from balance. Sometimes you inherit a team with a very solid technical level, but with no connection to the product. Other times you arrive at a team that has good judgment about what to build, but lives in a trench of impossible-to-maintain code. Or the team is so overwhelmed by processes and dependencies that it can't even get to production smoothly.

The first thing, in those cases, isn't to introduce a methodology or specific practice. It's to understand. See which of the pillars is weakest and work on improving it until, at least, it allows you to move forward. If the team can't deploy without suffering, it matters little that they perfectly understand the product. If the team builds fast but what they make is used by no one, the problem is elsewhere.

Our approach has always been to seek a certain baseline balance, even at a very initial level, and from there improve on all three pillars at once. In small steps. Without major revolutions.

The goal isn't to achieve perfection in any of the three, but to prevent any one from failing so badly that the team gets blocked or frustrated. When we manage to have all three reasonably present, improvement feeds back on itself. Increasing quality allows testing more things. Better understanding the product allows reducing unnecessary code. Improving flow means we can learn faster.

When a pillar is missing…

Over time, we've also seen the opposite: what happens when one of the pillars isn't there. Sometimes it seems the team is functioning, but there's something that doesn't quite fit, and eventually the bill always comes due.

  • Teams without autonomy become mere executors, without impact or motivation.
  • Teams without technical practices end up trapped in their own complexity, unable to evolve without breaking things.
  • Teams without focus on value are capable of building fast… fast garbage.

And many times, the problem isn't technical but structural. As Kent Beck aptly points out:

"The problem for software development is that Taylorism implies a social structure of work... and it is bizarrely unsuited to software development." — Kent Beck, Extreme Programming Explained (2nd ed.), Chapter 18: Taylorism and Software

In some contexts you can work to recover balance. But there are also times when the environment itself doesn't allow it. When there's no room for the team to make decisions, not even to improve their own dynamics or tools, the situation becomes very difficult to sustain. In my case, when it hasn't been possible to change that from within, I've preferred to directly change contexts.


Just one way among many

Everything I'm telling here comes from my experience in product companies. Teams that build systems that have to evolve, that have long lives, that are under business pressure and that can't afford to throw everything in the trash every six months.
It's not the only possible context. In environments more oriented to services or consulting, the dynamics can be different. You work with different rhythms, different responsibilities, and different priorities. I don't have direct experience in those contexts, so I won't opine on what would work best there.

I just want to make clear that what I'm proposing is one way, not the way. But I've also seen many others that, without some minimum pillars of technical discipline, focus on real value, and a constant search for efficiency, simply don't work in the medium or long term in product environments that need to evolve. My experience tells me that, while you don't have to follow this to the letter, you also can't expect great results if you dedicate yourself to 'messing up' the code, building without understanding the problem, or generating waste everywhere.

This combination, on the other hand, is the one that has most often withstood the passage of time, changes in direction, pressure, and uncertainty. And it's the one that has made many teams not only function well, but enjoy what they do.

Final reflection

Building sustainable software isn't just a technical matter. It's a balance between doing the right thing, doing it well, and doing it without waste. And for that, we need more than practices or processes. We need a way of working that allows us to think, decide, and build with purpose.

In our case, that has meant relying on three legs: XP, Lean, and Product Engineering. We haven't always had all three at once. Sometimes we've had to strengthen one to be able to advance with the others. But when they're present, when they reinforce each other, the result is a team that can deliver value continuously, adapt, and grow without burning out.

I hope this article helps you reflect on how you work, which legs you have strongest, and which ones you could start to balance.