Saturday, October 11, 2025

Good talks/podcasts (Oct I) / AI & AI-Augmented Coding Edition!

These are the best podcasts/talks I've seen/listened to recently:
Reminder: All of these talks are interesting, even just listening to them.

You can now explore all recommended talks and podcasts interactively on our new site. The site allows you to:
  • 🏷️ Browse talks by topic
  • 👤 Filter by speaker
  • 🎤 Search by conference
  • 📅 Navigate by year
Feedback Welcome!
Your feedback and suggestions are highly appreciated to help improve the site and content. Feel free to contribute or share your thoughts!

Friday, October 10, 2025

AI and Lean Software Development: Reflections from Experimentation

Exploring how artificial intelligence might be changing the rules of the game in software development - preliminary insights from the trenches

An exploration into uncharted territory

I want to be transparent from the start: what I'm about to share are not definitive conclusions or proven principles. These are open reflections that emerge from a few months of intense personal experimentation with AI applied to software development, exploring its possibilities and trying to understand how this affects the Lean practices we typically use.

I'm not speaking as someone who already has the answers, but as someone exploring fascinating questions, suspecting we're facing a paradigm shift we're only beginning to understand, and hoping to keep experimenting with these ideas and discussing them with others interested in this topic.

The fundamental paradox: speed versus validation

A central idea I'm observing is that, although artificial intelligence allows us to work faster, this doesn't mean we should automatically expand the initial scope of our functionalities. My intuition tells me that we should continue delivering value in small increments, validate quickly, and decide based on real feedback rather than simply on the speed at which we can now execute tasks.

But there's an interesting nuance I've started to consider: in low-uncertainty contexts, where both the value and the implementation are clear and the team is very confident, it might make sense to advance a bit further before validating. Even so, my sense is that maintaining the discipline to avoid falling into speculative design is fundamental: although AI makes it easy, it can jeopardize the simplicity and future flexibility of the system.

The cognitive crisis we don't see coming

Chart: While development speed with AI grows exponentially, our human cognitive capacity remains constant, creating a "danger zone" where we can create complexity faster than we can manage it.

Here I do have a conviction that becomes clearer every day: we should now be much more radical when it comes to deleting and eliminating code and functionalities that aren't generating the expected impact.

What this visualization shows me is something I feel viscerally: we have to be relentless to prevent complexity from devouring us. No matter how much AI we have, human cognitive capacity hasn't changed: neither ours for managing technical complexity, nor users' for coping with a growing number of applications and functionalities.

We're at that critical point where the blue line (AI speed) crosses the red line (our capacity), and my intuition tells me that either we develop radical disciplines now, or we enter that red zone where we create more complexity than we can handle.

The paradox of amplified Lean

But here's the crux of the matter, and I think this table visualizes it perfectly:

Table: AI eliminates the natural constraints that kept us disciplined (at least some of us), creating the paradox that we need to artificially recreate those constraints through radical discipline.

This visualization seems to capture something fundamental that I'm observing: AI eliminates the natural constraints that kept us applying Lean principles. Before, the high cost of implementation naturally forced us to work in small batches. Now we have to recreate that discipline artificially.

For example, look at the "Small Batches" row: traditionally, development speed was the natural constraint that forced us to validate early. Now, with AI, that brake disappears and we risk unconscious scope growth. The countermeasure isn't technical, it's cultural: explicitly redefining what "small" means in terms of cognitive load, not time.

The same happens with YAGNI: before, the high cost of implementation was a natural barrier against speculative design. Now AI "suggests improvements" and makes overengineering tempting and easy. The answer is to make YAGNI even more explicit.

This is the paradox that fascinates me most: we have to become more disciplined precisely when technology makes it easier for us.

From this general intuition, I've identified several specific patterns that concern me and some opportunities that excite me. These are observations that arise from my daily experimentation, some clearer than others, but all seem relevant enough to share and continue exploring.

About scope and complexity

Change in the "default size" of work

AI makes it easy to develop functionalities or refactors immediately, which can unconsciously lead us to increase their size. The risk I perceive is losing the small-batch discipline that is crucial for early validation.

Ongoing exploration: My intuition suggests explicitly redefining what "small" means in an AI context, focused on cognitive size rather than just implementation time. One way to achieve this is to rely on practices like BDD/ATDD/TDD to limit each cycle to a single test or externally verifiable behavior, as in the sketch below.
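
To make this concrete, here's a minimal sketch of what "one cycle, one externally verifiable behavior" can look like. The domain (a discount rule) and every name are hypothetical, and the tests are written for pytest:

```python
# A minimal sketch of "one cycle = one externally verifiable behavior".
# The domain (a discount rule) and every name here are hypothetical.

def discounted_price(price: float, is_returning_customer: bool) -> float:
    """Apply a 10% discount for returning customers. New behavior is
    only added when a new failing test demands it (YAGNI)."""
    if is_returning_customer:
        return round(price * 0.90, 2)
    return price

# Each TDD cycle pins down exactly one observable behavior.
def test_returning_customers_get_ten_percent_off():
    assert discounted_price(100.0, is_returning_customer=True) == 90.0

def test_new_customers_pay_full_price():
    assert discounted_price(100.0, is_returning_customer=False) == 100.0
```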

Amplified speculative design

On several occasions I've had to undo work done by AI because it tries to do more than necessary. I've observed that AI lacks sensitivity to object-oriented design and has no awareness of the complexity it generates, creating it very quickly until it reaches a point where it can't escape: it enters a loop, fixing one thing and breaking another.

Reflection: This suggests reinforcing deliberate practices like TDD, walking skeletons, or strict feature toggles.

New type of "overengineering"

My initial experience suggests that the ease AI offers can lead to adding unnecessary functionalities. It's not the classic overengineering of the architect who designs a cathedral when you need a cabin. It's more subtle: it's adding "just one more feature" because it's easy, it's creating "just one additional abstraction" because AI can generate it quickly.

Key feeling: Reinforcing the YAGNI principle even more explicitly seems necessary.

About workflow and validations

Differentiating visible work vs. released work

My experience indicates that rapid development shouldn't confuse "ready to deploy" with "ready to release." My feeling is that keeping the separation between deployment and release clear remains fundamental.

I've also several times developed small functionalities that ended up unused. Although, to be honest, since I've deeply internalized eliminating waste and baseline cost, I simply deleted the code afterwards.

Opportunity I see: AI can accelerate development while we validate with controlled tests like A/B testing.
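
A minimal sketch of what that separation can look like in code, assuming an in-memory configuration in place of whatever toggle service a team actually uses (all names here are hypothetical): the feature ships with every deployment, while releasing it, to nobody, to 10% of users for an A/B test, or to everyone, is just a configuration change.

```python
import hashlib

# Hypothetical toggle configuration: which deployed code paths are
# released, and to what percentage of users. 0 = deployed but dark.
RELEASE_PERCENTAGES = {
    "new_checkout_flow": 10,  # A/B test: released to 10% of users
    "bulk_export": 0,         # deployed, but not released to anyone
}

def is_released(feature: str, user_id: str) -> bool:
    """Deterministically bucket users so each one always sees the
    same variant for the duration of the experiment."""
    percentage = RELEASE_PERCENTAGES.get(feature, 0)
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percentage

# The code ships with every deploy; releasing is a config change.
if is_released("new_checkout_flow", user_id="user-42"):
    ...  # new behavior, under validation
else:
    ...  # current behavior
```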

More work in progress, but with limits

Although AI can allow more parallel work, my intuition tells me this can fragment the team's attention and complicate integration. It's tempting to have three or four features "in development" simultaneously because AI makes them progress quickly.

My current preference: Use AI to reduce cycle time per story, prioritizing fast feedback, instead of parallelizing more work.

Change in the type of mistakes we make

My observations suggest that with AI, errors can propagate quickly, generating unnecessary complexity or superficial decisions. A superficial decision or a misunderstanding of the problem can materialize into functional code before I've had time to reflect on whether it's the right direction.

Exploration: My intuition points toward reinforcing cultural and technical guardrails (tests, decision review, minimum viable solution principle).

About culture and learning

Impact on culture and learning

I feel there's a risk of over-relying on AI, which could reduce collective reflection. Human cognitive capacity hasn't changed, and we're still better off focusing on a few things at a time.

Working intuition: AI-assisted pair programming, ownership rotations, and explicit reviews of product decisions could counteract this effect.

Ideas I'm exploring to manage these risks

After identifying these patterns, the natural question is: what can we do about it? The following are ideas I'm exploring, some I've already tried with mixed results, others are working hypotheses I'd like to test. To be honest, we're in a very embryonic phase of understanding all this.

Discipline in Radical Elimination
My intuition suggests introducing periodic "Deletion Reviews" to actively eliminate code without real impact: specific sessions whose main objective is to identify and delete what isn't generating value.

"Sunset by Default" for experiments The feeling is that we might need an explicit automatic expiration policy for unvalidated experiments. If they don't demonstrate value in X time, they're automatically eliminated, no exceptions.

More rigorous Impact Tracking
My experience leads me to think about defining explicit impact criteria before writing code and ruthlessly eliminating what doesn't meet expectations within the established time.

Fostering a "Disposable Software" Mentality My feeling is that explicitly labeling functionalities as "disposable" from the start could psychologically facilitate elimination if they don't meet expectations.

Continuous reduction of "AI-generated Legacy" I feel that regular sessions to review automatically generated code and eliminate unnecessary complexities that AI introduced without us noticing could be valuable.

Radically Reinforcing the "YAGNI" Principle My intuition tells me we should explicitly integrate critical questions in reviews to avoid speculative design: "Do we really need this now? What evidence do we have that it will be useful?"

Greater rigor in AI-Assisted Pair Programming
My initial experience suggests promoting "hybrid pair programming" to ensure sufficient reflection and structural quality. Never let AI make architectural decisions alone.

A fascinating opportunity: Cross Cutting Concerns and reinforced YAGNI

Beyond managing risks, I've started to notice something promising: AI also seems to open new possibilities for architectural and functional decisions that traditionally had to be anticipated from the beginning.

I'm referring specifically to elements like:

  • Internationalization (i18n): Do we really need to design for multiple languages from day one?
  • Observability and monitoring: Can we start simple and add instrumentation later?
  • Compliance: Is it possible to build first and adapt to regulatory requirements later?
  • Horizontal scalability and adaptation to distributed architectures: Can we defer these decisions until we have real evidence of need?

My feeling is that these decisions can be deliberately postponed and introduced later thanks to the automatic refactoring capabilities that AI seems to provide. This could further strengthen our ability to apply YAGNI and defer commitment.
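
To illustrate with i18n: the deferral can be as cheap as routing user-facing strings through a trivial seam from day one, so that a later (possibly AI-assisted) refactor has a single entry point instead of a hunt for string literals across the codebase. A hypothetical sketch:

```python
# Day one: no translation machinery at all, just a seam.
def t(message: str, **kwargs: object) -> str:
    """All user-facing text goes through here. Today it only formats;
    if i18n is ever actually needed, this single function becomes the
    refactoring point instead of a sweep through the whole codebase."""
    return message.format(**kwargs)

# Call sites look identical before and after i18n is introduced.
print(t("Hello, {name}! You have {count} new messages.", name="Ada", count=3))
```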

The guardrails I believe are necessary

For this to work, I feel we need to maintain certain technical guardrails:

  • Clear separation of responsibilities: So that later changes don't break everything
  • Solid automated tests: To refactor with confidence
  • Explicit documentation of deferred decisions: So we don't forget what we deferred
  • Use of specialized AI for architectural spikes: To explore options when the time comes

But I insist: these are just intuitions I'd like to validate collectively.

Working hypotheses I'd love to test

After these months of experimentation, these are the hypotheses that have emerged and that I'd love to discuss and test collectively:

1. Speed ≠ Scope

Hypothesis: We should use AI's speed to validate faster, not to build bigger.

2. Radical YAGNI

Hypothesis: If YAGNI was important before, now it could be critical. Ease of implementation shouldn't justify additional complexity.

3. Elimination as a central discipline

Hypothesis: Treat code elimination as a first-class development practice, not as a maintenance activity.

4. Hybrid Pair Programming

Hypothesis: Combining AI's speed with human reflection could be key. Never let AI make architectural decisions alone.

5. Reinforced deployment/release separation

Hypothesis: Keep this separation clearer than ever. Ease of implementation could create mirages of "finished product."

6. Deferred cross-cutting concerns

Hypothesis: We can postpone more architectural decisions than before, leveraging AI's refactoring capabilities.

An honest invitation to collective learning

Ultimately, these are initial ideas and reflections, open to discussion, experimentation, and learning. My intuition tells me that artificial intelligence is radically changing the way we develop products and software, enhancing our capabilities while demanding even greater discipline in validation, elimination, and radical code simplification.

My strongest hypothesis is this: AI amplifies both our good and bad practices. If we have the discipline to maintain small batches, validate quickly, and eliminate waste, AI could make us extraordinarily effective. If we don't have it, it could help us create disasters faster than ever.

But this is just a hunch that needs validation.

What experiences have you had? Have you noticed these same patterns, or completely different ones? What practices are you trying in your teams? What feelings does integrating AI into your Lean processes generate for you?

We're in the early stages of understanding all this. We need perspectives from the entire community to navigate this change that I sense could be paradigmatic, but that I still don't fully understand.

Let's continue the conversation. The only way forward is exploring together.

Do these reflections resonate with you? I'd love to hear about your experience and to keep learning together in this fascinating, still largely unexplored territory.

Sunday, October 05, 2025

Lean + XP + Product Thinking – Three Pillars for Sustainable Software Development

When we talk about developing software sustainably, we're not just talking about taking care of the code or going at a reasonable pace. We're referring to the ability to build useful products, with technical quality, in a continuous flow, without burning out the team and without making the system collapse with every change. It's a difficult balance. However, in recent years I've been seeing how certain approaches have helped us time and again to maintain it.

These aren't new or particularly exotic ideas. But when combined well, they can make all the difference. I'm referring to three concrete pillars: Extreme Programming (XP) practices, Lean thinking applied to software development, and a product mindset that drives us to build with purpose and understand the "why" behind every decision.

Curiously, although sometimes presented as different approaches, XP and Lean Software Development have very similar roots and objectives. In fact, many of Lean's principles—such as eliminating waste, optimizing flow, or fostering continuous learning—are deeply present in XP's way of working. This is no coincidence: Kent Beck, creator of XP, was one of the first to apply Lean thinking to software development, even before it became popular under that name. As he himself wrote:

"If you eliminate enough waste, soon you go faster than the people who are just trying to go fast." — Kent Beck, Extreme Programming Explained (2nd ed.), Chapter 19: Toyota Production System

This quote from Kent Beck encapsulates the essence of efficiency through the elimination of the superfluous.

I don't intend to say this is the only valid way to work, nor that every team has to function this way. But I do want to share that, in my experience, when these three pillars are present and balanced, it's much easier to maintain a sustainable pace, adapt to change, and create something that truly adds value. And when one is missing, it usually shows.

This article isn't a recipe, but rather a reflection on what we've been learning as teams while building real products, with long lifecycles, under business pressure and with the need to maintain technical control—a concrete way that has worked for us to do "the right thing, the right way… and without waste."

Doing the right thing, the right way… and without waste

There's a phrase I really like that well summarizes the type of balance we seek: doing the right thing, the right way. This phrase has been attributed to Kent Beck and has also been used by Martin Fowler in some contexts. In our experience, this phrase falls short if we don't add a third dimension: doing it without waste, efficiently and smoothly. Because you can be doing the right thing, doing it well, and still doing it at a cost or speed that makes it unsustainable.



Over the years, we've seen how working this way—doing the right thing, the right way and without waste—requires three pillars:

  • Doing the right thing implies understanding what problem needs to be solved, for whom, and why. And this cannot be delegated outside the technical team. It requires that those who design and develop software also think about product, impact, and business. This is what in many contexts has been called Product Mindset: seeing ourselves as a product team, where each person acts from their discipline, but always with a product perspective.
  • Doing it the right way means building solutions that are maintainable and testable, that give us the confidence to evolve without fear, and doing so at a sustainable pace that respects people. This is where Extreme Programming practices come fully into play.
  • And doing it without waste leads us to optimize workflow, eliminate everything that doesn't add value, postpone decisions that aren't urgent, and reduce the baseline cost of the system. Again, much of Lean thinking helps us here.

These three dimensions aren't independent. They reinforce each other. When one fails, the others usually suffer. And when we manage to have all three present, even at a basic level, that's when the team starts to function smoothly and with real impact.

The three pillars

Over time, we've been seeing that when a team has these three pillars present—XP, Lean Thinking, and Product Engineering—and keeps them balanced, the result is a working system that not only functions, but endures. It endures the passage of time, strategy changes, pressure peaks, and difficult decisions.

1. XP: evolving without breaking

Extreme Programming practices are what allow us to build software that can be changed. Automated tests, continuous integration, TDD, simple design, frequent refactoring… all of this serves a very simple idea: if we want to evolve, we need very short feedback cycles that allow us to gain confidence quickly.

With XP, quality isn't a separate goal. It's the foundation upon which everything else rests. Being able to deploy every day, run experiments, try new things, reduce the cost of making mistakes… all of that depends on the system not falling apart every time we touch something.

"The whole organization is a quality organization." — Kent Beck, Extreme Programming Explained (2nd ed.), Chapter 19: Toyota Production System

I remember, at Alea, changing the core of a product (fiber router provisioning system) in less than a week, going from working synchronously to asynchronously. We relied on the main tests with business logic and gradually changed all the component's entry points, test by test. Or at The Motion, where we changed in parallel the entire architecture of the component that calculated the state and result of the video batches we generated, so it could scale to what the business needed.

Making these kinds of changes in a system built without good modern engineering practices (XP/CD) would have been a nightmare, or would simply have been ruled out, opting instead for patch upon patch until having to declare technical bankruptcy and rebuild the system from scratch.

However, for us, thanks to XP, it was simply normal work: achieving scalability improvements or adapting a component to manufacturer changes. Nothing exceptional.

None of this would be possible without teams that can maintain a constant pace over time, because XP doesn't just seek to build flexible and robust systems; it also seeks to care for the people who develop them.

XP not only drives the product's technical sustainability, but also a sustainable work pace, which includes productive slack to be creative, learn, and innovate. It avoids "death marches" and heroic efforts that exhaust and reduce quality. Kent Beck's 40-hour work week rule reflects a key idea: quality isn't sustained with exhausted teams; excessive hours reduce productivity and increase errors.

2. Lean Thinking: focus on value and efficiency

Lean thinking gives us tools to prioritize, simplify, and eliminate the unnecessary. It reminds us that doing more isn't always better, and that every line of code we write has a maintenance cost. Often, the most valuable thing we can do is build nothing at all.

We apply principles like eliminating waste, postponing decisions until the last responsible moment (defer commitment), measuring flow instead of utilization, or systematically applying YAGNI. This has allowed us to avoid premature complexities and reduce unnecessary work.

In all the teams I've worked with, we've simplified processes: eliminating ceremonies, working in small and solid steps, dispensing with estimates and orienting ourselves to continuous flow. Likewise, we've reused "boring" technology before introducing new tools, and always sought to minimize the baseline cost of each solution, eliminating unused functionalities when possible.

I remember, at Alea, that during the first months of the fiber router provisioning system we stored everything in a couple of text files, without a database. This allowed us to launch quickly and migrate to something more complex only when necessary. Or at Clarity AI, where our operations bot avoided maintaining state by leveraging what the systems it operates (like AWS) already store and dispensed with its own authentication and authorization system, using what Slack, its main interface, already offers.
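
The reason this worked is that the persistence decision hid behind a narrow interface, so the text files could later be swapped for a database without touching callers. A minimal sketch of the idea (illustrative names only, not the actual Alea code):

```python
import json
from pathlib import Path

class RouterStore:
    """Simplest thing that works: one JSON-lines text file. The
    narrow interface is the commitment, not the file; a database-backed
    implementation can replace this class when the need becomes real."""

    def __init__(self, path: str = "routers.jsonl") -> None:
        self._path = Path(path)

    def save(self, router: dict) -> None:
        with self._path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(router) + "\n")

    def all(self) -> list[dict]:
        if not self._path.exists():
            return []
        with self._path.open(encoding="utf-8") as f:
            return [json.loads(line) for line in f if line.strip()]

# Usage
store = RouterStore()
store.save({"serial": "FBR-0001", "status": "provisioned"})
print(store.all())
```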

These approaches have helped us focus on the essential, reduce costs, and maintain the flexibility to adapt when needs really require it.

3. Product Mindset: understanding the problem, not just building the solution

And finally, the pillar that's most often forgotten or mentally outsourced: understanding the problem.
As a product team, we can't limit ourselves to executing tasks from our disciplines; we need to get involved in the impact of what we build, in the user experience, and in the why of each decision.

When the team assumes this mindset, the way of working changes completely. The dividing line between "business" and "technology" disappears, and we start thinking as a whole. It doesn't mean everyone does everything, but we do share responsibility for the final result.

In practice, this implies prioritizing problems before solutions, discarding functionalities that don't add value even if they're already planned, and keeping technical options open until we have real data and feedback. Work is organized in small, functional vertical increments, delivering improvements almost daily to validate hypotheses with users and avoid large deliveries full of uncertainty. Thanks to this, the team can adapt critical processes to changes in requirements or context in just a few hours, without compromising stability or user experience.

Not all pillars appear at once

One of the things I've learned over time is that teams don't start from balance. Sometimes you inherit a team with a very solid technical level, but with no connection to the product. Other times you arrive at a team that has good judgment about what to build, but lives in a trench of impossible-to-maintain code. Or the team is so overwhelmed by processes and dependencies that it can't even get to production smoothly.

The first thing, in those cases, isn't to introduce a methodology or specific practice. It's to understand. See which of the pillars is weakest and work on improving it until, at least, it allows you to move forward. If the team can't deploy without suffering, it matters little that they perfectly understand the product. If the team builds fast but what they make is used by no one, the problem is elsewhere.

Our approach has always been to seek a certain baseline balance, even at a very initial level, and from there improve on all three pillars at once. In small steps. Without major revolutions.

The goal isn't to achieve perfection in any of the three, but to prevent any one from failing so badly that the team gets blocked or frustrated. When we manage to have all three reasonably present, improvement feeds back on itself. Increasing quality allows testing more things. Better understanding the product allows reducing unnecessary code. Improving flow means we can learn faster.

When a pillar is missing…

Over time, we've also seen the opposite: what happens when one of the pillars isn't there. Sometimes it seems the team is functioning, but there's something that doesn't quite fit, and eventually the bill always comes due.

  • Teams without autonomy become mere executors, without impact or motivation.
  • Teams without technical practices end up trapped in their own complexity, unable to evolve without breaking things.
  • Teams without focus on value are capable of building fast… fast garbage.

And many times, the problem isn't technical but structural. As Kent Beck aptly points out:

"The problem for software development is that Taylorism implies a social structure of work... and it is bizarrely unsuited to software development." — Kent Beck, Extreme Programming Explained (2nd ed.), Chapter 18: Taylorism and Software

In some contexts you can work to recover balance. But there are also times when the environment itself doesn't allow it. When there's no room for the team to make decisions, not even to improve their own dynamics or tools, the situation becomes very difficult to sustain. In my case, when it hasn't been possible to change that from within, I've preferred to directly change contexts.


Just one way among many

Everything I'm telling here comes from my experience in product companies. Teams that build systems that have to evolve, that have long lives, that are under business pressure and that can't afford to throw everything in the trash every six months.
It's not the only possible context. In environments more oriented to services or consulting, the dynamics can be different. You work with different rhythms, different responsibilities, and different priorities. I don't have direct experience in those contexts, so I won't opine on what would work best there.

I just want to make clear that what I'm proposing is one way, not the way. But I've also seen many others that, without some minimum pillars of technical discipline, focus on real value, and a constant search for efficiency, simply don't work in the medium or long term in product environments that need to evolve. My experience tells me that, while you don't have to follow this to the letter, you also can't expect great results if you dedicate yourself to 'messing up' the code, building without understanding the problem, or generating waste everywhere.

This combination, on the other hand, is the one that has most often withstood the passage of time, changes in direction, pressure, and uncertainty. And it's the one that has made many teams not only function well, but enjoy what they do.

Final reflection

Building sustainable software isn't just a technical matter. It's a balance between doing the right thing, doing it well, and doing it without waste. And for that, we need more than practices or processes. We need a way of working that allows us to think, decide, and build with purpose.

In our case, that has meant relying on three legs: XP, Lean, and Product Engineering. We haven't always had all three at once. Sometimes we've had to strengthen one to be able to advance with the others. But when they're present, when they reinforce each other, the result is a team that can deliver value continuously, adapt, and grow without burning out.

I hope this article helps you reflect on how you work, which legs you have strongest, and which ones you could start to balance.

Friday, September 19, 2025

Talk: Incentivos perversos, resultados previsibles. Cuando el sistema sabotea a los equipos

Today I had the opportunity to take part in FredCon 25 with the talk "Incentivos perversos, resultados previsibles: Cuando el sistema sabotea a los equipos" ("Perverse incentives, predictable results: When the system sabotages teams"). I want to sincerely thank the organizers for the invitation and for creating a space to discuss a topic I care deeply about.

About the talk

I explore how the continued application of Taylorism to software and product development generates systemic problems such as:

  • The resource-efficiency trap.
  • Disconnection from purpose.
  • The deferred-quality trap.

When the system is poorly designed, predictable but undesirable results appear: chronic bottlenecks, constant rework, irrelevant products, talent drain, and technical debt. This happens because local incentives and individual utilization are rewarded over the global performance of the system.

Key idea: these problems aren't isolated failures of the teams, but inevitable consequences of a poorly designed system. The solution is to change the system itself, promoting collaboration, shared responsibility, global optimization, autonomy with purpose, and continuous learning to achieve sustainable impact and speed.

Slides

The video isn't available yet, but you can already browse the slides. They include plenty of notes with examples and additional explanations that help you follow the thread beyond what's shown on screen.

Original document with notes (Google Slides)

Open the slides in a new tab

Thanks again to the FredCon 25 organizers and to everyone who attended. I hope this material helps open conversations about how to design systems that empower teams instead of sabotaging them.


References for the main concepts

Resource Efficiency vs Flow Efficiency:
Systems Thinking and Quality:
  • The Red Bead Experiment (14m) by W. Edwards Deming — A powerful demonstration that performance and quality depend on the system, not individual effort. A reminder that systemic issues require systemic fixes.

Monday, September 01, 2025

My premises about engineering leadership and people management

As with software development, I find it useful to write down the principles that guide how I think about leadership and people management. These are not universal truths, but they are the foundations I rely on when building and leading engineering teams. By making them explicit, I aim to create clarity, alignment, and a shared understanding of how I see leadership, collaboration, and growth.

  1. People are not resources I don't believe in treating people as interchangeable units or "resources." People are creative, motivated, and responsible when given the right environment. My default stance is trust, not control. This aligns with Theory Y (Douglas McGregor): most people want to do good work if they are respected, trusted, and given meaningful challenges.
  2. Empower teams with ownership Teams work best when they own the whole problem end-to-end, from idea to operation. Ownership gives autonomy and accountability, while purpose provides alignment. An empowered team doesn't just execute tasks but makes decisions and cares about outcomes.
  3. Motivation is intrinsic The best results in knowledge work come from intrinsic motivation, not external rewards or pressure. As Daniel Pink highlights in Drive, autonomy, mastery, and purpose are the real drivers of excellence. My role as a leader is to foster these conditions, not to rely on carrots and sticks. 
  4. Learning never stops Engineering and product work are constant processes of discovery. We optimize for fast feedback, safe experiments, and collective learning. Mistakes are not failures to hide but opportunities to grow. A learning culture is the foundation of adaptability. 
  5. Enable experiments, eliminate waste, make quality inevitable My role as a leader is to create conditions for safe experimentation while relentlessly removing what doesn't add value. But more fundamentally, I must change the system so that working with quality becomes the most natural, simplest, and fastest path. This means building strong foundations, aligning incentives, and creating space for learning where teams ask "What's the worst that could happen?" with confidence. When quality is inevitable rather than heroic, we create virtuous cycles where technical excellence enables bold experiments and meaningful impact.

These are my personal premises about engineering leadership and people management. They are not universal truths, but the foundations I rely on when building and leading teams. I believe that making these principles explicit helps create clarity, alignment, and better collaboration.



Sunday, August 24, 2025

Good talks/podcasts (Aug)

These are the best podcasts/talks I've seen/listened to recently:
  • Red Bead Experiment with Dr. W. Edwards Deming 🔗 talk notes (W. Edwards Deming) [Lean, Quality] [Duration: 00:09] (⭐⭐⭐⭐⭐) The Red Bead Experiment vividly demonstrates that quality and performance are products of the system, not merely individual effort or willingness to do one's best.
  • Vibe Coding Is The WORST IDEA Of 2025 🔗 talk notes (Dave Farley) [AI, Software Design, testing] [Duration: 00:17] This talk critically examines "vibe coding," arguing that effective software engineering requires precise problem definition, structured thinking, and robust automated testing to manage complexity and enable evolvability, rather than relying on vague AI-assisted code generation.
  • Diversity, AI, and Junior Engineers with Meri Williams 🔗 talk notes (Meri Williams) [AI, Diversity, Engineering Culture, leadership] [Duration: 00:52] (⭐⭐⭐⭐⭐) This talk explores how AI is changing the development and growth of engineers, particularly junior co-workers, emphasizing the increased importance of foundational skills, critical thinking, and the shift from writing to reviewing code in the age of AI.
  • Continuous Deployment and Pair Programming for Lean Software Delivery Even without Jira 🔗 talk notes (Asgaut Mjølne Söderbom, Ola Hast) [Continuous Delivery, Lean Software Development, Technical Practices] [Duration: 00:54] (⭐⭐⭐⭐⭐) This talk details how a tech company achieved 5-minute code-to-production and high quality in banking software through practices like pair programming, TDD, continuous deployment, and fostering a lean, people-centric culture.
  • Lean Product Development: Resource management vs. Flow efficiency 🔗 talk notes (Johanna Rothman) [Flow, Lean, Lean Product Management, Teams] [Duration: 00:24] (⭐⭐⭐⭐⭐) A compelling talk on Lean Product Development that contrasts resource efficiency with flow efficiency, demonstrating how optimizing for flow through cross-functional teams enhances project delivery and portfolio management.
  • From Noob to Automated Evals In A Week (as a PM) w/Teresa Torres 🔗 talk notes (Teresa Torres) [AI, Feedback cycles, Generative AI, Product Discovery] [Duration: 01:10] (⭐⭐⭐⭐⭐) Product discovery expert Teresa Torres recounts her journey from a self-described "noob" in AI to implementing automated evaluations for an AI-powered interview coach within a week, detailing her iterative process of building and refining the tool with significant LLM assistance.
Reminder: All of these talks are interesting, even just listening to them.

You can now explore all recommended talks and podcasts interactively on our new site. The site allows you to:
  • 🏷️ Browse talks by topic
  • 👤 Filter by speaker
  • 🎤 Search by conference
  • 📅 Navigate by year
Feedback Welcome!
Your feedback and suggestions are highly appreciated to help improve the site and content. Feel free to contribute or share your thoughts!


Monday, August 11, 2025

Optimize the Whole: From Lean principle to real-world practice

Introduction

"Optimize the Whole" is one of the fundamental principles of Lean and Lean Software Development. It means setting aside local improvements to look at the entire system, from the idea to the user—people, processes, and technology—and aligning everything toward a common goal: delivering value to the user quickly and sustainably.

In my experience, many teams believe they are being efficient because no one ever stops working, but in the end, the user waits weeks (or months) to see real value. I’ve learned that true improvement comes from a systemic vision, working together so that the flow of value moves without friction from end to end.

Fragmentation: The Legacy of Taylorism

The paradigm of software development has often—consciously or unconsciously—inherited principles rooted in Taylorism and Fordism, conceiving the creation of digital products as a fragmented "assembly line." Under this view, each phase (analysis, design, development, QA, operations) becomes a functional silo, where teams specialize in specific tasks and focus on optimizing their local efficiency.

However, what in physical goods manufacturing could generate economies of scale for mass production has over time also shown its limits by sacrificing flexibility and the ability to quickly adapt to changes in demand or user needs.

In software, this translates into chronic bottlenecks, costly handoffs, and a fundamental disconnect between those who define needs and those who implement them. This fragmentation breaks the flow of value, fosters the accumulation of "inventory" in the form of work in progress, and hinders rapid adaptation—resulting in features that don’t solve real problems or that take months to reach the user, undermining the promise of agility and continuous value.


Table: Comparison of Taylorism, Fordism, and Lean Thinking


What Happens When We Don’t Optimize the Whole?

Over the years working with different teams, I’ve observed that when we don’t optimize the whole, we fall into the trap of optimizing locally—almost always with good intentions but with unintended consequences. Teams may become very “efficient” in their own internal metrics, but if they are not aligned with user value, the overall flow slows down. Bottlenecks appear, handoffs multiply, and work gets stuck somewhere waiting.

I’ve seen this especially when engineering or development is seen as a feature factory that “executes” or “implements” what others decide. The team just implements, without understanding the problem, the priority, or the real impact on the user—and without contributing their technical knowledge to decisions. The result: solutions disconnected from real needs, slow feedback, and features that don’t solve the problem.

In my experience, the root cause is usually a functional and fragmented organizational structure inherited from “assembly line” thinking. But I’ve learned that software doesn’t behave like a linear factory. Software needs product teams with end-to-end responsibility, without silos (backend, frontend, QA, ops, design, etc.), and with real autonomy to make decisions and operate what they build.

I’ve found that this is not just a matter of motivation: it’s the only way to optimize the total flow and deliver real value to the user quickly and sustainably.


Bottlenecks and the Theory of Constraints

The Theory of Constraints (TOC) reminds us that in any system there is always at least one constraint that determines the maximum delivery capacity. Identifying and managing that bottleneck is essential to improving the overall flow.




For example, at Clarity AI, in the beginning, features were released to production but could remain behind a feature toggle for weeks, waiting for product to decide when to expose them to users. Even though they were technically ready, value wasn't flowing.

Another example: when workflows were separated by functions (frontend, backend, data engineering), any change could take weeks because each group optimized its own flow or backlog instead of thinking about the overall flow from the user’s perspective.

(Fortunately, these specific problems at Clarity AI were solved long ago, but they serve as an example of what can happen when we don’t optimize the whole.)

In my experience working with different teams, I've learned that speeding everything up indiscriminately only makes the backlog pile up and creates frustration. A necessary condition to identify real constraints is that all work is visible—not only development tasks, but also testing, operations, support, automation, documentation, analysis, coordination, etc. If a significant part of the team's effort is hidden (because it's not recorded, visualized, or considered "technical work"), it's very easy for the real constraints to go unnoticed. As Dominica DeGrandis points out in Making Work Visible, what you can't see, you can't manage or improve. Making all work visible is a key step for informed decision-making, reducing work in progress, and better focusing continuous improvement efforts.

The key steps, illustrated by the small sketch after this list, are:
  • Identify the constraint. Make it visible and prioritize it.
  • Exploit the constraint. Keep it focused, avoid distractions, and ensure it’s always working on the highest-value items.
  • Subordinate the rest of the system. Adjust rhythms and priorities so as not to overload the constraint.
  • Elevate the constraint. Improve its capacity through automation, training, or process redesign.
  • Review and repeat. There will always be a new constraint after each improvement.
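
To see why the constraint, and only the constraint, sets the pace, here is a tiny sketch with made-up stage capacities (the stage names and numbers are hypothetical):

```python
# Hypothetical weekly capacities (items/week) of each stage in a pipeline.
stages = {"analysis": 10, "development": 8, "qa": 3, "release": 6}

def throughput(stages: dict) -> int:
    # The system can never deliver faster than its slowest stage.
    return min(stages.values())

def constraint(stages: dict) -> str:
    return min(stages, key=stages.get)

print(constraint(stages), throughput(stages))  # qa: 3 items/week

stages["development"] = 16   # speed up a non-constraint stage...
print(throughput(stages))    # ...still 3: work only queues up faster

stages["qa"] = 6             # elevate the constraint instead
print(throughput(stages))    # now the whole system delivers 6
```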

Over the years, I’ve noticed that the more separate stages with queues there are between the idea and the value delivered to the user, the greater the chances of bottlenecks forming. Furthermore, if each stage belongs to a different group (organizationally speaking) that may even have its own agenda, there is likely little interest in optimizing value for the user. In these cases, each group may focus solely on improving its part of the process—or on avoiding being perceived as the bottleneck.



End-to-End Teams and Real Optimization

In every team I’ve built, I’ve insisted that they be product teams with end-to-end responsibility, without silos for QA, operations, security, or design. The reason is simple: if the team doesn’t control or understand the entire flow, it can’t optimize the whole, and it also risks not taking full responsibility for the complete product.

When the same team is in charge of conceiving, building, deploying, and operating, it eliminates the waste that arises in each handoff and accelerates learning. Every member understands the impact of their work on the end user and the value actually being delivered. End-to-end responsibility not only improves technical quality but also strengthens the focus on user value, avoiding unnecessary investments in local optimizations that don’t contribute to the overall goal.

In this work environment, more generalist profiles are especially valued: the so-called "T-shaped" (deep in one area, broad in others) or "M-shaped" (several deep specializations plus wide versatility) professionals. Able to contribute in multiple phases of development and holding a more holistic view, they are key in contexts where flow efficiency is a priority.

While pure specialists, also known as "I-shaped" profiles, are still needed in very specific domains, their role changes: they become enablers and educators, helping to scale their knowledge and train the rest of the team in various facets of the product.

In addition, keeping options open, working in short cycles, and making small deliveries allows for better adaptation to what the business and data reveal. Simplicity and vertical slicing reduce risk and facilitate rapid learning, freeing the team from the mental load of large, fixed plans and fostering autonomy.

This way of working is unfeasible when the product must pass through different silos before reaching the end user.

End-to-End Teams in Practice: Real Experiences

When I formed the team at Alea Soluciones, from the very beginning I pushed for us to take on all possible tasks and functions. I remember both the CEO and the CTO offering me the option to take on fewer responsibilities “to have less pressure and less work.” But I refused that proposal: I knew that if we only kept part of the scope, we would lose the global vision and the ability to optimize the entire system.

By taking on all areas—development, support, product ideas, operations—we always maintained a holistic view of the product. This allowed us to identify at any given moment where the real bottleneck was and act where we could have the greatest impact, without depending on other teams or functional silos. This meant we could decide whether, at a given moment, it made more sense to focus on support, to speed up key developments, or to think about new features—always optimizing the total flow of value to the user.

At Nextail and Clarity AI, together with others, I worked to evolve teams toward an end-to-end model, avoiding QA, operations, or product silos. In these cases, we applied ideas from Team Topologies to transform the structure: we moved from a function-based organization (infrastructure/operations, frontend, backend, data engineering, product) to a matrix of autonomous product teams.

The goal was always the same: to shorten the lead time of any change, from the idea to when the user can actually use it. With autonomous, end-to-end responsible teams, we could deliver value faster, learn continuously, and improve the product in very short cycles.

In all these contexts, I’ve found that the end-to-end approach has not only been a technical or organizational improvement—it has been the key to maintaining a user-centered mindset, reducing waste, and optimizing the whole, every day.


How Optimizing the Whole Helps Eliminate Waste

One of the fundamental principles of Lean is eliminating waste. When we focus only on optimizing local parts, it’s very easy to accumulate work that doesn’t add value, create unnecessary features, or generate invisible delays. In contrast, by optimizing the whole and looking at the complete system, we naturally reduce multiple types of waste.

In my work with different teams, I’ve observed that when we don’t think about the global flow and focus on optimizing “my part,” incentives appear to produce more features even if they don’t provide real value to the user. The priority becomes “staying busy” and completing tasks, rather than questioning whether they are necessary.

I’ve seen how, without early feedback and working in long cycles, teams advance without validating and end up building features nobody asked for or that don’t solve any important problem.

Moreover, when decisions are fragmented (product defines large packages, engineering executes without questioning, and QA validates afterward), the vision of impact is lost, and the backlog tends to swell with features that remain blocked behind a feature toggle or are never released at all.

By optimizing the whole and aligning each step with the complete flow of value to the user, each feature is reviewed with key questions:
  • Does it solve a real, high-priority problem?
  • Can we release it in a small version to learn quickly?
  • How will we know if it truly adds value?
In this way, we build only what's necessary, learn early, and avoid turning effort into waste. Looking at the complete system also reduces several classic forms of waste:
  • Avoid unnecessary features (overproduction): By prioritizing real user value and working in small deliveries, hypotheses are validated quickly. This avoids building large features that nobody uses or that the business decides not to launch.
  • Reduce waits and blockages (waiting times): By eliminating silos and working end-to-end, work doesn’t get stuck waiting for another team or function to pick it up. This speeds up flow and eliminates idle time.
  • Less rework and late fixes: Delivering in short cycles and validating early allows problems to be detected quickly and corrected at low cost. Otherwise, small local decisions can lead to large refactorings later.
  • Avoid useless local optimizations (unnecessary movement): Optimizing your own “department” or backlog can create a false sense of efficiency, but doesn’t generate value if it doesn’t move the complete flow forward. Looking at the global system avoids this kind of waste.
  • Reduce hidden inventory: Limiting work in progress and prioritizing constant flow minimizes the inventory of half-built or unreleased features, which consume energy and create confusion.
  • Lower opportunity waste: By having a clear view of the whole and being aligned with the business, we avoid investing in the wrong directions and respond quickly to new opportunities. This reduces the risk of missing the right moment to impact the user.

In my experience, when we optimize the complete system, every decision is made with the flow of value to the end user in mind. This way, every line of code, every validation, and every deployment helps reduce waste and maximize impact.

How to Optimize the Whole: Lessons from Practice

  • End-to-end vision: From the business problem to the running software operated by the team itself. Without fragmenting or handing responsibility over to “others.”
  • Flow over utilization: We stop measuring how much each person works and start measuring how much value flows to the user.
  • Enabling practices: Pair programming, TDD, CI/CD, limiting WIP, and visualizing flow are key tools to keep the system healthy, adaptable, and ready to learn quickly (a minimal WIP-limit sketch follows this list).
  • Small deliveries and immediate feedback: Every delivery is a learning opportunity. Working in vertical slices helps prioritize what truly matters, encourages simplicity, and reduces the fear of making mistakes.
  • Collaboration and psychological safety: Transparency, trust, and shared responsibility. Encouraging questioning, proposing improvements, and experimenting without fear.
  • Conscious empowerment: Teams take on more decisions as they demonstrate capability, always aligned with the business and focused on real impact.
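
Of these practices, limiting WIP is the easiest to make concrete in code. A minimal sketch, with a hypothetical KanbanColumn class, of the pull rule "stop starting, start finishing":

```python
class KanbanColumn:
    """Minimal WIP-limited column: finish something before starting more."""

    def __init__(self, name: str, wip_limit: int):
        self.name, self.wip_limit = name, wip_limit
        self.items: list[str] = []

    def start(self, item: str) -> bool:
        if len(self.items) >= self.wip_limit:
            print(f"'{self.name}' is at its WIP limit: finish before starting '{item}'")
            return False
        self.items.append(item)
        return True

    def finish(self, item: str) -> None:
        self.items.remove(item)

doing = KanbanColumn("doing", wip_limit=2)
doing.start("feature A")
doing.start("feature B")
doing.start("feature C")   # refused: the limit forces finishing over starting
doing.finish("feature A")
doing.start("feature C")   # now it flows
```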

Why "Optimize the Whole" Matters

Optimizing the whole is crucial because it addresses a fundamental contradiction in many organizations: the pursuit of resource efficiency versus flow efficiency. Traditionally, incentives have pushed for each person, team, or stage of a process to be as “busy” as possible, aiming to maximize individual resource utilization. However, this obsession with local resource efficiency (making sure no one is idle) is often catastrophic for flow efficiency—that is, the speed and smoothness with which value moves from the initial idea to the hands of the end user.

When each component of the system focuses on its own optimization, bottlenecks, waiting queues, and handoffs arise, breaking the continuity of flow. Paradoxically, when “everyone is very busy,” it’s often a clear sign that there is a serious problem with the flow of value to the user. Work piles up, deliveries are delayed, and the organization is investing significant effort in activities that don’t quickly translate into real value.
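
One way to make this tension concrete is a toy queueing model. Under textbook M/M/1 assumptions (random arrivals, a single stage), the average time an item spends in the stage is 1/(mu − lambda), so as utilization approaches 100% the time in system explodes. The numbers below are purely illustrative, not measurements from any real team:

```python
# Toy M/M/1 queue: one stage that can complete mu = 10 items per week.
# Classic result: average time in the stage (queue + work) = 1 / (mu - lam).
mu = 10.0  # service rate, items/week

for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = utilization * mu              # arrival rate, items/week
    time_in_system = 1.0 / (mu - lam)   # weeks per item
    print(f"utilization {utilization:.0%}: {time_in_system:5.2f} weeks per item")

# Going from 50% to 95% utilization multiplies lead time by ten:
# 0.20 weeks per item becomes 2.00, long before anyone looks "idle".
```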

By optimizing the whole, we achieve:
  • Avoiding invisible bottlenecks that block value to the user, by having a global view of the system.
  • Drastically reducing waste: unused features, endless waits, unnecessary rework, and the false sense of productivity.
  • Enabling faster learning and the ability to build only what truly matters, as the feedback cycle is accelerated.
The true goal is not for everyone to look busy, but for the flow of value to the user to be constant, predictable, and sustainable.

Maximum busyness, minimal progress. Optimizing the whole means avoiding this.


Conclusion: The Transformative Impact

After years of experimenting with teams that truly optimize the whole, I can say you never want to go back. Teams become more resilient, grow faster, and find deeper meaning in their work.
I’ve learned that optimizing the whole is not just a principle—it’s a way of working that transforms teams and individuals, and above all, maximizes real impact for the user.
Are you ready to start optimizing the whole in your team? The first step is to dare to look at the entire system and stop obsessing over local metrics.


Saturday, August 09, 2025

Rediscovering my joy of coding:

How AI Is Shaping My Journey as a Tech Leader

I’ve been near a keyboard since the mid-80s, but I only started programming professionally around 1997. Fast forward nearly 30 years, and much of my career has been spent leading teams and working closely with product—so I’ve been coding less directly.

But AI has changed that. Now I can jump back in as a team contributor (I prefer this term because software development is really a collaborative effort) while staying in my leadership role. I can explore new technologies, make quick tweaks, and run experiments without becoming a bottleneck for the team. The usual hurdles—like boilerplate code or ramping up on a new stack—have shrunk.

This has supercharged my learning and experimentation. I can run more tests, dive into new tech, and rediscover the joy of focusing on what truly adds value—without the tedious overhead that used to slow things down.

We’re at a unique moment: with AI as our companion, we can build more software, handle complexity, and reinvent how we work together.

To fellow tech leads: are you feeling this shift too? I’d love to hear if this transformation excites you as much as it does me. We’re in the midst of a profound change, and it’s up to us to shape this new chapter.

Sunday, August 03, 2025

AI and Lean Software Development: Reflections from Experimentation

Exploring how artificial intelligence might be changing the rules of the game in software development: preliminary ideas from the trenches

An exploration into uncharted territory

I want to be transparent from the start: what I'm about to share are not definitive conclusions or proven principles. They are open reflections that come from a few months of intense personal experimentation with AI applied to software development, exploring its possibilities and trying to understand how it affects the Lean practices we usually rely on.

These thoughts are not definitive conclusions based on long experience, but open reflections that I'd like to keep experimenting with and discussing with others interested in this fascinating topic. I'm not speaking as someone who already has the answers, but as someone exploring fascinating questions and suspecting that we're facing a paradigm shift we're only beginning to understand.

The fundamental paradox: speed versus validation

A central idea I keep observing is that, although artificial intelligence lets us work faster, that doesn't mean we should automatically expand the initial scope of our features. My intuition tells me we should keep delivering value in small increments, validate quickly, and decide based on real feedback rather than simply on the speed at which we can now execute tasks.

But there's an interesting nuance I've started to consider: in low-uncertainty contexts, where both the value and the implementation are clear and the team is very confident, it might make sense to advance a bit further before validating. Even so, my gut tells me that keeping the discipline to avoid falling into speculative design is fundamental: although AI makes it easy, it can endanger the simplicity and future flexibility of the system.

The cognitive crisis we don't see coming

Chart: While development speed with AI grows exponentially, our human cognitive capacity remains constant, creating a "danger zone" where we can create complexity faster than we can manage it.

Here I do have a conviction that becomes clearer every day: we should now be much more radical about deleting code and features that aren't generating the expected impact.

What this visualization shows me is something I feel viscerally: we have to be relentless to keep complexity from devouring us, because however much AI we have, human cognitive capacity hasn't changed, both for managing technical complexity and for users managing the growing number of applications and features.

We're at the critical point where the blue line (AI speed) crosses the red one (our capacity), and my intuition tells me that either we develop radical disciplines now, or we move into that red zone where we create more complexity than we can handle.

The amplified Lean paradox

But here is the crux of the matter, and I think this table captures it perfectly:

Table: AI removes the natural constraints that kept us disciplined (at least some of us), creating the paradox that we need to recreate those constraints artificially through radical discipline.

This visualization seems to capture something fundamental I keep observing: AI removes the natural constraints that kept us applying Lean principles. Before, the high cost of implementation naturally forced us into small batches. Now we have to recreate that discipline artificially.

For example, look at the "Small Batches" row: traditionally, development speed was the natural constraint that forced us to validate early. Now, with AI, that brake disappears and we run the risk of unconscious scope growth. The countermeasure isn't technical, it's cultural: explicitly redefining what "small" means in terms of cognitive load, not time.

The same goes for YAGNI: the high cost of implementation used to be a natural barrier against speculative design. Now AI "suggests improvements" and makes overengineering tempting and easy. The answer is to make YAGNI even more explicit.

This is the paradox that fascinates me most: we have to become more disciplined precisely when the technology makes it easier not to be.

Starting from this general intuition, I've identified several specific patterns that worry me and some opportunities that excite me. These are observations that emerge from my daily experimentation, some clearer than others, but all of them seem relevant enough to share and keep exploring.

On scope and complexity

A shift in the "default size" of work

AI makes it easy to implement features or refactors immediately, which can unconsciously lead us to increase their size. The risk I perceive is losing the small-batch discipline that is key to early validation.

Ongoing exploration: My intuition suggests explicitly redefining what "small" means in a context with AI, focused on cognitive size rather than just implementation time. One way to achieve this is to lean on practices like BDD/ATDD/TDD to limit each cycle to a single externally validatable test or behavior, as in the sketch below.
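
As an illustration (not a prescription), here is what "one cycle, one behavior" can look like in TDD style; apply_discount is a hypothetical function invented for this example:

```python
def apply_discount(total_cents: int, loyal_customer: bool) -> int:
    """Smallest implementation that makes the current tests pass."""
    if loyal_customer:
        return total_cents - total_cents // 10  # 10% off for loyal customers
    return total_cents

# Each cycle pins down exactly one externally observable behavior.
def test_loyal_customers_get_ten_percent_off():
    assert apply_discount(10_000, loyal_customer=True) == 9_000

def test_new_customers_pay_full_price():
    assert apply_discount(10_000, loyal_customer=False) == 10_000
```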

Amplified speculative design

On several occasions I've had to undo work done by the AI because it tries to do more than necessary. I've observed that the AI lacks sensitivity to object-oriented design and has no awareness of the complexity it generates, creating it very quickly until it reaches a point it can't get out of and enters a loop, fixing one thing and breaking others.

Reflection: This suggests reinforcing deliberate practices such as TDD, walking skeletons, or strict feature toggles.

A new kind of "overengineering"

My initial experience suggests that the ease AI offers can lead to adding unnecessary features. It's not the classic overengineering of the architect who designs a cathedral when you need a hut. It's more subtle: adding "just one more feature" because it's easy, creating "just one extra abstraction" because the AI can generate it quickly.

Key feeling: Reinforcing the YAGNI principle even more explicitly seems necessary.

On workflow and validations

Distinguishing visible work from released work

My experience tells me that rapid development must not confuse "ready to deploy" with "ready to release". My feeling is that keeping a clear separation between deployment and release remains fundamental.

I've also built small features several times that ended up unused. Although, to be honest, since eliminating waste and basal cost is deeply ingrained in me, I simply deleted the code afterwards.

Opportunity I see: AI can accelerate development while we validate with controlled experiments such as A/B testing. A minimal sketch of this deployment/release separation follows.
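
A sketch of the idea, with a hypothetical flag registry: the code ships with every deployment, but exposing it to users is a separate, reversible data change, and stable bucketing gives simple A/B-style cohorts to validate against.

```python
import hashlib

# Hypothetical flag registry: release decisions live in data, not in code.
FLAGS = {"new_checkout": {"enabled": True, "rollout_percent": 10}}

def is_released(flag_name: str, user_id: str) -> bool:
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Stable bucketing: the same user always lands in the same bucket,
    # so the 10% cohort stays consistent while we measure impact.
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag["rollout_percent"]

if is_released("new_checkout", user_id="user-42"):
    ...  # new code path: deployed weeks ago, released just now
else:
    ...  # current behavior
```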

More work in progress, but with limits

Although AI can enable more parallel work, my intuition tells me this can fragment the team's attention and complicate integration. It's tempting to have three or four features "in development" at the same time because AI makes them all advance quickly.

My current preference: Use AI to reduce the cycle time per story, prioritizing fast feedback, rather than parallelizing more work.

A change in the kind of mistakes we make

My observations suggest that with AI, mistakes can propagate quickly, generating unnecessary complexity or superficial decisions. A shallow decision or a misunderstanding of the problem can materialize as working code before I've had time to reflect on whether it's the right direction.

Exploration: My intuition points toward reinforcing cultural and technical guardrails (tests, decision reviews, the principle of the minimum viable solution).

On culture and learning

Impact on culture and learning

I feel there's a risk of relying too much on AI, which could reduce collective reflection. Human cognitive capacity hasn't changed, and we're still better when we focus on a few things at a time.

Working intuition: AI-assisted pair programming, ownership rotations, and explicit reviews of product decisions could counteract this effect.

Ideas I'm exploring to manage these risks

After identifying these patterns, the natural question is: what can we do about it? The following are ideas I'm exploring; some I've already tried with mixed results, others are working hypotheses I'd like to contrast. Honestly, we're at a very embryonic stage of understanding all this.

Radical deletion discipline
My intuition suggests introducing periodic "Deletion Reviews" to actively remove code with no real impact: dedicated sessions whose main goal is to identify and delete whatever isn't generating value.

"Sunset by default" for experiments
My feeling is that we might need an explicit automatic-expiry policy for unvalidated experiments. If they don't demonstrate value within X time, they are removed automatically, no exceptions. A minimal sketch follows.

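A sketch of what such a policy could look like, with a hypothetical experiment registry in which every entry must carry an expiry date from day one:

```python
from datetime import date

# Hypothetical registry: no expiry date, no experiment.
EXPERIMENTS = {
    "bulk_export": {"expires": date(2025, 9, 1), "validated": False},
    "smart_search": {"expires": date(2025, 12, 1), "validated": True},
}

def active_experiments(today: date) -> list[str]:
    """Sunset by default: unvalidated experiments die on their expiry date."""
    expired = [
        name for name, exp in EXPERIMENTS.items()
        if today >= exp["expires"] and not exp["validated"]
    ]
    for name in expired:
        print(f"'{name}' expired without evidence of value: delete flag and code")
    return [name for name in EXPERIMENTS if name not in expired]

print(active_experiments(date(2025, 10, 1)))  # ['smart_search']
```
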
More rigorous impact tracking
My experience leads me to think about defining explicit impact criteria before writing code, and ruthlessly removing whatever doesn't meet expectations within the agreed time.

Fostering a "disposable software" mindset
My feeling is that explicitly labeling features as "disposable" from the start could make it psychologically easier to delete them if they don't meet expectations.

Continuous reduction of "AI-generated legacy"
I feel that regular sessions to review automatically generated code and remove unnecessary complexity the AI introduced without us noticing could be valuable.

Radically reinforcing the YAGNI principle
My intuition tells me we should explicitly build critical questions into reviews to avoid speculative design: "Do we really need this now? What evidence do we have that it will be useful?"

More rigor in AI-assisted pair programming
My initial experience suggests promoting "hybrid pair programming" to ensure sufficient reflection and structural quality. Never let the AI make architectural decisions on its own.

A fascinating opportunity: cross-cutting concerns and reinforced YAGNI

Beyond managing the risks, I've started to notice something promising: AI also seems to open up new possibilities for architectural and functional decisions that traditionally had to be anticipated from the start.

I'm referring specifically to elements such as:

  • Internationalization (i18n): Do we really need to design for multiple languages from day one?
  • Observability and monitoring: Can we start simple and add instrumentation later?
  • Regulatory compliance: Is it possible to build first and adapt to regulations later?
  • Horizontal scalability and adaptation to distributed architectures: Can we defer these decisions until we have real evidence of need?

My feeling is that these decisions can be deliberately postponed and introduced later thanks to the automated refactoring capabilities that AI seems to provide. This could further strengthen our ability to apply YAGNI and defer commitment; the i18n case in the sketch below makes it concrete.
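
For example, a sketch under the assumption that what matters today is routing user-facing strings through a single seam, not choosing the final i18n mechanism: the hypothetical t() hook is an identity function now and becomes a real catalog lookup only when evidence demands it.

```python
# Deferred i18n: every user-facing string passes through t() from day one,
# but t() does nothing yet. Swapping in a real message catalog later is a
# mechanical refactor (an AI-assisted sweep can catch stray hardcoded strings).

def t(message: str, **kwargs: str) -> str:
    """Placeholder translation hook: identity today, catalog lookup tomorrow."""
    return message.format(**kwargs)

print(t("Welcome back, {name}!", name="Eva"))
```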

The guardrails I believe are necessary

For this to work, I feel we need to maintain certain technical guardrails:

  • Clear separation of responsibilities: so that later changes don't break everything
  • Solid automated tests: so we can refactor with confidence
  • Explicit documentation of postponed decisions: so we don't forget what we deferred
  • Specialized AI for architectural spikes: to explore options when the moment arrives

But I insist: these are only intuitions I'd like to validate collectively.

Working hypotheses I'd love to put to the test

After these months of experimentation, these are the hypotheses that have emerged and that I'd love to discuss and test collectively:

1. Speed ≠ Scope

Hypothesis: We should use AI's speed to validate faster, not to build bigger.

2. Radical YAGNI

Hypothesis: If YAGNI was important before, it may now be critical. Ease of implementation should not justify additional complexity.

3. Deletion as a core discipline

Hypothesis: Treat code deletion as a first-class development practice, not as a maintenance activity.

4. Hybrid pair programming

Hypothesis: Combining AI's speed with human reflection could be the key. Never let the AI make architectural decisions on its own.

5. Reinforced deployment/release separation

Hypothesis: Keep this separation clearer than ever. Ease of implementation could create the mirage of a "finished product".

6. Deferred cross-cutting concerns

Hypothesis: We can postpone more architectural decisions than before, leveraging AI's refactoring capabilities.

An honest invitation to learn together

In short, these are initial ideas and reflections, open to discussion, experimentation, and learning. My intuition tells me that artificial intelligence is radically changing the way we build products and software, amplifying our capabilities but also suggesting the need for even greater discipline in validation, deletion, and radical simplification of code.

My strongest hypothesis is this: AI amplifies both our good and our bad practices. If we have the discipline to keep batches small, validate quickly, and eliminate waste, AI could make us extraordinarily effective. If we don't, it could help us create disasters faster than ever.

But this is just a hunch that needs validation.

What experiences have you had? Have you noticed these same patterns, or completely different ones? What practices are you trying? How does integrating AI into your Lean processes feel in your teams?

We're in the very early stages of understanding all this. We need the perspectives of the whole community to navigate a change that I suspect is a paradigm shift, even though I don't fully understand it yet.

Let's keep the conversation going. The only way forward is to explore together.

Do these reflections resonate with you? Have you noticed similar or completely different patterns? I'd love to hear about your experience and keep learning together in this fascinating, still unexplored territory.