Sunday, March 16, 2025

Lean Software Delivery: Empowering the Team

From my experience, a truly empowered software development team is one that has the ability to impact the entire Value Stream from start to finish. This team, starting from an idea or the identification of a problem (even discovering the problem alongside its users), takes responsibility for delivering value to those customers. This value is not limited to creating new features but also includes protecting what has already been built, minimizing the cost of unnecessary developments, and fostering capabilities that help achieve business objectives efficiently and effectively. End-to-end responsibility involves evolving, operating, and continuously maintaining what they have built, taking full ownership of their development.


https://blackswanfarming.com/value-a-framework-for-thinking/

Value streams can be directed both to external customers and to other teams or departments within a company. At Alea Soluciones, my team was responsible for the value stream for the end customer (management/telephony/TV/internet software for telecommunications operators) as well as the value stream for tools that accelerated and assisted another department in charge of network deployments and physical network operations. At The Motion, our value stream was for the end user of our product. And both at Nextail and currently at Clarity AI, my teams (Team Topologies: Platform Teams) are responsible for internal value streams aimed at accelerating/enabling other teams responsible for value streams to our end customers (Team Topologies: Stream-Aligned Teams).

Therefore, this empowerment—from idea to delivery and maintenance of value—always takes place within a value stream, whether internal or external. In companies, there is usually a combination of both types of value streams.

As John Cutler aptly describes, these are the so-called Product Teams.


https://amplitude.com/blog/journey-to-product-teams-infographic


Product Engineers / Impact-Oriented Approach

The mindset and culture I have always tried to promote align with what now seems to be in vogue: the "Product Engineers" way of thinking. That is, forming teams that do not merely execute tasks assigned by a manager or the Product area, but instead become true opportunity seekers, problem solvers, and value generators. These teams work by identifying problems—or even discovering which problems need to be solved—then offering the best possible solution, progressively evolving it through user and system feedback. This is always done with the end customer in mind, considering both the short and long term. Focusing solely on the short term can lead to neglecting the ability to continuously deliver value through new evolutions of the delivered software.

To give some practical examples, at Alea Soluciones, the entire team rotated through customer interactions, support roles, and meetings with the CEO and department heads. We all got involved in understanding why we did things, what impact we were aiming for, and with that knowledge, we were able to develop solutions iteratively, adapting them to the desired impact. This allowed us to learn more about our business and continuously provide better solutions.

At Clarity AI, within the Platform teams, we have conducted numerous interviews with internal users, used surveys, collaborated with interested teams on joint developments, and implemented beta tester and champion programs when developing new internal products. We also rotate support responsibilities so that everyone remains connected to customer needs and usage.

How to Foster This Mindset

Ultimately, it’s about shaping the team’s culture—something that sounds easy when put into words but is, in my experience, one of the hardest things to achieve (enough to fill entire books).

As we’ve been exploring in this series of articles, Lean Software Development is geared toward fostering this kind of culture—one that maximizes value (from the customer’s perspective), minimizes waste, and naturally aligns with customer needs.

What has worked for me in the past to help build this type of culture includes:

  • Prioritizing it in hiring, by asking specific questions about product impact and past contributions. I try to assess whether candidates see software as a means to an end or as the end itself.
  • Evaluating user empathy and a focus on quality and testing in the hiring process, as I believe these aspects are closely tied to the mindset of delivering sustainable value.
  • Ensuring a critical mass of team members who already have this mindset so that it can spread. In some cases, this has meant temporarily bringing in experienced individuals or hiring external experts to help establish that foundation.
  • Providing training on vertical slicing, product topics, and Domain-Driven Design (DDD), emphasizing ubiquitous language, domains, etc.
  • Technical coaching, training, and collaborations with professionals who embody this mindset (I’ve been fortunate to do this with Carlos Ble, Modesto San Juan, Alfredo Casado, Xavi Gost, Aitor Sanz, Codium, etc.). I also keep certain companies on my radar—those I know can bring this mindset in case an opportunity for collaboration arises (Codium, 540, Leanmind, Code Sherpas, Codesai, tech93, etc.).
  • And finally, what some consider my superpower: Being an absolute pain in the ass—constantly repeating the same message about user focus, sustainable quality, impact-driven development, and so on. 😆

Empowered Teams, Balance, and Trust

For a team to be truly empowered, it must have the ability to decide what to work on and how to approach it—while always staying aligned with the organization’s needs. This means properly balancing short-, medium-, and long-term priorities, as well as finding the right mix between behavioral changes (new features) and structural changes (design improvements or architectural changes).

If the team lacks this decision-making power and an external entity dictates what they should work on at all times—without real discussion—several common problems tend to arise:
  • An imbalance in priorities, which over time slows the team down.
  • The accumulation of technical debt or a decline in code quality.
  • The addition of new features without the necessary evolution of the system’s architecture.
https://x.com/johncutlefish/status/1622093852969680896

How to Enable Teams to Make These Decisions

In my experience, ensuring that the team can take ownership of these decisions requires:
  • Earning the organization’s trust by working responsibly and staying aligned with business needs.
  • Delivering quickly and in small steps, making it much easier to continuously blend small behavioral changes with structural improvements. This ensures a continuous flow of user value while maintaining the system’s evolution and sustainability.
  • Following Lean Software Development and Extreme Programming (XP), which naturally guide us toward this way of working—small steps, maximizing value.

Ultimately, Lean Software Development aims to create positive feedback loops, where:
  • Small, incremental impacts build trust within the organization.
  • That trust leads to greater autonomy for the team.
  • Autonomy enables better decision-making, driving continuous improvement (Kaizen).
If the team is not initially truly empowered, it is up to us to analyze the situation and propose or implement small changes to improve it. The most crucial aspect of this process is earning the organization’s trust, as this is what accelerates the shift toward empowerment. After all, no one will grant more autonomy unless the team consistently demonstrates, step by step, that it is responsible and systematically strives to achieve a positive impact.

Just as Lean creates a positive reinforcement loop, where:
  • We gain more autonomy, which allows us to have a greater impact, and
  • That impact generates even more autonomy for the team,
there is also a negative reinforcement loop:
  • If the team fails to deliver frequently or with the necessary quality, the organization trusts it less.
  • That lack of trust reduces the team’s ability to make decisions, limiting its autonomy.
  • In turn, this reduces opportunities to improve delivery frequency and quality, perpetuating the negative cycle.

Empowerment and Architecture


As we’ve discussed, an empowered team is one that has end-to-end responsibility, from idea to production deployment and operation of the solutions they develop. For this to happen effectively, the following conditions must be met:
  • Minimizing (or eliminating) dependencies on other teams. When dependencies do exist, there should be clear interfaces and rules to coordinate work, minimizing bottlenecks.
  • Working in an environment and on a platform that allows the team to autonomously deploy and operate their solutions in production.
Of course, we all know that these conditions are not simply given to us—we must continuously and consciously invest effort in:
  • Eliminating dependencies with other teams and designing our solutions to avoid creating new ones.
  • Continuously improving and strengthening our development, deployment, and operations platform. This can be seen as making our "workplace" habitable, recognizing that without ongoing investment, it will inevitably degrade.

Concrete Practices That Have Worked for Me in Different Teams

  • Maintaining a sustainable pace (Extreme Programming) and allowing time for continuous improvement and experimentation. It’s critical to avoid operating at full capacity all the time, ensuring enough slack to address internal improvements and explore new ideas.
  • Applying "Bounded Contexts" from Domain-Driven Design (DDD) to structure architecture based on business/domain criteria rather than technical components.
  • Avoiding separate QA teams and assigning quality responsibility to the development team itself. While specialized quality profiles may exist, ownership of quality should be shared by the entire team.
  • Embracing the "You build it, You run it" principle, eliminating the need for a separate operations team. This encourages the team to take responsibility not only for building the solution but also for running it in production.
  • Investing in observability and monitoring, both from a product perspective and a technical standpoint, to gain a better understanding of performance and potential issues.
  • Absorbing work from other teams when it helps eliminate dependencies. This should be done strategically, considering knowledge requirements and workload, but in many cases, it’s more efficient to acquire the necessary expertise and automate certain tasks rather than remain dependent on another team.
  • Working in small increments (vertical slicing, deployment pipeline automation/optimization, exhaustive testing, etc.) to enable continuous delivery and reduce risk with each change.

Non-Empowered Teams

In some cases, the biggest obstacle to team empowerment is the organization itself or the company culture. The most common scenarios I’ve encountered include:
  • Division by specialties and functions: When the organization is structured into separate areas such as frontend, backend, operations, QA, etc., making collaboration and team empowerment difficult.
  • Teams as "feature factories": Even when cross-functional teams exist to some extent, they are often treated as mere executors of solutions decided by Product or Business, with no real decision-making autonomy.
  • Cross-functional teams with technical responsibilities: Sometimes, organizations attempt to create cross-functional teams, but their responsibilities are assigned based on technical components rather than business domains. This means that delivering any functionality requires coordination across multiple teams, limiting agility and autonomy.
If these approaches define the status quo in our organization, there’s no alternative: change is necessary. Fortunately, our industry already offers extensive material on team organization, the pros and cons of different structures, the relationship between software architecture and team organization, and team management. Some valuable concepts in this area include Team Topologies, strategic patterns from Domain-Driven Design (DDD), Conway’s Law, Kotter’s change model, Systems Thinking, and socio-technical systems.

From experience, I can tell you that waiting for changes to happen magically (spoiler: they won’t) is a bad strategy. When I’ve seen the need for change, I’ve taken the initiative to drive it myself. You’d be surprised how much can be done without asking for permission.

Examples and Experiences

Alea Soluciones

At Alea Soluciones, we had almost complete autonomy—not only in deciding what to work on but also in how to implement our solutions. We coordinated directly with the CEO and key department heads, and all product and organizational decisions were made by our team. Our starting point was always the company's overall strategy, integrating continuous feedback from both internal and external customers. Additionally, our working methodology was entirely our own, evolving constantly through a process of continuous improvement. We even had the freedom to make hiring decisions, as long as we stayed within the agreed budget.

This autonomy and empowerment were deliberately pursued, and to achieve it, we focused on:
  • Gradually earning the company's trust by delivering high-quality solutions with a significant impact.
  • Taking on work from other departments to eliminate dependencies.
  • Encouraging multidisciplinary skills within the team to avoid silos.


The Motion

At The Motion, we had a high level of empowerment, but initially the product and design/UX functions were separate and worked several months ahead of the rest of the team. This model, focused mainly on backend-driven video ad generation, worked well at first and didn’t create friction. However, as the product evolved and required configuration and administration interfaces, this gap started to cause problems.

As the organization evolved, we brought in team members with frontend and layout skills who also had a strong sense of product development and visual design. We also added a designer with experience in web layout, which facilitated a gradual design evolution. In most cases, we worked in pairing between the designer and a frontend developer, allowing us to progress in small increments and eliminate the initial delays. This shift gave us greater autonomy and better synchronization between design and development.

Clarity AI

The teams I’ve built at Clarity AI were designed from the start with end-to-end responsibility, allowing me to enjoy the benefits of empowered teams from day one.

When I joined in 2020, there was already a transition underway to create cross-functional teams, although reporting lines were still structured by specialization (frontend, backend, data engineering, and cloud/systems). This shift toward cross-functional teams aligned with my vision, so I joined the tech leadership team to accelerate the transformation using Team Topologies principles. Over the following months, we adjusted reporting structures, redefined guild functions to operate as communities of practice, and adapted the engineering career framework to fit the new structure.

This new way of working enabled us to support rapid growth, creating both stream-aligned cross-functional teams and platform teams.

Today, after much effort, the teams are highly autonomous, with near-complete end-to-end responsibility. At Clarity AI, some teams require specialized knowledge of sustainability, and we’re working on integrating this expertise in the best way possible to further enhance their empowerment.

Additionally, we've embedded key engineering principles that reinforce end-to-end responsibility within teams:
  • "You Build It, You Run It"
  • "Squad Autonomy Over Standardization"
  • "Flow-Driven Product Delivery"

Applying Lean Principles


Empowering teams is not just an effective organizational strategy—it embodies the core of Lean principles. By reducing unnecessary approvals and handoffs, we eliminate waste, allowing the team to focus on what truly matters and accelerate their work without bureaucratic obstacles. This approach reinforces continuous learning, as the team can experiment and adapt quickly, developing a deep and shared understanding of their own processes and outcomes.

At the same time, autonomy enables faster decision-making, removing bottlenecks that typically slow down development and allowing us to deliver value more efficiently. Quality becomes a shared responsibility, embedded within the team, giving them the confidence to uphold high standards without relying on external reviews.

By applying these Lean principles, we create a more efficient, adaptable, and innovative software development environment. This not only speeds up value delivery but also naturally aligns with small, incremental steps, making continuous improvement and technical excellence an inherent part of the team’s daily work.


References and related links

Special Thanks:

This article has been improved with feedback from:

Sunday, March 09, 2025

Lean Software Development: Deliver as Fast as Possible

One of the fundamental principles of Lean Software Development is to deliver value as quickly as possible. However, this does not simply mean increasing the pace of work but rather optimizing the flow of value—delivering frequently, obtaining feedback, and using it to constantly adapt, thereby improving efficiency in value delivery. 

Let's explore how to achieve this by adjusting processes to move efficiently and confidently. 

Speed and Direction: It’s Not Just About Developing Faster

Being more efficient is not about building more (see The Building Fallacy) or always staying busy (resource efficiency, see The Resource Utilization Trap). Speed, in physical terms, has a direction, and moving fast in the wrong direction is the greatest waste we can have (see Eliminating Waste in Development). To move in the right direction, we need: 
  • Continuous Deployment: To obtain frequent feedback.
  • Humility: To accept that we do not always know what the customer needs or what the best technical solution is.
  • Adaptation: Constantly adjusting our technical and product direction based on the feedback we receive.
Developing quickly without acting or deciding based on feedback is like running in circles. We exhaust ourselves but make no real progress. In fact, it’s even worse because we accumulate Basal Cost without any benefit (see The Basal Cost of Software).

How Fast is Fast?

From a practical perspective, moving fast means deploying multiple times a day. Deploying this frequently requires doing so on demand, safely, and quickly. In an ideal world, these deliveries would go directly to the end customer. However, depending on the context, direct deployment to customers may not always be possible, or it may only be feasible for certain users (e.g., in embedded systems development or operating system components). These cases should be the exception (see Jocelyn Goldfein’s Model).

About 15 years ago, at Alea Soluciones, we deployed to production between three and five times per week, which was reasonable considering we were working with systems installed in our customers' data centers. Later, at companies like The Motion, Nextail, and Clarity AI—cloud-based multi-tenant environments—the teams I worked with achieved multiple deployments per day.

In all these cases, we reached this delivery speed using Trunk-Based Development, TDD, Continuous Integration, and pairing/mob programming, applying practices from eXtreme Programming (XP).

Control and Speed: The Right Balance

As Canadian Formula 1 driver Gilles Villeneuve said: "If everything seems under control, you’re not going fast enough." ;)



Having everything under control may seem ideal, but in reality, it’s a sign that you could be moving faster. How much faster? Until you start noticing some failures. This idea is key when developing software at high speed.

In my experience, when working with TBD, TDD, and Continuous Delivery, confidence in the process grows, and you start accelerating—delivering faster and faster. Inevitably, a mistake will happen. However, by working in small increments, the risk is low, and fixing the issue is usually quick. After correcting the error, it’s common for the team to slightly reduce speed.

This cycle of accelerating, learning, and adjusting is normal and, in my opinion, is a sign of a healthy team that continuously improves while maintaining a high delivery pace.

The Need for Robust Deployment Pipelines

In Lean, the flow of value begins with an idea or experiment we want to implement or validate and ends when we have delivered the increment, obtained feedback, and are ready to evaluate and decide on the next steps.

While Lean Manufacturing promotes reducing variability to standardize and optimize flow, in Lean Product Development or Lean Software Development, not all phases require low variability. For example:
  • At the beginning of the value stream, experimentation and creativity are needed to generate diverse ideas and approaches.
  • At the end of the cycle, we analyze feedback to decide what adjustments to make.
However, in the systematic process of deploying to production, we must minimize variability. This phase should be fast, solid, and reliable—something we can trust completely.

The goal is for deployments to be boring, a routine task that runs on autopilot.

To achieve this, deployment pipelines should have the following characteristics:
  • Fast (<10 minutes) and reliable.
  • Provide a high level of confidence in the solution through good automated tests.
  • No flaky tests—unstable tests should not exist.
This is only possible if we treat deployment pipelines as first-class citizens in our system, developing and maintaining them with the same level of quality as the rest of the system.
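As a concrete illustration, the characteristics above can be sketched as a small pipeline runner: stages run in order, a red stage stops the pipeline immediately (no silent retries that would mask flaky tests), and exceeding the time budget is itself treated as a failure. This is a minimal sketch, not a real CI system; the `run_pipeline` helper and the stage names are illustrative assumptions.

```python
import time

def run_pipeline(stages, budget_seconds=600):
    """Run pipeline stages in order; fail fast and flag budget overruns.

    stages: list of (name, callable) pairs; each callable returns True on success.
    Returns (ok, elapsed_seconds, log).
    """
    log = []
    start = time.monotonic()
    for name, stage in stages:
        ok = stage()
        log.append((name, ok))
        if not ok:
            # Fail fast: a red stage stops the pipeline immediately,
            # with no automatic retries that would hide flaky tests.
            return False, time.monotonic() - start, log
    elapsed = time.monotonic() - start
    # Exceeding the <10-minute budget counts as a pipeline defect in itself.
    return elapsed <= budget_seconds, elapsed, log

# Example: three green stages complete within budget.
stages = [
    ("unit-tests", lambda: True),
    ("acceptance-tests", lambda: True),
    ("deploy", lambda: True),
]
ok, elapsed, log = run_pipeline(stages)
```

Treating the time budget as a first-class failure condition is what keeps the pipeline "fast and reliable" over time instead of slowly degrading.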

Continuous Deployment: The Most Efficient Method

Ideally, the changes we make should be deployed to production individually, one after another. This allows us to obtain feedback quickly and deliver continuous value increments. As mentioned in this article and previous ones, this way of working provides major benefits:
  • We continuously adjust course based on feedback.
  • Mistakes have a low impact since they involve small changes and are easy to roll back.
  • Understanding errors is simpler because the changes causing them are more limited in scope.
However, deploying is not free—it has an associated cost. It requires investment in:
  • Developing automated deployment pipelines.
  • The wait time incurred every time we run the pipeline.
This cost is called the “transaction cost,” and when it is high, it limits how fast we can deliver.

Keeping Transaction Costs Low

Lean Software Development recommends keeping transaction costs as low as possible. To achieve this, it is essential to:
  • Invest in automated deployment pipelines.
  • Use automated tests that allow us to go directly to production if they pass.
This provides the confidence needed to deploy without manual intervention and ensures quality in every production release.

In the teams I have worked with, our default approach is Continuous Deployment. This means that any change committed to the main branch goes directly to production. To achieve this, we separate deployment from release using feature toggles, allowing us to roll out new features in a controlled manner, without making them immediately visible to end users.
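The separation of deployment from release can be sketched with a minimal in-memory feature toggle. This is an illustrative sketch, not any specific toggle library; the `FeatureToggles` class, the `new-discount-engine` flag, and the `checkout_price` function are all assumed names.

```python
# Minimal feature-toggle sketch: deployment puts the code in production,
# but the release to users is a runtime decision controlled by a toggle.

class FeatureToggles:
    def __init__(self, flags=None):
        # In a real system, flags would come from configuration or a toggle service.
        self._flags = dict(flags or {})

    def is_enabled(self, name, default=False):
        return self._flags.get(name, default)

    def set(self, name, enabled):
        # Releasing (or hiding) a feature is a config change, not a redeployment.
        self._flags[name] = enabled


def checkout_price(amount, toggles):
    # The new code path is deployed but stays dark until the toggle is flipped.
    if toggles.is_enabled("new-discount-engine"):
        return round(amount * 0.9, 2)
    return amount

toggles = FeatureToggles({"new-discount-engine": False})
price_before = checkout_price(100.0, toggles)  # deployed, not yet released
toggles.set("new-discount-engine", True)       # "release" = flip the toggle
price_after = checkout_price(100.0, toggles)   # new behavior now visible
```

The key property is that flipping the toggle changes user-visible behavior without a new deployment, which is what makes rolling out features in a controlled manner possible.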



Additionally, we have kept our deployment pipeline execution times between 10 and 15 minutes, ensuring that the delivery cycle remains agile and does not disrupt the workflow. If something fails, we typically implement an automatic rollback when post-deployment smoke tests detect an issue. This minimizes impact and ensures we can quickly regain control.
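The deploy-then-verify-then-roll-back flow described above can be sketched as follows. The three callables (`deploy`, `smoke_test`, `rollback`) stand in for whatever the platform actually provides and are assumptions of this sketch, not a real deployment API.

```python
def deploy_with_rollback(deploy, smoke_test, rollback):
    """Deploy, run post-deployment smoke tests, roll back automatically on failure.

    All three arguments are callables supplied by the platform; returns
    "deployed" or "rolled-back".
    """
    deploy()
    if smoke_test():
        return "deployed"
    # Smoke tests failed: restore the previous version without human intervention.
    rollback()
    return "rolled-back"

events = []
result = deploy_with_rollback(
    deploy=lambda: events.append("deploy v2"),
    smoke_test=lambda: False,          # simulate a failing smoke test
    rollback=lambda: events.append("rollback to v1"),
)
```

Because each deployment is a small change, the automatic rollback restores a known-good state in seconds, which is what keeps the impact of a failure low.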



As Jez Humble, Joanne Molesky, and Barry O'Reilly highlight in Lean Enterprise:
"The goal of continuous delivery is to make it safe and economic to work in small batches. This in turn leads to shorter lead times, higher quality, and lower costs."
This approach allows us to confidently make small changes, accelerate delivery times, and reduce risk and error costs.

Conclusions

To deliver value quickly in Lean Software Development, it is crucial to optimize the workflow without compromising quality. Speed does not mean working faster without control but rather continuously adjusting based on feedback, allowing for constant adaptation. Teams following this philosophy achieve continuous and secure deliveries, minimizing risks and maximizing value.

Key Principles:
  • Small, frequent steps: Delivering in small increments reduces risk and makes error correction easier.
  • Low transaction costs: Investing in fast, reliable deployment pipelines maintains speed without affecting quality.
  • Continuous feedback: Rapid delivery enables constant feedback, improving and adapting the product.
  • Confidence in quality: With automated testing and a solid delivery flow, teams can be sure that each iteration releases a robust product.
In summary, this approach not only optimizes value delivery but also allows teams to operate sustainably, with fast adjustments and minimal error impact.

Related Resources

Sunday, March 02, 2025

Good talks/podcasts (March I)

These are the best podcasts/talks I've seen/listened to recently:
  • Acceptance Testing Is the FUTURE of Programming (Dave Farley) [AI, Generative AI, testing] [Duration: 00:16] (⭐⭐⭐⭐⭐) This talk explores a radical idea that acceptance testing, rather than being just a way to verify software, may be the next step in the evolution of how we program computers, using AI to generate code from detailed specifications
  • 5 Things That Waste Time & Money On A Software Project (Dave Farley) [Agile, Teams, testing] [Duration: 00:15] (⭐⭐⭐⭐⭐) This interesting video identifies common pitfalls in software development, such as not building what users want, using big teams, delaying feedback, chasing features over quality, and relying on manual regression testing, and advises optimizing for learning, fast feedback, and automated testing to improve productivity and reduce waste
  • Marty Cagan - Transformed: Moving to the Product Operating Model at just product 2023 (Marty Cagan) [Generative AI, Product Strategy, Product Team] [Duration: 01:05] This presentation explains the product operating model with its three dimensions (how to build, how to solve problems, and how to decide which problems to solve), four competencies, five product concepts, and underlying principles to help companies transform and work like the best product companies
  • Kent Beck on why software development is an exercise in human relationships (Kent Beck, Jack Hannah) [Engineering Culture, Teams, leadership] [Duration: 00:50] (⭐⭐⭐⭐⭐) Kent Beck discusses how software development is fundamentally an exercise in human relationships, and how creating a safe, supportive environment (a "forest") leads to better results than a resource-scarce, high-pressure one (a "desert"). He also touches on remote work, generational teaching, and test-driven development. An interview full of interesting insights.
  • The Software Industry's Evolution, Complex Architecture & Problem-Solving At Scale | Michael Nygard In The Engineering Room Ep. 35 (Michael T. Nygard, Dave Farley) [Architecture, Data Engineering, Platform, Platform engineering] [Duration: 00:44] A discussion between two experts on incremental design and architecture, including data mesh solutions, managing complexity, platform engineering, and the challenges of balancing innovation with stability
  • DuckDB and Python: Ducks and Snakes living together (Alex Monahan) [Data Engineering, Data Science, Python] [Duration: 01:02] This Talk Python episode explores DuckDB, a fast, in-process analytical database, and its cloud companion MotherDuck, highlighting their capabilities and use cases in Python and data science workflows.
Reminder: All of these talks are interesting, even just listening to them.

The talks and podcasts that I have rated as five stars are also available on the following website:
Related:

Sunday, February 09, 2025

Good talks/podcasts (Feb I)

 

These are the best podcasts/talks I've seen/listened to recently:
  • Patterns of Effective Delivery (Dan North) [Agile, Engineering Culture, Inspirational] [Duration: 00:59] (⭐⭐⭐⭐⭐) This talk explores patterns of effective software delivery, emphasizing that delivery means solving problems, not just writing code, and focusing on optimizing for the right outcomes rather than just process
  • Dantotsu Radical Software Quality Improvement (Fabrice Bernhard) [Inspirational, Lean Software Development, Quality, testing] [Duration: 00:37] This presentation covers how to apply the Dantotsu method—rooted in Toyota’s manufacturing principles—to optimize software development and delivery. By focusing on visual management, team leadership, and systemic solutions, the approach minimizes defects, boosts efficiency, and fosters continuous improvement. Very good ideas on how to improve post-mortems or how to classify problems to improve quality more quickly.
  • Beyond Engineering: The Future of Platforms (Manuel Pais) [Flow, Platform as a product, leadership] [Duration: 00:21] This talk explores applying the "platform as a product" approach beyond engineering, emphasizing how to improve flow and reduce friction across an organization by providing internal services in a self-service manner, and by focusing on the needs of the organization's internal users
  • The Most Dangerous Phrase: SOLID, Scrum And Other Antiques (Dan North) [Agile, Engineering Culture, Management, Technical Practices] [Duration: 00:38] (⭐⭐⭐⭐⭐) This presentation challenges the idea of blindly following software development practices like SOLID and Scrum, urging a re-evaluation of their continued relevance in light of changing contexts and technology, advocating for a more fluid, context-driven approach that prioritizes outcomes, learning, and continuous improvement
  • How Shopify builds a high-intensity culture (Farhan Thawar, Lenny Rachitsky) [Engineering Culture, Product Engineer, leadership] [Duration: 01:40] (⭐⭐⭐⭐⭐) This talk explores how Shopify builds a high-intensity culture through principles like choosing the hard path, prioritizing intensity over hours, and valuing pair programming, while also emphasizing continuous learning, code deletion, and a unique approach to hiring.
Reminder: All of these talks are interesting, even just listening to them.

The talks and podcasts that I have rated as five stars are also available on the following website:
Related:

Wednesday, February 05, 2025

Bilbostack 2025: My Conference Recap

This past January 25, I attended Bilbostack 2025 in Bilbao, one of my favorite tech conferences. Despite the overcast sky and some rain (typical in Bilbao at this time of year), the atmosphere was unbeatable, and it is always a pleasure to return to my homeland and visit family.


Talks

Below are some notes on the talks I attended.


What If "Doing the Right Thing" Weren't the Right Thing? (Jordi Martí)

This was the talk that struck me most at the whole Bilbostack. Jordi delivered a clear and necessary message about power dynamics and the difficulty of achieving true inclusion in technically excellent teams that follow agile practices. He explained how this way of working, which we consider ideal, can actually make it harder to onboard people with different experiences and points of view, especially if they are not used to it and we do not give them the space to contribute, learn, and make mistakes.
Often, those of us who have been in the industry for a while defend these methodologies vehemently, probably out of fear of reliving the anguish we felt early in our careers. Jordi proposes complementing "do the right thing, the right way" with "do it in a way that is sustainable for the people involved."
He invited us to reflect on how we learned through trial and error, yet now often deny that same space to newcomers. Personally, I take his message very seriously. I believe the foundation of everything must be, above all, respect for people, just as Lean states, and that we must create an environment where people can grow, feel that they contribute, and even make mistakes as a natural part of learning.
The ideal balance is an open, inclusive team with a safe environment in which people can experiment, learn, and contribute without being constrained by pre-established dogmas.

That covers the content of the talk, but the way he delivered it was also unbeatable. I will not spoil it here, because I hope he repeats it at some other event. All I can say is that when I grow up, I want to be able to convey ideas the way Jordi does.

Terrible, thanks (Irene Morgado)

I continued with this interesting talk by Irene, focused on the toxicity and problems we often encounter in companies but are not always able to identify. Irene was very didactic in explaining how certain processes work in our brains, what kinds of behaviors or dynamics we tend to repeat, and how we can work on becoming more aware of what we experience day to day at work.
All of this with the goal of improving our relationships and our mindset and, above all, learning to identify problems or toxic patterns that we have often normalized and accept without question. A very valuable talk for reflection and awareness.

Culture Driven Development, the engine of a fast, effective, and sustainable team (Sebi Collell and Censu Karayel)

A fascinating talk in which Sebi and Censu described their Agile/Lean culture and the impact they have achieved at Tech93/Cooltra. They showed us the values and principles their culture is based on and gave us plenty of hints about the practices they use. They focus heavily on the impact (outcome) of every initiative and, something less common but which I consider fundamental, they have developed a very healthy culture of experimentation and, above all, of eliminating any initiative that is superfluous or not aligned with the current focus. They explained their entire product process in detail, and I can only recommend that everyone try to find out and learn how they are doing things.
Their approach is very Lean, both in product and in development, minimizing waste and keeping a constant focus on business impact. I found their framing practice especially valuable: they scope and define the problem, establishing the expected business impact. They also highlighted the different points in the process where they can stop initiatives to avoid the waste of working on something unnecessary.
If there is one thing I can say about this talk, it is that it left me wanting more, but time is limited. So I just hope they keep sharing their learnings and their way of working with the community as generously as they have.
As a complement to this talk, here are some others where you can see, from different angles, the way they work:

When robots learn to talk: Stories of smart factories and data streams (Fernando Díaz)

Finally, I enjoyed Fernando Díaz's talk about what he has learned as a software developer after diving, over the last year and a half, into a new product whose core is streaming and real-time data processing. In this case, the talk was no surprise to me, since I had previously spoken with Fernando about the topic. In fact, he had already shared the idea for the talk and the slides with me. It is also a subject close to home: at Clarity AI, in addition to the teams I was already helping, I am now also in charge of the data platform, so I fully empathize with the complexities and challenges he has encountered coming from a software engineering background.
I really liked the talk for two reasons. First, having worked on industrial systems with real-time information, I believe this is an environment little known to many people in the profession, yet very interesting and fun at the same time. And second, Fernando masterfully described the specific complexities of the real-time data world. When you come from a background as a software developer on information systems, in many cases with web interfaces, the data world presents new complexities you have to adapt to.



I think Fernando captured very well how something seemingly simple, such as a small calculation (for example, subtracting two quantities), becomes considerably more complex in the world of real-time data. These problems come with multiple dimensions of difficulty we are not used to, such as out-of-order data, difficulty correlating it, scalability problems, latency, and ephemeral state, among others.

Post-talks (Networking)

If the talks at BilboStack are very strong, I believe one of its great strengths is the networking afterwards. After all, it is a conference with sessions only in the morning; the rest of the day, and in many cases the night, is devoted to networking with the other attendees. As in recent years, the organizers used the esplanade behind the Euskalduna Palace, next to the entrance of the Maritime Museum, to set up an area where you could enjoy some food and drink while sharing experiences with the rest of the attendees.
Unlike last year, this time we had a typical Bilbao day: quite cloudy, with the occasional shower, but that did not stop us from enjoying everything the organizers had prepared.



There were txosnas, live music, aizkolaris and, above all, lots of laughter and conversation. As in other years, I thoroughly enjoyed all the reunions and conversations. I was left wanting to talk longer with some people, but time is limited.
I could list everyone I talked to and the topics we covered, but I especially remember an intense conversation with part of the Tech93 and Cooltra team (Xavi Ghost, Alex Fernández, and Javier Salinas, among others) about how they worked, their lean product development approach, and the process they followed. We went fairly deep into how to use cost of delay to calculate the impact of initiatives, looking at strategies for handling the different urgency profiles of cost of delay.
I came away with a ton of new ideas and topics to dig into.

As I said, the networking at BilboStack is, as always, unbeatable. You learn, you share, you laugh... I don't know what more you could ask for.

Conclusions

Once again, Bilbostack met (and even exceeded) my expectations. The mix of quality talks, a close-knit atmosphere, and the chance to enjoy a weekend in Bilbao visiting my family makes this conference a must.
I recommend that anyone who has not yet attended sign up for the next edition: you will not regret it.

I don't know whether I've succeeded, but if this post doesn't leave you wanting to sign up for next year's BilboStack, I don't know what else I can do. :)

Monday, January 13, 2025

Developing Software: Postponing Decisions and Working in Small Steps

Translated from the original article in Spanish Desarrollando software: posponiendo decisiones y trabajando en pasos pequeños

In this article of the series on Lean Software Development, after exploring practices for postponing decisions in the product, today we will discuss how to develop software by taking very small steps, delaying decisions, and doing only what is necessary at each moment.

This approach aligns with the principles of Lean Software Development and eXtreme Programming (XP), being a key part of agile development.

Why Work in Small Steps

Working in small steps is essential in uncertain environments. Neither we nor the client always know exactly what is needed to achieve the desired impact. By progressing in small increments, we obtain valuable feedback both from the system—on its functionality and behavior—and from the client. This approach allows us to learn and constantly adjust, avoiding premature decisions that could limit our options or be difficult to reverse.

It is a continuous learning process where we avoid speculative design and unnecessary features. By moving forward step by step, we accept that we do not know everything from the outset and choose to experiment and validate constantly.

Benefits of Working in Small Steps

Working in small steps with continuous feedback offers numerous benefits. Geepaw Hill, in his article "MMMSS: The Intrinsic Benefit of Steps," brilliantly describes the effects of this practice on teams. Below is a summary, though I recommend reading the full article or the series "Many More Much Smaller Steps."

Geepaw mentions eight benefits of working in steps of less than a couple of hours:

Benefits in Responsiveness:  

  • Interruptibility: You can handle interruptions or change focus without breaking the workflow.  
  • Steerability: After each small step, you can reflect, incorporate feedback, and adjust the direction if necessary.  
  • Reversibility: If a step does not meet expectations, reverting it results in minimal time loss.  
  • Target Parallelism: By advancing in consistent small steps, it is possible to work on different areas of the system or for different stakeholders without leaving tasks half-done.  

Human Benefits:

  • Cognitive Load: Forces you to reduce cognitive load by limiting the combinations and cases you must consider.  
  • Pace: Establishes a steady team rhythm with cycles of quick rewards (successful tests, commits, deployments, etc.).  
  • Safety: Small changes carry less risk than large ones. With frequent tests and daily deployments, the maximum risk is reverting the last change.  
  • Autonomy: Allows the team to make continuous decisions, requiring constant effort to understand and empathize with the user to address problems or implement improvements.  

Working in Small Steps and Postponing Decisions

Since around 2009-2010, I have tried to apply the practice of working in very small steps in all the teams I collaborate with. These steps usually take a few hours, allowing production deployments several times a day and achieving visible changes for the client in one or two days at most. This agile approach minimizes risk and maximizes responsiveness, but it requires discipline and the rigorous application of agile development practices proposed by eXtreme Programming (XP).

Practices and Tactics for Working in Small Steps

Below, I present some practices and strategies that enable us to work this way. Sometimes it’s hard to separate them, as they are closely interrelated and complement each other.

Iterative and Incremental Development

The most important technique we use is also the simplest and, at the same time, the least common. Instead of starting with a complete solution and dividing it into steps for implementation, we progressively grow the solution until it is “good enough,” allowing us to move on and invest in solving another problem. That is, we focus on delivering increments (to the end client) that align with the idea of the solution we are aiming for, all while keeping the solution and the problem in mind. We use feedback to ensure that we are heading in the right direction. Additionally, not being afraid to iterate based on this feedback allows us to work in small, low-risk steps.  


For example, starting from an initial problem with a potential solution, we generate increments (Inc 1, Inc 2, etc.) in less than a day. Each increment is delivered to the user for feedback, which helps us decide the next step and whether the solution is already good enough. This way, we avoid waste (gray area) by not doing unnecessary tasks, thus reducing the system's Basal Cost.

https://x.com/tottinge/status/1836737913842737382

Vertical Slicing

Vertical slicing involves dividing functionalities and solutions in a way that allows for an incremental approach to development, where each small increment provides value in itself. This value can manifest as user improvements, team learning, reduced uncertainty, among others. Instead of splitting stories by technical layers (infrastructure, backend, frontend), we divide them into increments that deliver value and typically require work across all layers.

In my teams, we apply this vertical slicing rigorously, ensuring that no increment takes more than two days, and preferably less than one day. We use various heuristics and processes for vertical slicing (https://www.humanizingwork.com/the-humanizing-work-guide-to-splitting-user-stories/), such as the “hamburger method” by Gojko Adzic, which I will describe later.  

Even though we use this vertical slicing to break down what we want to implement into increments, this doesn’t mean we always implement all the identified increments. On the contrary, the goal is always to grow the solution as little as possible to achieve the desired impact.

Technical Segmentation

As a complement to vertical slicing, in my teams, we also divide these increments that deliver value to the user into smaller tasks, which we also deploy to production. These tasks are more technically focused and usually take less than two or three hours.

Deploying these technical increments allows us to obtain feedback primarily from the system: Does our CI pipeline continue to work well? Does the deployed code cause any obvious problems? Does it affect performance in any way?

This practice forces us to maintain a low deployment cost (in terms of time and effort) and allows us to ensure that the workflow continues to operate correctly at all times. This is possible because we have a solid automated testing system, fast CI pipelines, and we work with Continuous Integration/Trunk-Based Development, as we will explain later.  

Being able to apply this technical segmentation is also essential for making parallel changes, implementing significant modifications in small and safe steps, and thereby significantly reducing risk.

Generating Options

Generating options is essential for making well-founded decisions. Every decision should consider multiple alternatives; we usually try to have at least three or four. To facilitate the generation of options, we can ask ourselves questions such as:  

  • What other options would you consider if you had half the time?  
  • Which options require new dependencies?  
  • What solutions have you implemented in similar problems in the past?  
  • What is the minimum degree of sophistication required for the solution?  
  • Who could benefit from the change? Could we deliver it to each user group independently?  

These questions help us generate options that the team can then evaluate, always trying to select those that quickly provide value (learning, capacity, uncertainty reduction, etc.) while committing as little as possible.  

This way of working allows us to move forward in small steps, always maintaining visibility over the different options we can take to continue addressing the problem or redirect it if the steps taken aren’t achieving the desired impact. As you can see, everything converges into working with small advances, learning, making decisions as late as possible, and striving for the simplest solutions.

One tool we use often for generating options and performing vertical slicing is the “hamburger method” by Gojko Adzic.  

With this method, we aim to divide a functionality or solution into the steps necessary to provide value to the user. These steps are visualized as “layers” of the hamburger, and for each layer, we force ourselves to generate at least three or four options. Then we select at least one option from each layer to decide which will be the first increment to implement. Once that first increment is implemented and delivered, and with user feedback in hand, we repeat the process to implement one of the other options.  

This continuous process doesn’t end when we implement all the identified options, but when the functionality is good enough, or there is another functionality or higher-priority problem to invest in. In other words, we invest in what’s most important until the user is satisfied or until a new priority arises. 
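The layered structure of the hamburger method can be sketched as data. This is a hypothetical example: the feature, its layers, and the options are all illustrative, not taken from a real backlog, and the "pick the first option" rule stands in for the team's real selection criteria.

```python
# Hypothetical sketch of the "hamburger method": each layer of a feature
# lists several options, and the first increment picks one option per layer.
# Feature, layers, and options here are illustrative only.
hamburger = {
    "input":      ["manual CSV upload", "REST endpoint", "bulk import"],
    "validation": ["no validation", "required fields only", "full schema check"],
    "processing": ["synchronous", "background job", "streaming"],
    "output":     ["plain text email", "HTML report", "dashboard widget"],
}

def first_increment(layers: dict) -> dict:
    """Pick the simplest (here: first) option from each layer as increment 1."""
    return {layer: options[0] for layer, options in layers.items()}

print(first_increment(hamburger))
```

After delivering this first increment and gathering feedback, the team would return to the table and pick a different option from whichever layer most needs improvement, or stop entirely if the result is already good enough.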


Simplicity 

Simplicity is one of the core values of XP (eXtreme Programming) and, by extension, of well-understood agility. A mantra of agile development is, “Do the simplest thing that could possibly work.” This means starting with the simplest, minimal solution that works, iterating, and improving based on feedback.

The simplest solution is not always the easiest to implement. Sometimes, avoiding unnecessary complexity requires significant effort. True simplicity is the result of conscious design that evolves gradually.

Two-Step Development 

Kent Beck advises us to “Do the simplest thing that could possibly work,” but this is often confused with “the first thing that comes to mind” or “the only thing I know how to do.” An effective way to ensure we are choosing the simplest option possible is to divide any change or increment into two parts:  
  1. Preparation: Adjust the current codebase so the new functionality can be introduced easily.  
  2. Implementation: Introduce the actual change.  
https://x.com/eferro/status/1810067147726508033

This separation avoids speculative design and ensures that only the minimum necessary changes are made to integrate the new functionality, following Kent Beck’s principle:  
“Make the change easy, then make the easy change.”



https://twitter.com/KentBeck/status/250733358307500032
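A tiny, hypothetical illustration of the two steps (the function names and the pricing rules are invented for this sketch):

```python
# Hypothetical example of two-step development. Suppose we must add a
# discounted price for "premium" customers to this existing function:
#
#   def final_price(amount):
#       return amount * 1.21   # VAT hardcoded inline, no seam for discounts

# Step 1 (preparation): refactor so pricing rules are easy to extend.
# Behavior is unchanged; we only create a seam for the upcoming rule.
def final_price(amount: float, discount: float = 0.0) -> float:
    VAT = 1.21
    return amount * (1 - discount) * VAT

# Step 2 (implementation): the actual change is now trivial and low-risk.
def premium_price(amount: float) -> float:
    return final_price(amount, discount=0.10)
```

Each step can be committed and deployed separately: the preparation commit changes no behavior, so if anything breaks it is trivially revertible, and the behavioral change that follows is as small as it can possibly be.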

YAGNI (You Aren't Gonna Need It)

Closely related to the above point, the YAGNI principle reminds us that many ideas we come up with will likely never be needed. It encourages us to focus only on what we need *now* and helps us avoid speculative design, keeping us focused on what is truly relevant at the moment.  

Even when we identify something that might be needed in the near future, YAGNI prompts us to question whether it is truly essential for current needs, reminding us to postpone it. If the system is simple and easy to evolve, it will be easy to introduce those changes later.

Test-Driven Development (TDD) and Outside-In TDD 

Test-Driven Development (TDD) is a practice that involves writing a test first to define the desired behavior of a functionality, before writing the code to implement it. From there, the developer writes the minimum code necessary to pass the test, followed by a refactoring process to improve the code design without changing its behavior. This cycle is repeated continuously, ensuring that every line of code has a clear and defined purpose, avoiding unnecessary or superfluous code.  

Outside-In TDD is a variation of TDD that starts from the broadest business use cases and works its way inward to the system's implementation. By starting from business needs and writing only the code necessary to pass each test at each level (from the highest functional level to the individual pieces of code), this approach ensures that only essential code is created. It prevents unnecessary code or features that are not currently required, avoiding speculative design and adhering to the YAGNI principle.

In our team, we use Outside-In TDD as the default workflow for all new code, except in cases where this flow isn’t beneficial (e.g., spikes, complex algorithms, etc.). This means that approximately 5-10% of the code may be experimental for learning purposes, which is discarded afterward and typically isn’t tested. Another 10% corresponds to tasks where tests are written afterward (e.g., library integrations or complex algorithms). The remaining majority of the code is developed using Outside-In TDD.  

This approach minimizes waste and inherently adheres to the YAGNI principle since no code or design is created that doesn’t align with the current increment. As the current increment is defined through radical vertical slicing, we work in small steps, with minimal waste, and make decisions as late as possible.

An additional advantage of this process is that it facilitates quick error resolution, both in code and design, as progress is constantly verified step by step. When an error is detected, it is most likely in the last test or the last change made, allowing for quick and stress-free recovery.
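One cycle of the test-first rhythm described above can be sketched in a few lines. This is a minimal, hypothetical example (the `slugify` function is invented for illustration), showing only the red-green part of the cycle, not the Outside-In layering:

```python
# A minimal sketch of one TDD cycle. In real TDD the test below is written
# first and fails (red); then the minimum implementation is added (green),
# followed by refactoring. The slugify example is illustrative only.

# Step 2: the minimum implementation that makes the test pass.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

# Step 1: the test, written before the implementation existed.
def test_builds_url_slug_from_title():
    assert slugify("  Lean Software Delivery ") == "lean-software-delivery"

test_builds_url_slug_from_title()  # fails before Step 2, passes after
```

In Outside-In TDD the same rhythm starts with a test at the boundary of the system (a use case or endpoint) and works inward, writing each inner piece only when an outer test demands it.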

Continuous Integration (Trunk-Based Development)

If there is one technical practice that forces and helps us work in small steps, with constant feedback, enabling us to decide as late as possible while learning and adapting at maximum speed, it’s Continuous Integration (CI).  

First, it’s important to clarify that Continuous Integration is an XP practice in which all team members integrate their code into a main branch frequently (at least once a day). In other words, this practice is equivalent to Trunk-Based Development, where there is only one main branch on which all developers make changes (usually in pairs or teams).  

This practice has nothing to do with running automated tests on feature branches. In fact, I would say it is directly incompatible with working on separate branches for each functionality.  

Unfortunately, this approach is not the most common in the industry, but I can assure you that, along with TDD, it is one of the practices that has the most impact on teams. In every team I’ve worked with, the introduction of Continuous Integration/TBD has caused a spectacular change. It has forced us to work in very small (but safe) steps, giving us the agility and adaptability we sought.  

Of course, like any practice, it requires effort and the learning of tactics to frequently deploy to production without showing incomplete functionalities to the user. It’s necessary to master strategies that separate deployment (technical decision) from the release to users (business decision). The most common strategies are:  
  • Feature toggles: Allow features to be turned on or off, perform A/B testing, or show new features only to certain clients (internal, beta testers, etc.).  
  • Gradual deployment: Methods like canary releases or ring deployments allow for a progressive rollout of changes.  
  • Dark launches: Launch a feature without making it visible to the client, only to perform performance or compatibility tests.  
  • Shadow launches: Run a new algorithm or process in parallel with the old one, but without showing results to the end user.
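The first of these strategies can be sketched very simply. This is a hypothetical, minimal toggle (the feature name, groups, and in-memory store are invented); real teams usually back this with a toggle service or configuration system rather than a hardcoded dictionary:

```python
# Minimal sketch of a feature toggle separating deployment (technical)
# from release (business). Feature names and groups are illustrative.
FEATURE_TOGGLES = {
    # deployed to production, but only released to internal/beta users
    "new-checkout": {"enabled": True, "allowed_groups": {"internal", "beta"}},
}

def is_enabled(feature: str, user_group: str) -> bool:
    toggle = FEATURE_TOGGLES.get(feature)
    if toggle is None or not toggle["enabled"]:
        return False
    return user_group in toggle["allowed_groups"]

def checkout(user_group: str) -> str:
    if is_enabled("new-checkout", user_group):
        return "new checkout flow"
    return "old checkout flow"  # safe fallback while the feature is dark
```

The incomplete feature can live on the main branch and be deployed many times a day; flipping the toggle (or widening `allowed_groups`) is the release decision, made independently of deployment.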

Evolutionary Design  

This central XP practice allows us to develop software incrementally, continuously refactoring the design so it evolves according to business needs. In practice, it involves creating the simplest possible design that meets current requirements and then evolving it in small steps as we learn and add new functionalities.

Within evolutionary design, tactics include:  
  • Two-step development.
  • Continuous refactoring in the TDD cycle.  
  • Opportunistic refactoring.  
  • Avoiding premature abstractions (see https://www.eferro.net/2017/02/applying-dry-principle.html).  
  • Parallel changes to keep tests green while making multi-step changes.  
  • Branch by abstraction and the Expand/Contract pattern to facilitate parallel changes.
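The Expand/Contract pattern from the list above can be illustrated with a hypothetical field rename (the field names and the dict-as-store are invented for this sketch):

```python
# Hypothetical sketch of Expand/Contract (parallel change): renaming a
# field from "username" to "login" in small, safe steps instead of a
# big-bang change. A dict stands in for the real data store.

# Expand: write both fields; read the new one with a fallback to the old.
def save_user(store: dict, login: str) -> None:
    store["login"] = login
    store["username"] = login  # kept temporarily for old readers

def get_login(store: dict) -> str:
    return store.get("login", store.get("username", ""))

# Contract (a later, separate small step): once every reader and writer
# uses "login", delete the "username" write and the fallback above.
```

Each phase is a deployable step with green tests, so the migration never requires freezing the codebase or coordinating one risky cutover.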

It’s important to note that beyond the tactics you use to guide the design in small steps, it’s essential to develop a sense of design within the team. None of these practices alone teach object-oriented design. Therefore, the team must not only learn to make incremental design changes but also acquire a deep understanding of object-oriented design principles.

Differentiated Evolutionary Design

In general, in my teams, we always try to work in small steps, focusing on what we need at the moment and letting new needs guide changes in architecture and design. At the same time, we recognize that the ease of evolution and the friction generated by change depend heavily on the type of code being affected. We know that modifying code that implements business rules, an internal API between teams, or a customer-facing API are not the same in terms of friction.


Each of these cases involves varying degrees of friction to change (i.e., different levels of ease of evolution). Therefore, we apply a differentiated evolutionary design approach based on the type of code.  

For code with higher friction to change, such as a customer-facing API, we dedicate more time to a robust design that allows for evolution without requiring frequent changes. Conversely, for internal business logic code that is only used in specific cases, we adopt a more flexible evolutionary approach, allowing the design to emerge naturally from the development process.



Other Tactics and Practices

Of course, these are not the only tactics and practices to consider, but I do believe they are the ones that help us the most. Here are some additional tips and heuristics that, while not full-fledged practices in themselves, contribute to decision-making and generally make it easier to work in small steps and postpone decisions as much as possible:  

  • Prioritize libraries over frameworks to avoid locking in options and maintain greater flexibility.  
  • Focus on making code usable (and understandable) rather than reusable, unless your business is selling libraries or components to other developers.  
  • Use solid, “boring” technology that is widely accepted by the community.  
  • Create thin wrappers over external components/libraries to clearly define which parts of a component are being used and to facilitate testing. You can learn more about this approach at https://www.eferro.net/2023/04/thin-infrastructure-wrappers.html.  
  • Separate infrastructure from business code through Ports and Adapters or another architecture that clearly differentiates them.  
  • Apply evolutionary architecture, starting with a minimal architecture and adapting it to business needs, postponing hard-to-reverse decisions as much as possible.
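As an example of the thin-wrapper tip above, here is a hypothetical wrapper over the standard library's `smtplib` (the `EmailNotifier`/`FakeNotifier` names and interface are invented for this sketch). The point is that business code depends on our small surface, not on the library's full API, which makes both swapping the dependency and testing easy:

```python
import smtplib
from email.message import EmailMessage

class EmailNotifier:
    """Thin wrapper: exposes only what we actually use from smtplib."""

    def __init__(self, host: str, port: int = 25):
        self._host, self._port = host, port

    def notify(self, to: str, subject: str, body: str) -> None:
        msg = EmailMessage()
        msg["To"], msg["Subject"] = to, subject
        msg.set_content(body)
        with smtplib.SMTP(self._host, self._port) as smtp:
            smtp.send_message(msg)

class FakeNotifier:
    """Test double honoring the same narrow interface."""

    def __init__(self):
        self.sent = []

    def notify(self, to: str, subject: str, body: str) -> None:
        self.sent.append((to, subject))
```

Business code receives a notifier through its constructor and never imports `smtplib` directly; in tests, `FakeNotifier` records what would have been sent.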

Conclusions

In software development, the key lies in adopting a conscious approach to our decisions, working in small, low-risk steps, and focusing solely on what we need now. Simplicity and clarity must be priorities to maximize efficiency and minimize waste.  

The practices of eXtreme Programming (XP), together with the principles of Lean Software Development, provide us with a clear guide to avoid waste and over-engineering. Understanding that we cannot predict the future with certainty, we focus on building systems that are easy to understand and evolve, avoiding unnecessary complexity. Working this way means steering clear of oversized or highly configurable solutions, which often become obstacles to system evolution.  

Ultimately, it’s about being humble: acknowledging that we don’t have all the answers and that the only way to find the right solution is through experimentation and continuous learning. In short, simplicity, agility, and responsiveness are fundamental to developing software effectively in an ever-changing environment.  

If I had to choose the techniques and practices that have the greatest impact on my teams for working in small, safe steps and postponing decisions, I would say they are:  
  • Vertical slicing
  • Continuous Integration / Trunk-Based Development  
  • TDD (Test-Driven Development)  
All with a constant focus on simplicity.  

Each of the practices and tactics mentioned in this article is broad and could be explored in greater depth. I would love to know if there is interest in delving into any of them further, as it would be enriching to explore in greater detail those that are most useful or intriguing to readers.  

References