Tuesday, December 31, 2024

Unraveling My Thought Journey: Discoveries from a Multi-Year Twitter Log via LLM

These days, it's common to reflect on the past year, but instead of doing that, I decided to try something different: an experiment. As I'm transitioning from Twitter (yes, I don’t think I’ll ever call it X) to Bluesky, I took the opportunity to back up all my tweets since I joined Twitter in 2007. My goal? To see whether I could uncover anything interesting in them using a large language model (LLM).

After a bit of cleanup, I organized my tweets into two files: one in chronological order and another sorted by engagement (likes + retweets). I then uploaded both files to Google's NotebookLM and started testing various prompts to see what insights I could extract.

I hypothesized that, with tweets going back to 2007, it should be possible to trace the evolution of my thinking. To my delight, the results exceeded my expectations.

I’ll share the outcome of the experiment here:

My Thinking Evolution Based on My Tweets

Here is a summary of my thinking evolution, as reflected in my tweets:

Key Themes:

  • Technical Exploration: I am passionate about technology and have always enjoyed exploring new tools and techniques. My early tweets reflect this, as I often share links to interesting resources and discuss my experiences with various programming languages and technologies.
  • Python Enthusiast: I have a particular fondness for Python, which has been a constant throughout my tweets. I frequently share Python-related content and participate in Python communities.
  • Shift Towards Agile and Lean: Over time, my focus has expanded beyond technical skills to encompass broader software development principles. I have become increasingly interested in Agile methodologies, DevOps practices, and Lean Software Development. This is evident in my tweets, as I often share content related to these topics and discuss my experiences applying them.
  • Focus on Software Craftsmanship: I value writing high-quality, maintainable code. My tweets often reflect my interest in topics like TDD, refactoring, and clean code. I believe in continuous improvement and strive to become a better software developer.
  • Lifelong Learning: I am a firm believer in lifelong learning and am always eager to expand my knowledge. My tweets often feature links to interesting talks, podcasts, and books that I have found valuable.
  • Social and Personal Insights: While technology is a major focus, my tweets also offer glimpses into my personal life and social views. I occasionally comment on current events, share personal anecdotes, and reflect on work-life balance.


My Thinking Evolution:

  • Early Exploration (2009-2010): In the early days of my Twitter journey, my focus was primarily on exploring various technologies and tools. I was experimenting with different programming languages and approaches, seeking efficient solutions to technical challenges. My tweets from this period reveal a hands-on, problem-solving mindset.
  • Embracing Agile (2011-2013): As I gained more experience, I began to recognize the limitations of purely technical approaches to software development. I became increasingly drawn to Agile methodologies, seeing their potential to foster collaboration, iterative development, and value-driven delivery. My tweets from this period reflect my growing interest in Scrum, TDD, and related practices.
  • Expanding to DevOps and Lean (2014-Present): My understanding of software development continued to evolve, encompassing DevOps practices and Lean Software Development principles. I became fascinated by the idea of streamlining the entire software development lifecycle, from coding to deployment and operations. My tweets from this period show my enthusiasm for topics like continuous delivery, automation, flow, and minimizing waste.
  • Focus on Outcomes and Impact (Recent Years): In recent years, my perspective has matured to prioritize the outcomes and impact of software development over simply churning out features. I am more conscious of the need to deliver real value to users and businesses, minimize waste, and focus on building the right things. This shift in thinking is apparent in my tweets, which often emphasize the importance of user-centric design, data-driven decision-making, and measuring the impact of our work.


Overall, my thinking evolution, as documented in my tweets, demonstrates a journey from a technically focused programmer to a more well-rounded software development professional with a strong grasp of Agile, DevOps, and Lean principles. I am passionate about continuous learning, striving for excellence in my craft, and ultimately, delivering valuable and impactful software.



Monday, December 30, 2024

Conversations on Development, Product, and Impact

I want to share with you a brief conversation in question-and-answer format, in which I review my professional career and reflect on the importance of reducing the Basal Cost of software. These notes, prepared before recording the podcast "Tenemos que hablar de producto", summarize my view of the industry and how I try to create a positive impact both on the business and on users' lives. I hope you find something valuable in these reflections, whether you are a seasoned developer or someone just starting out with a focus on product.

1. Professional Career

Q: How did you start your career in technology, and what initially drew you to software development?

A: My first steps were in the mid-80s with a ZX Spectrum computer. From the very beginning I was fascinated by the idea of being able to create things through a programming language. I decided to dedicate myself to it because I am very curious and, I believe, also quite creative; to me, computers were pure magic. Over time, I made career decisions focused on learning and understanding how things really work: Linux, object-oriented programming, startups, the cloud, technical and organizational scalability, among other topics.

Q: What would you say has been the most important lesson you have learned from facing challenges in your career?

A: That the hardest part is always people. I have also learned that the biggest waste in software development is building what is not needed. And, even worse, not daring to remove it later out of fear or inertia. This negatively impacts both the quality of the product and the work environment.


2. The Role of the Developer in Product Companies

Q: In your opinion, what is the essential role of a developer in a product company?

A: I believe the essential role of a developer is to solve problems or deliver value to the user in a way that also benefits the business. This means doing it with the least amount of software and effort possible, working in small increments and seeking constant feedback. Ideally, the developer also gets involved in identifying problems and opportunities.

Q: Besides technical skills, what does a developer need to truly add value in a product environment?

A: What we usually call *soft skills* are crucial. Very briefly:

  • Collaboration: knowing how to communicate, empathize, and understand both customers and colleagues.
  • Continuous learning: understanding the business, proposing better solutions, and adapting to new teams and technologies.

Q: How do you see the importance of practices like DevOps and Continuous Delivery in building scalable and sustainable products?

A: They are essential for working in small increments and getting constant feedback. The central purpose of DevOps and Extreme Programming (XP) is to make Continuous Delivery efficient and viable. This allows us to experiment, validate ideas, and adapt quickly, following the principles of Lean Software Development and Lean Startup, popularized by people like Mary Poppendieck and Eric Ries.

3. The Basal Cost of Software

Q: Could you briefly explain the concept of the "Basal Cost" of software and how it affects a team's ability to innovate?

A: The Basal Cost of software is the ongoing cost that a piece of software generates simply by existing. This includes maintenance, the complexity it adds to the system, and the cognitive load it puts on the team. Many people compare software to constructing a building that then remains unchanged, but software is more like a garden: it grows, changes, and needs constant care. Keeping irrelevant functionality becomes a burden that limits the team's ability to innovate.

Q: What key practices do you recommend to minimize the basal cost in a long-term software project?

A:

  • Apply Lean Software Development and Lean Product Development principles, focusing on maximum impact with the smallest possible solution (less code and effort).
  • Adopt Extreme Programming (XP) technical practices, such as Outside-in TDD, to write only the necessary code and ensure high quality in the face of future changes.

Q: How do you think the basal cost influences decisions about maintaining or removing features?

A: It should have a significant impact, but it is often overlooked. Companies tend to avoid removing old functionality out of fear or inertia, even when it no longer adds value. A conscious product approach periodically evaluates each feature to justify its existence. If something generates neither return nor learning, it is better to retire it and reduce the load on the team and the system.

4. Considerations for Business (CEO and Product) When Communicating with Technology

Q: What is the most important thing a CEO or product leader should understand about software development in order to communicate better with technical teams?

A:

  1. Understand value holistically: it is not only about increasing returns, but also about protecting them and avoiding unnecessary costs.
  2. Think of software as something alive that evolves continuously, not as a building that is constructed and then stays static.
  3. Recognize the Basal Cost of software and how to manage it strategically.
  4. Value the technical team as a key ally in product decisions, especially when developers adopt a Product Engineer mindset.

Q: What metrics or indicators would you recommend business and technology review together to ensure development that is sustainable and aligned with the objectives?

A:

  • Lean metrics such as Lead time and Cycle time, plus the amount of rework (the extra effort required due to errors, problems, or poor decisions).
  • The time from the generation of an idea until the first real feedback is obtained.
  • Business metrics that are understandable and accessible to the technical team.
  • DORA metrics to assess the health of the engineering process: deployment frequency, time to recover from failures, change failure rate, etc.


Conclusions and Final Advice

For business leaders: my recommendation is to learn Lean (Lean Startup, Lean Product Development, and Lean Software Development) and adopt its principles. This will allow them to be more efficient and sustainable when pursuing value and managing teams.

For developers: always remember that technology is a means, not an end. The focus should be on the impact we create for the business, but in a way that is sustainable over the long term: Build the right thing, and build the thing right.


Finally, for everyone: the hard part is always people and collaboration. Let's invest in improving this, because it is what really makes the difference. Let's make it count!


Wednesday, December 25, 2024

Cloud Mindset: A Different Way of Thinking (Tech Pill)

When building software systems in the cloud, we must adopt a different perspective compared to architecting for on-premises data centers or legacy VPS environments. In traditional setups, capacity is fixed, requiring months of lead time to scale, which makes resources both expensive and scarce. The cloud, however, flips these limitations on their head by offering abundant, rapidly provisioned resources—reshaping how we think about infrastructure and application design.

Traditional Infrastructure: Limitations of the Past

  • Fixed capacity: Scaling up in on-premises environments or VPS setups can take months because it involves purchasing and installing new hardware.
  • Scarce resources: Businesses often invest in the bare minimum of hardware to minimize costs, leaving little room for flexibility.
  • High costs: Upfront hardware purchases and ongoing maintenance are expensive, with costs that must be amortized over time.


The Cloud Paradigm: A New Frontier

In the cloud, servers, storage, and databases feel virtually limitless. These resources can be spun up or down in minutes, allowing teams to adapt quickly to changing needs. This flexibility is both cost-effective and efficient. However, to fully leverage these benefits, we need to shift both our mindset and engineering practices.

Key Principles of the Cloud Mindset

1. Treat Resources as Disposable (Owning vs. Renting a Fleet of Cars)

In traditional IT environments, servers are treated like personally owned cars—carefully maintained, upgraded over time, and expected to last for years. In the cloud, the mindset shifts: servers resemble a fleet of rental cars—standardized, easy to replace, and requiring minimal upkeep. This approach highlights the importance of automation and uniform configurations. When a server or infrastructure component fails, it shouldn’t be manually repaired. Instead, it should be automatically replaced.

Recommended Reading: The History of Pets vs. Cattle and How to Use the Analogy Properly (In cloud architecture, servers are treated as commodities, often explained with the “cattle, not pets” analogy.)

2. Design for Failure

Failures are inevitable in cloud platforms, which run on commodity hardware distributed across multiple data centers and regions. Instead of trying to prevent every failure, embrace them by designing for resilience. Use redundancies, fault tolerance, and graceful degradation to ensure your application continues to operate when something breaks.

Key takeaway: Assume failure will happen and architect your system to recover quickly and minimize impact.
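
As a rough sketch of what "design for failure" can look like in application code, here is a small Python example with hypothetical service and function names: it retries a flaky dependency with backoff and then degrades gracefully instead of failing the whole request.

```python
import random
import time

def fetch_recommendations(user_id: str) -> list[str]:
    """Hypothetical call to a dependency that can fail at any moment."""
    if random.random() < 0.3:  # simulate an intermittent failure
        raise ConnectionError("recommendation service unavailable")
    return [f"item-{n}" for n in range(3)]

def recommendations_with_fallback(user_id: str, max_attempts: int = 3) -> list[str]:
    """Retry briefly, then degrade gracefully to a generic result."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch_recommendations(user_id)
        except ConnectionError:
            time.sleep(0.1 * 2 ** attempt)  # exponential backoff between attempts
    # Graceful degradation: the page still renders, just with default content.
    return ["bestseller-1", "bestseller-2"]

print(recommendations_with_fallback("user-42"))
```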


3. Define Infrastructure as Code (IaC)

Infrastructure as Code (IaC) tools, like Terraform or AWS CloudFormation, let you define and version-control your infrastructure. This approach makes provisioning fast, consistent, and repeatable. With IaC, you can test, review, and iterate on infrastructure changes the same way you do with application code.

Learn More: Immutable Infrastructure (Tech Pill)
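
To make this concrete, here is a minimal, hypothetical sketch using the AWS CDK for Python (one IaC option alongside Terraform and CloudFormation); the stack and bucket names are invented. The point is simply that the infrastructure is declared in code that can be reviewed, versioned, and tested like application code.

```python
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataPlatformStack(Stack):
    """Hypothetical stack: a single versioned bucket, declared as code."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "RawEventsBucket",
            versioned=True,                       # keep object history
            removal_policy=RemovalPolicy.RETAIN,  # protect data if the stack is deleted
        )

app = App()
DataPlatformStack(app, "data-platform")
app.synth()  # emits a CloudFormation template; deployed with `cdk deploy`
```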

4. Take Advantage of Cloud Elasticity

Elastic scalability is one of the cloud’s biggest advantages. Instead of over-provisioning for occasional traffic spikes, you can scale up during peak loads and scale down when demand decreases. To do this effectively, design your applications for horizontal scaling—adding more instances rather than making existing ones bigger.
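
As an illustration of the horizontal-scaling idea, the sketch below (with made-up numbers and names) computes how many identical instances to run for a target utilization, roughly the proportional rule used by target-tracking autoscalers such as the Kubernetes HPA. In practice you would delegate this to your platform's autoscaler rather than hand-roll it.

```python
import math

def desired_instances(current_instances: int, current_utilization: float,
                      target_utilization: float = 0.6,
                      min_instances: int = 2, max_instances: int = 20) -> int:
    """Scale out/in horizontally to keep average utilization near the target."""
    desired = math.ceil(current_instances * current_utilization / target_utilization)
    return max(min_instances, min(max_instances, desired))

# Traffic spike: 4 instances at 90% utilization -> scale out to 6.
print(desired_instances(4, 0.9))
# Quiet period: 6 instances at 15% utilization -> scale in to the floor of 2.
print(desired_instances(6, 0.15))
```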

5. Pay Per Use: Rent, Don’t Buy

The cloud’s on-demand pricing model means you only pay for what you use. This flexibility allows you to scale resources up or down based on demand, helping you adapt quickly to changing usage patterns. By spinning up resources during heavy loads and deprovisioning them when idle, you keep costs under control without compromising capacity.



The Bigger Picture

Understanding cloud services, APIs, and GUIs is just the tip of the iceberg when it comes to cloud adoption. The true transformation lies in embracing a fundamental shift in engineering culture and design philosophy. It’s about accepting new assumptions that define how we build and operate in the cloud:

  • Resources are limitless: Stop hoarding and start focusing on how to use resources effectively.
  • Failure is inevitable: Design for resilience from the outset instead of trying to avoid every possible failure.
  • Speed matters: Leverage automation, scripting, and repeatable processes to enable rapid experimentation and iteration.

A New Engineering Challenge

“Construct a highly agile and highly available service from ephemeral and assumed broken components.” Adrian Cockcroft

This challenge captures the essence of building in the cloud. Unlike traditional data centers, the cloud requires us to design for an environment where components are temporary and failure is expected. Adopting a true “cloud mindset” means rethinking old habits and fully leveraging the cloud’s unique characteristics to deliver robust, scalable, and cost-effective solutions.

Key Takeaways

In summary, building for the cloud means embracing four key principles:

  • Embrace disposability: Treat infrastructure as temporary and replaceable.
  • Design for failure: Build resilience into your system instead of trying to prevent every failure.
  • Automate everything: Use tools and processes that allow for speed and consistency.
  • Pay only for what you use: Take advantage of the cloud’s cost-efficiency by scaling dynamically.

By adopting these principles, you’ll create services that are highly available, scalable, and agile—perfectly aligned with the demands of modern business.

Related:

Thursday, December 19, 2024

Lean Software Development: Decide as Late as Possible

Translated from the original article in Spanish https://www.eferro.net/2024/06/lean-software-development-decidir-lo.html

Lean Software Development starts from the premise that software and digital product development is a constant learning exercise (see Amplify Learning). With this premise, it is clear that the more information we have before making a decision, the better its quality will be. Therefore, deciding as late as possible allows us to be more effective. At the same time, there is a cost (or risk) associated with not making a decision, which increases over time. For this reason, we should aim for the "last responsible moment", the optimal point for making a decision with the most information possible, without the cost of delay outweighing the potential benefit of obtaining more information by postponing it.


Advantages of Delaying Decisions and Keeping Options Open

Postponing decisions is a fundamental strategy in lean software development. Although it is not always easy and requires practice, this tactic enables the creation of sustainable and easy-to-evolve products. One of the main advantages of making decisions later is that it provides a better understanding of both the business and technology, which in turn facilitates more informed and accurate decision-making.

Additionally, delaying decisions leads to simpler and smaller solutions, reducing the effort required to implement them and avoiding unnecessary overhead. By keeping our options open, we focus only on what provides real value in the present, avoiding over-engineering and unnecessary work. This flexibility allows us to react quickly and effectively to any change without fear, which is crucial in the dynamic environment of software development.

Some specific advantages of delaying decisions include:

  • Less work and waste: Implementing only what is necessary at the moment reduces total work and waste.
  • Reduced effort in rework: If changes are needed, less effort is required because over-engineering has been avoided.
  • Greater flexibility and adaptability: Keeping options open enables us to adapt quickly to new requirements or changes in the environment.

A well-designed architecture allows delaying important decisions without compromising the product's quality or flexibility. This not only enables us to make better-informed decisions but also facilitates the creation of good architecture, which in turn allows the postponement of other critical decisions in the future. In short, this strategy allows us to move forward with less burden and greater agility, enhancing our ability to deliver continuous value.
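
One concrete way an architecture keeps a decision open is to hide it behind a small interface and start with the simplest implementation. The sketch below, with hypothetical names, postpones the choice of database: the team can deliver value today with an in-memory repository and swap in a real datastore at the last responsible moment.

```python
from typing import Optional, Protocol

class OrderRepository(Protocol):
    """Port: the only thing the domain code depends on."""
    def save(self, order_id: str, data: dict) -> None: ...
    def find(self, order_id: str) -> Optional[dict]: ...

class InMemoryOrderRepository:
    """Simplest thing that works today; the real storage decision is postponed."""
    def __init__(self) -> None:
        self._orders: dict[str, dict] = {}

    def save(self, order_id: str, data: dict) -> None:
        self._orders[order_id] = data

    def find(self, order_id: str) -> Optional[dict]:
        return self._orders.get(order_id)

def place_order(repo: OrderRepository, order_id: str, items: list[str]) -> None:
    # The domain logic only talks to the port, so changing the storage later
    # is a small, reversible decision instead of a big up-front one.
    repo.save(order_id, {"items": items, "status": "placed"})

repo = InMemoryOrderRepository()
place_order(repo, "o-1", ["book"])
print(repo.find("o-1"))
```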

Decision-Making

My teams are empowered, end-to-end teams that have full authority over how to solve problems and, in many cases, even over which problems to solve. In such teams, numerous decisions of all kinds are made daily—decisions about the problem, potential solutions, implementation of those solutions, and the prioritization of where to invest (e.g., solving an issue, reducing uncertainty about a technical or product decision, determining next increments, etc.).

For example, when starting a new product or feature, the level of uncertainty is usually very high. In these cases, we prioritize reducing that uncertainty by breaking down critical decisions and assumptions into smaller parts and investing in product increments to obtain feedback, thereby reducing the uncertainty.

As we receive feedback, we learn more and adjust our assumptions/hypotheses accordingly. This iterative process helps us move from great uncertainty to greater clarity and confidence.

By continuously validating assumptions, we reduce risks and make more informed decisions.



These decisions occur continuously at all levels. It is part of the nature of our profession.

If we had to classify the types of decisions to be made, we could do so as follows:

  • Decisions about the problem and opportunity to explore
    • What: Where we should invest and why.
    • How: What solution or implementation strategy we believe is appropriate for the selected case.
  • Decisions about technology and implementation
    • At the architecture and technologies level.
    • At the design level of the solution at various levels.

It is important to note that decisions are never independent; they must always consider the context and knowledge of the team, as well as the existing system or product. In effect, we can see problems and opportunities as the gap, in knowledge or in implementation, between what we know or have today and what we want to achieve.

In other words, it is always about making small increments in the direction and with the objective we need. Sometimes, it is about delivering value to the user; other times, it is about reducing the uncertainty of a decision we need to make. Always, it is an increment in the software (product/system) or in the knowledge the team has.



Breaking Down Decisions to Address Them Individually

One of the main ways to increase our chances of postponing decisions is to break down large decisions into small decisions and clearly separate those that are easily reversible from those that are irreversible (or very difficult to reverse).

Therefore, working to decide as late as possible involves doing so in very small steps or increments, which is fully aligned with the Lean mindset (working in small batches).

This process of breaking down is a continuous practice that allows the team to address everything from where to invest in the next quarter to what to do in the next few hours. We should see it as a systematic way of thinking that is recurrently applied at all levels.

Conclusions

Deciding as late as possible is a key strategy in Lean Software Development that maximizes the quality of decisions by leveraging the most information available. This practice contributes to creating more sustainable, flexible, and adaptable products.

The main advantages are:

  • Reduction of waste: Focusing only on what is necessary minimizes redundant work and avoids over-engineering.
  • Greater flexibility: Keeping options open allows for rapid adaptation to environmental or requirement changes.
  • More informed decisions: Postponing decisions until the last responsible moment results in more accurate and effective decisions.
  • Increased adaptability: Facilitates the implementation of simple, small solutions, reducing the effort needed for changes and improvements.

It is essential that empowered teams adopt this mindset and become accustomed to breaking decisions into manageable steps. By doing so, they can incrementally reduce uncertainty, continuously validate their assumptions, and adjust strategies based on received feedback. This iterative approach reduces risks and strengthens the team's ability to deliver continuous value.

In upcoming articles, we will explore strategies and concrete examples of how to effectively postpone decisions, providing practical tools to implement this philosophy in your software development projects.

Related Resources

Sunday, December 15, 2024

Good talks/podcasts (Dec 2024 I)

 


These are the best podcasts/talks I've seen/listened to recently:
  • Adam Ralph - Finding your service boundaries — a practical guide - SCBCN 24 (Adam Ralph) [Architecture, Architecture patterns, Microservices] [Duration: 00:48] (⭐⭐⭐⭐⭐) This presentation is about identifying service boundaries in software architecture to avoid coupling and ending up with a "big ball of mud", even when using microservices. I recommend this talk because it provides practical advice on how to define services as technical authorities for specific business capabilities, leading to more maintainable and scalable systems.
  • AWS re:Invent 2024 - Dr. Werner Vogels Keynote (Werner Vogels) [Architecture, Engineering Culture, simplicity] [Duration: 01:50] This presentation explores the concept of "simplexity" - building and operating complex systems safely and simply, using lessons learned from 20 years of evolution at Amazon Web Services (AWS). The speaker emphasizes the importance of designing evolvable systems from the beginning and outlines six key lessons for managing complexity, including breaking down systems into smaller units, aligning organizations to architecture, and automating tasks that don't require human judgment. Numerous examples from AWS, such as the evolution of Amazon S3, CloudWatch, and Route 53, illustrate the practical application of these principles.
  • How to Deliver Quality Software Against All Odds GOTO 2024 (Dan North) [Agile, Continuous Delivery, Engineering Culture, XP] [Duration: 00:52] (⭐⭐⭐⭐⭐) This podcast features Daniel Terhorst-North, a prominent figure in the software development world, reflecting on 20 years of industry changes and sharing his insights on topics ranging from Agile and DevOps to product management and organizational flow. Drawing on his experiences at Thoughtworks and beyond, Terhorst-North highlights the importance of connecting business needs with technical implementation and emphasizes the value of building evolvable systems with "simplexity" in mind.
  • Microservices Retrospective – What We Learned (and Didn’t Learn) from Netflix (Adrian Cockcroft) [Architecture, Architecture patterns, Cloud, Microservices] [Duration: 00:55] This presentation offers a retrospective analysis of the speaker's experience implementing microservices at Netflix from 2007-2013, examining both the successes and the lessons learned along the way. The speaker discusses key aspects of Netflix's innovative approach, including their "extreme and agile" culture, early adoption of cloud technologies like AWS and Cassandra, and focus on developer freedom and responsibility. The presentation also highlights specific technical patterns and practices developed at Netflix, such as the use of service access libraries, lightweight serializable objects, and chaos engineering.
  • Team Topologies, Software Architecture & Complexity • James Lewis • GOTO 2022 (James Lewis) [Architecture, Engineering Culture, Microservices, team topologies] [Duration: 00:38] This presentation explores the intersection of team topologies, software architecture, and complexity science, arguing that successful organizational design and software development hinges on optimizing for flow and value delivery. The speaker, drawing on his experience with the evolution of microservices, advocates for embracing decentralization, limiting hierarchy, and leveraging social network structures to foster innovation and agility in growing organizations.
Reminder: All of these talks are interesting, even just listening to them.

Related:

Thursday, December 12, 2024

Lean Software Development: Amplify Learning

Translated from the original article in Spanish https://www.eferro.net/2024/05/amplificar-el-aprendizaje.html

In this new article in the series on Lean Software Development, we will focus on the nature of software development and the importance of optimizing our process for continuous learning.

The Nature of Software Development

Regardless of the classic mistake we often make in the industry by comparing software development to building construction or manufacturing processes (see Construction engineering is NOT a good metaphor for software), the truth is that developing a software product is a process of adapting the solution to the user’s changing needs (or to our hypotheses about those needs). That is, it is the process of continuously evolving the software to adapt to those needs.

This evolution is continuous not only because our client’s needs evolve (strategy, business rules, processes) but also because the environment in which we operate changes constantly (SaaS, competitors, AI, evolution of devices). This evolution is part of the intrinsic nature of software since its great advantage is precisely the speed with which it can evolve and adapt. Software that is not evolving is dead software (either because no one is using it or because it has become obsolete).

design by http://www.freepik.com/

Unlike production and manufacturing processes, where quality is conformity to established requirements, quality in software is meeting the needs of our clients. These needs change continuously, so the software must change in turn.

Therefore, we can see that at the heart of the nature of software development lies continuous and profound learning about our clients' (changing) problem or need and about the suitability of our solution to solve that problem or need.

Lean Software Development recognizes this nature of software development and considers it necessary to amplify learning to optimize the product development process.

"A mature organization focuses on learning effectively and empowers the people who do the work to make decisions." Mary Poppendieck

Amplify Learning

Learning is the cornerstone of effective software development. In the context of Lean Software Development, this principle is elevated as a fundamental guide for continuous improvement. Recognizing that knowledge is dynamic and that learning is an ongoing process is crucial for progress in an agile development environment.

However, this learning cannot be limited to specific people or roles; it must extend to the entire team, as we want the whole team to contribute, and we have already seen that learning is part of the nature of software development.

Amplifying learning involves not only understanding what the client wants but also discerning what they do not need. This discernment is critical, as building unnecessary features or functionalities is the biggest waste in software development. Therefore, the learning process must focus on the constant clarification of the client's real needs, avoiding waste and optimizing value delivery.

"The biggest cause of failure in software-intensive systems is not technical failure; it’s building the wrong thing." Mary Poppendieck

In summary, Lean Software Development recommends:

  • Recognizing continuous and constant learning as the fundamental bottleneck in software product development.
  • Learning from the client’s needs and problems, also identifying what they do not need.
  • Optimizing continuous learning by the entire team.

Lean Software Development suggests the following tools to enhance learning:

  • Feedback loops
  • Iterations
  • Synchronization and integration
  • Set-Based Development

Grounding the Amplification of Learning

In my 15 years of experience working with various teams, I have always considered learning a fundamental part of the nature of software development. My approach has been to foster the continuous and amplified learning promoted by Lean Software Development.

Below, I outline the tools I have used to amplify learning:

Empowered Product Teams

These teams, based on business needs and strategies, have the ability to decide which problem to solve and how to solve it. They are teams with a true product mindset, composed of Product Engineers who are not only interested in the client’s problem or need but also seek to understand it deeply and propose the best solutions.

As John Cutler aptly describes, these are known as Product Teams.

https://amplitude.com/blog/journey-to-product-teams-infographic

These Product Teams are responsible for understanding and learning about the client’s problems, using that learning to propose the best solutions. In these teams, learning is key, and they employ product discovery practices. In our specific case, team members take turns conducting user interviews, facilitating co-creation sessions, or providing support. All these sessions provide us with insights that are shared with the rest of the team, enabling us to make decisions about the next steps.

Although I am aware of more advanced techniques than those I’ve mentioned for conducting product discovery, I’ve never put them into practice. We’ve been able to make a significant impact and achieve valuable insights without using sophisticated product discovery practices, thanks to the types of products we’ve been involved with (less visual products, sometimes internal, or aimed at technical profiles).

Feedback Loops

We use Extreme Programming (XP) as a foundation, focusing on creating the smallest and most frequent feedback loops to optimize the development process:

  • Constant communication: At the mob or pair level (seconds).
  • Test-Driven Development (TDD): Short test and development cycles (minutes).
  • Acceptance tests: Rapid evaluation of functionalities (minutes).
  • Frequent deployments: Regular implementation of improvements (hours).
  • Daily planning: Reviewing and adjusting daily objectives (1 day).
http://www.extremeprogramming.org/introduction.html

Additionally, we complement these XP feedback loops with Continuous Deployment (CD) techniques, enhancing our ability to integrate and validate changes almost instantly.

"Extreme Programming is a team style of software development... enabling that by increasing feedback loops at every possible level... so if you get great feedback then you don't have to make good decisions because you can afford to just make a decision and you'll find out." Kent Beck

These feedback loops also occur through client feedback, where we can assess whether our hypotheses about client needs and the proposed solution are achieving the desired impact.

Iterations

Although we have moved away from the traditional approach of fixed iterations, we continue to optimize our continuous delivery flow. We frequently iterate on different parts of the product, always focusing on the most critical areas at any given time. For us, no feature or functionality is ever definitively complete; everything is constantly evolving, and it is common to revisit previously developed elements as needed. (See https://productdeveloper.net/iterative-incremental-development/)

Synchronization and Integration

We use Continuous Integration and Continuous Delivery practices with a Trunk-Based Development approach. This ensures that the entire team maintains a complete and integrated view of the product daily, avoiding isolated code branches that persist for days or weeks. There is only one active version on the main branch, which minimizes waste, prevents conflicts, and ensures a shared vision.

This way of working allows the entire team to share the same vision of the code (the solution) at all times and prevents an individual or part of the team from isolating themselves with a different vision (and knowledge) for days while working on a functionality branch.

Set-Based Development

In Lean Software Development, the term Set-Based Development designates a methodology that prioritizes keeping multiple design options open throughout the development process. This approach enables the collection of as much information as possible, not only about a specific design element but also about how all elements integrate. Contrary to the tradition of making early design decisions based on the information available at that moment, set-based development favors deferring design decisions until more complete information is available, which is crucial for the development of high-quality, flexible software.

This method is based on the reality that, in software development, interactions between different components cannot be predicted with complete certainty until they are implemented and tested. Therefore, keeping design options open and avoiding definitive decisions until more data is available results in a more effective approach for managing complex, high-quality software projects.

In my teams, the practice of keeping design options open and postponing decisions until the maximum information is obtained is an obsession. I have even created a specific workshop to practice this methodology (see Lean Workshop: Postponing Decisions). The key to working with open options lies in emphasizing simplicity (XP), fostering evolutionary design, and proceeding in very small steps that allow us to make decisions at the last responsible moment. I will delve further into this topic in a full article dedicated to it in the series.

“Simplicity, or the art of maximizing the amount of work not done, is essential.”  Principle from the Agile Manifesto for Software Development

Other Useful Techniques to Amplify Learning

In our day-to-day work, we apply strategies like mob programming or pair programming with frequent pairing rotations. This facilitates the rapid dissemination of knowledge within the team and prevents the formation of information silos.

We also regularly use Extreme Programming Spikes, which are timeboxed tasks dedicated exclusively to exploring and learning about specific aspects we need to master, such as a new technique, library, or technology.

Another technique that has always worked for me to improve and amplify learning is introducing blameless postmortems, applied both to production incidents and to retrospective reviews of initiatives. 

Conclusions

In summary, our approach prioritizes learning as a fundamental element, working with frequent feedback cycles, and making decisions based on what we continuously learn. This approach helps us adapt quickly and constantly improve our effectiveness and efficiency in product development.


References and Related Links

Sunday, December 08, 2024

Using Blameless Incident Management to Change Team Culture

I've always worked at product companies, creating or scaling teams. In these product companies, we work remotely, at least partially. In my experience, introducing Agile culture to a technical team means introducing DevOps and Agile software development. However, I see Agile culture as more than just tools and processes; it is a culture of collaboration, continuous improvement, continuous learning, a focus on technical excellence, and transparency. Let's explore how we can manage incidents in a way that aligns with this Agile culture.

Production Incidents

We refer to "production incidents" as anything affecting our clients or that we suspect might affect them. These incidents can include things like machine failures, unexpected metrics, or a client reporting an error.

High-Performing Teams

Let's take a quick look at what makes high-performing teams so effective. Google's research on high-performing teams shows that individual talent is not the most important factor. Instead, the key to high performance is the quality of the interactions within a team. The most important factor for high-performing teams is psychological safety. Team members need to feel safe taking risks without fear of ridicule or failure. This psychological safety is essential for fostering a culture of learning and improvement.


Traditional Incident Management vs. Agile Incident Management and Psychological Safety

Unfortunately, the traditional approach to incident management often lacks this crucial element of psychological safety. Incident management often falls solely on the operations team, creating a siloed and stressful environment. Under pressure, teams may resort to blame and scapegoating, leading to a culture of fear and hiding problems. This approach is not conducive to learning and improvement and can ultimately lead to recurring issues. Instead of resorting to blame, we can adopt an Agile approach to incident management, focusing on collaboration, learning, and continuous improvement. This approach reduces fear, avoids a hero culture, and encourages transparency.

In my experience at TheMotion, Nextail, and ClarityAI, introducing blameless incident management practices has served as a lever to shift the team's culture towards one of continuous learning. It has helped us overcome the fear of making problems visible, fostered collaboration, and empowered us to address issues at their root causes. This resonates with one of the core principles of Agile incident management: "Hard on systems. Soft on people." We prioritize understanding how the system contributed to the error, rather than pointing fingers at individuals. This creates a safer space for open communication and learning.

The impact of this cultural shift at TheMotion was so significant that team members who moved to other companies have begun implementing these ideas in their new teams.

Here's how our process works:

  • Stay Calm and Don't Panic: When an incident occurs, it's important to stay calm, and our process is designed to help with that. When we interview developer candidates, we ask them about a time they made a mistake in production; we evaluate not only their technical skills but also their ability to remain calm under pressure. This helps ensure that our team can handle incidents effectively without succumbing to fear or stress.
  • Assign an Incident Commander: We automatically assign an incident commander to take charge of the situation. The incident commander's responsibilities include:
    • Creating a "War Room" for collaboration.
    • Creating a blameless incident report to document the incident and focus on learning.
    • Notifying the appropriate stakeholders about the incident.
    • Recruiting and coordinating a team to resolve the incident.
  • Focus on Service Recovery: The team's primary goal is to recover the service as quickly as possible. This might involve implementing a temporary fix, disabling a functionality, or communicating with clients. The key is to stabilize the system and minimize the impact on users.
  • Investigate the Root Cause: Once the service is restored, the team investigates the root cause of the incident. This investigation follows a process of:
    • Hypothesis generation.
    • Validation.
    • Documentation.
    • Repetition.
  • Define Corrective and Preventive Actions: Based on the investigation, the team defines corrective and preventive actions to reduce the mean time to recovery (MTTR) and the blast radius of future incidents. These actions aim to improve the system's resilience and prevent similar incidents from happening again. Treating them as high priority keeps the team motivated and demonstrates that improving the system matters to the entire company.
  • Integrate Actions into the Workflow: The corrective and preventive actions are then prioritized and integrated into our normal workflow, ensuring they are addressed promptly.

The Importance of Blameless Incident Reports

Throughout the entire process, we maintain a blameless approach, emphasizing learning and improvement over assigning blame. We use blameless incident reports, which are:

  • Collaborative: The incident commander creates a shared Google Doc where everyone involved can contribute in real-time.
  • Transparent: We make incident reports public to the entire company as soon as we start detecting an issue. This transparency fosters trust and allows anyone to stay informed about the incident's progress.
  • Detailed: Our incident report template includes a summary of the incident, a timeline, root causes, corrective and preventive actions, and lessons learned.

Facilitating Change

To successfully introduce this approach, it's essential to:

  • Focus on Systems and Habits: Instead of blaming individuals, we concentrate on improving our systems, processes, and habits to prevent future incidents.
  • Lead by Example: By actively participating in the process and demonstrating a blameless approach, we can encourage others to adopt this mindset.
  • Show Vulnerability: Leaders should be willing to admit their mistakes (I have a few 😅) and share their experiences, creating a safe space for others to do the same.
  • Prioritize Improvement: It's crucial to ensure corrective and preventive actions are prioritized and not overshadowed by other business priorities.
  • Reinforce Learnings: We should highlight key learnings from incident reports and share them with the team to promote continuous learning.

Benefits of Agile Incident Management

Embracing Agile incident management can lead to numerous benefits, including:

  • Increased Trust: Transparency and collaboration build trust among team members and between the team and the rest of the company.
  • Enhanced Psychological Safety: A blameless approach creates a psychologically safe environment where people feel comfortable taking risks and learning from mistakes.
  • Improved Resilience: By systematically addressing incidents, we can continually improve our systems and make them more resilient.
  • Focus on Continuous Improvement: Incident management becomes an integral part of our continuous improvement process, leading to a more robust and reliable system.
  • Greater Transparency: Open communication about incidents and their resolution fosters a culture of transparency and accountability.
  • Enhanced Professionalism: Our commitment to learning and improvement demonstrates professionalism to our clients and stakeholders.

Conclusion

By adopting an Agile approach to incident management, we can transform our team's culture and create a more resilient and reliable system. By focusing on collaboration, learning, and continuous improvement, we can turn incidents into valuable opportunities for growth and development. Remember, incidents are inevitable, but how we respond to them is what truly matters. Let's embrace a culture of learning and create a system that can withstand the inevitable challenges of production.

Notes


Friday, December 06, 2024

Focus on Verbs, Not Nouns: A Strategy for Better System Design

In my experience, the key to deeply understanding a system or product lies in focusing on behaviors—the actions, flows, and events that drive its operation. Prioritize identifying verbs over nouns. Here’s why this approach works and how it can transform your design process.

Start with Behaviors

Shift your analysis from “What is this thing?” to “What does it do?”. Focus your research and conversations on:

  • Identifying actions and business flows.
  • Understanding dependencies and concurrency.
    • What depends on what?
    • Which actions can happen in parallel?
    • What triggers or informs each behavior?

When you analyze behaviors, you uncover the dynamic interactions users have with your system. This focus naturally aligns with designing systems as collections of small, independent pieces that encapsulate state and communicate through messages—perfect for paradigms like OOP, Actors, and Microservices.
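
To make the contrast tangible, here is a small, hypothetical Python sketch: the noun-first version is just a bag of fields that other code mutates, while the verb-first version names the behaviors of the domain, protects its state behind them, and naturally surfaces domain events.

```python
from dataclasses import dataclass, field

@dataclass
class OrderRecord:
    """Noun-first (anemic): only data; every rule lives somewhere else.
    Shown for contrast; nothing below uses it."""
    order_id: str
    status: str = "new"
    items: list[str] = field(default_factory=list)

class Order:
    """Verb-first: defined by what it does, not by what it is called."""
    def __init__(self, order_id: str) -> None:
        self._order_id = order_id
        self._items: list[str] = []
        self._status = "new"

    def add_item(self, sku: str) -> None:
        if self._status != "new":
            raise ValueError("cannot change an order after it is placed")
        self._items.append(sku)

    def place(self) -> dict:
        if not self._items:
            raise ValueError("cannot place an empty order")
        self._status = "placed"
        # Behaviors naturally surface domain events (verbs in the past tense).
        return {"event": "OrderPlaced", "order_id": self._order_id,
                "items": list(self._items)}

order = Order("o-1")
order.add_item("sku-42")
print(order.place())
```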

The Pitfall of Focusing on Nouns

Many teams fall into the trap of identifying entities (nouns) first, which often results in anemic, static models disconnected from real-world dynamics. This approach, while intuitive, neglects the rich context of flows, dependencies, and rules. Misunderstanding Object-Oriented Analysis (OOA) in this way often leads to systems that lack expressiveness and scalability.

Behaviors as the Foundation of Value

Remember, customers don’t value software for its own sake. Software is a liability, not an asset. The true asset lies in the actions and outcomes your system enables. Identifying behaviors first ensures your design delivers meaningful value to the customer by focusing on what they actually need: actions, not abstractions.

Scalability Through Behaviors

Focusing on behaviors reveals the concurrent nature of the real world. Systems that prioritize nouns struggle to address concurrency, parallelism, and scalability. By contrast, analyzing actions and flows allows you to design systems that are naturally reactive and distributed. At higher levels, this approach helps define bounded contexts, domain events, and microservices. At lower levels, it aids in designing concurrent and scalable services.

Event Storming: A Behavioral Lens

Techniques like Event Storming are powerful tools for identifying domain events, dependencies, and key behaviors. They bring the behavioral focus to life, helping teams collaboratively uncover what drives their system and how it should respond.



Conclusion

Identifying entities (nouns) has its place, but behaviors (verbs) are more critical. A system’s essence lies in what it does, not what it’s called. Adopting a behavior-first approach ensures you design systems that are adaptive, scalable, and valuable to customers.

In the end, this mindset reflects a simple truth: development is always iterative. By continuously refining our understanding of behaviors, we build systems that evolve gracefully with the needs of their users.

Notes:

Saturday, November 30, 2024

Evolving Solutions for Maximum Impact

In Lean Software Development, the way we approach solutions defines our ability to deliver impactful results efficiently. Instead of breaking down a predefined solution into small increments, Lean encourages growing a solution incrementally in a guided direction. This is achieved through continuous feedback, ensuring the solution evolves dynamically rather than being constrained by initial assumptions.

Why Feedback Matters

Feedback is the cornerstone of this iterative approach. It often pushes us beyond the boundaries of the original solution, steering us toward unexpected yet more effective outcomes. By letting feedback guide our iterations, we move closer to what truly delivers value.

The Lean Approach to Evolution

Instead of starting with a "decided" solution and slicing it into parts:

A better approach is to begin with a minimal idea, iterate based on feedback, and stop when the solution achieves the desired impact:


Focus on impact, not completion: The solution is "done" when it provides the needed results, not when every imagined feature is implemented.

This mindset shifts the focus from building a large, potentially wasteful solution to creating just what is necessary.

Benefits of Lean Evolution

Through this approach, teams can achieve:
  • Faster impact: Reaching results quicker by avoiding overengineering.
  • Minimal code: Writing only what is needed reduces waste.
  • Lower basal cost: Simplified solutions are easier and cheaper to maintain.


Closing Thoughts

Lean Software Development reminds us that less is more. By evolving solutions incrementally and guided by feedback, we minimize waste and maximize impact. This philosophy emphasizes efficiency, ensuring that every line of code contributes to value.

Good talks/podcasts (Nov 2024 II)

These are the best podcasts/talks I've seen/listened to recently:
  • YOW! 2019 Evolutionary Design Animated (Part1) (James Shore) [Agile, Engineering Culture, Evolutionary Design, Software Design, XP] [Duration: 00:24] (⭐⭐⭐⭐⭐) Modern software development welcomes changing requirements, even late in the process, but how can we write our software so that those changes don’t create a mess? Evolutionary design is the key. It’s a technique that emerges from Extreme Programming, the method that brought us test-driven development, merciless refactoring, and continuous integration. James Shore first encountered Extreme Programming and evolutionary design nearly 20 years ago. Initially skeptical, he’s explored its boundaries ever since. In this session, James will share what he’s learned through in-depth animations of real software projects. You’ll see how designs evolve over time and you’ll learn how and when to use evolutionary design for your own projects. Part 2: YOW! 2019 Evolutionary Design Animated (Part2)
  • 5 Reasons Your Automated Tests Fail (Dave Farley) [CI, Continuous Delivery, testing] [Duration: 00:21] This video explores the five reasons why automated tests fail, including environment, test data, version control, resource use, and system behavior. It then explains how to fix these failures by controlling the environment, isolating test data, using version control, addressing resource constraints, and designing deterministic systems
  • Product Agility Podcast: 9 Million Users from a Full Stack Product Legend An Interview with Gojko Adzic (Gojko Adzic) [Inspirational, Product, Product Discovery] [Duration: 00:50] (⭐⭐⭐⭐⭐)  This interview with Gojko Adzic, the creator of Narakeet and author of "Lizard Optimization," explores his journey in building products that reach millions of users. He shares insights on "lizard optimization," a method of leveraging unexpected user behavior for product innovation and growth, as well as the importance of aligning development with the five stages of product growth.
  • Observability & Testing in Production - with Charity Majors (Charity Majors, Luca Rossi) [Continuous Delivery, Engineering Culture, Observability, Testing in production] [Duration: 00:53] This interview with Charity Majors, CTO of Honeycomb, explores the concept of observability, particularly focusing on the differences between observability 1.0 and 2.0 and the advantages of the latter for modern software development. The conversation also touches upon the importance of testing in production, implementing an effective continuous delivery process, and embracing the changing role of software engineers in the age of AI.
  • Developer Productivity Engineering: What's in it for me? (Trisha Gee) [Developer Productivity, Devex, testing] [Duration: 01:08] This video features Trisha Gee, presenting on Developer Productivity Engineering (DPE). She explains the principles and practices of DPE and emphasizes how it can improve developer experience and efficiency.
  • Shaped by demand: the power of fluid teams (Dan North) [Agile, Lean Software Development, Management, Teams] [Duration: 00:32] Daniel Terhorst-North challenges the notion of stable, long-lived teams and presents an alternative approach called demand-led planning. He argues that by structuring teams around the demand for specific types of work, including feature delivery, discovery, Kaizen, failure demand, and business as usual, organizations can achieve greater flexibility and responsiveness to changing needs.
  • Why Scaling Agile Doesn't Work - GOTO 2015 (Jez Humble) [Agile, Continuous Delivery, Lean Software Development] [Duration: 00:51] (⭐⭐⭐⭐⭐) Jez Humble examines the common pitfalls of scaling Agile methodologies and presents alternative strategies for achieving organizational agility. He argues that simply implementing Agile practices without addressing underlying systemic issues, such as lengthy feedback loops and inefficient decision-making processes, will not lead to significant improvements. Instead, he proposes that organizations focus on creating rapid feedback loops, reducing batch sizes, and adopting an experimental approach to product development and process improvement, emphasizing value over cost and estimation.
Reminder: All of these talks are interesting, even just listening to them.

Related:

Sunday, November 24, 2024

Good talks/podcasts (Nov 2024 I)

These are the best podcasts/talks I've seen/listened to recently:
  • YOW! 2019 Evolutionary Design Animated (Part1) (James Shore) [Agile, Engineering Culture, Evolutionary Design, Software Design, XP] [Duration: 00:24] (⭐⭐⭐⭐⭐) Modern software development welcomes changing requirements, even late in the process, but how can we write our software so that those changes don’t create a mess? Evolutionary design is the key. It’s a technique that emerges from Extreme Programming, the method that brought us test-driven development, merciless refactoring, and continuous integration. James Shore first encountered Extreme Programming and evolutionary design nearly 20 years ago. Initially skeptical, he’s explored its boundaries ever since. In this session, James will share what he’s learned through in-depth animations of real software projects. You’ll see how designs evolve over time and you’ll learn how and when to use evolutionary design for your own projects.
  • Working Effectively with Legacy Code • Michael Feathers & Christian Clausen • GOTO 2023 (Michael Feathers, Christian Clausen) [AI, Legacy code, Refactoring, testing] [Duration: 00:45] This interview with Michael Feathers, author of "Working Effectively with Legacy Code," explores practical strategies for managing and improving large, untested codebases, including techniques for testing, refactoring, and understanding software change mechanics. Feathers and interviewer Christian Clausen also discuss the impact of AI on code quality, the challenges of advocating for testing in organizations, and the importance of prioritizing efforts based on code value and criticality.
  • Living Domain Model: Continuous Refactoring to Accelerate Delivery (Younes Zeriahi) [Legacy code, Refactoring, Technical Practices] [Duration: 00:47] (⭐⭐⭐⭐⭐) A useful talk for anyone working with complex legacy systems. Younes Zeriahi shares practical examples and techniques for refactoring code in a way that accelerates delivery and improves the overall design, using concepts like Mikado, expand and contract, and Chesterton's Fence. He also highlights the importance of a strong test suite and a deep understanding of the domain for effective refactoring.
  • Product management theater (Marty Cagan, Lenny Rachitsky) [Product, Product Discovery, Product Leadership] [Duration: 01:25] This podcast episode features a conversation with Marty Cagan about the state of product management and the differences between effective and ineffective practices. Cagan discusses the common problem of "product management theater," where individuals hold product management titles but lack the necessary skills and operate within feature teams rather than empowered product teams. The discussion emphasizes the importance of focusing on outcomes, understanding customer needs, and embracing experimentation to build successful products.
  • The Logic of Flow: Some Indispensable Concepts (Donald Reinertsen) [Lean Product Management, Product] [Duration: 00:33] (⭐⭐⭐⭐⭐) This talk explores key concepts and mathematical principles behind achieving flow in processes, like product development, drawing parallels to flow dynamics in traffic and internet systems. Don Reinertsen explains the economics of queuing, batch size reduction, and fast feedback loops, highlighting their impact on cycle time and overall process efficiency.
  • If Russ Ackoff had given a TED Talk... (Beyond continuous improvement) (Russ Ackoff) [Quality, Resilience, Systems Thinking] [Duration: 00:12] This talk explores how to avoid common pitfalls in quality improvement programs by applying systems thinking principles, arguing that focusing on improving individual parts in isolation can be detrimental to the overall system's performance.
  • Small Batches podcast: The Mental Model (Adam Hawkins) [Lean, Lean Software Development] [Duration: 00:07] This episode explores when to apply the lean mental model in software development, emphasizing its effectiveness for navigating situations with high uncertainty and the need for rapid learning.
Reminder: All of these talks are interesting, even if you just listen to them.

Monday, November 18, 2024

Eliminating Waste in Software Development

Translated from the original article in Spanish https://www.eferro.net/2024/04/eliminar-desperdicios-en-el-desarrollo.html

In our first post, we explored the origins and foundational principles of Lean Software Development. In the second, we introduced some basic concepts that I'll use throughout this series. Now, we'll focus on the first of these principles: Eliminating waste. To optimize our development processes and increase the value we deliver to our customers, it's essential to understand and reduce the activities that don't add value.

In this article, I'll describe examples and practices I've applied in various agile teams over the years. It's important to note that these practices and examples are specific to our context (product development with empowered teams) and that they often reinforce each other, so implementing them in isolation is not advisable. For example, starting continuous deployment without an adequate automated testing system could be more harmful than beneficial.

Adapting Lean Manufacturing Principles to Software Development

In the original Lean Manufacturing, seven main types of waste were identified: Inventory, Extra Processing, Overproduction, Transportation, Waiting, Motion, and Defects. Mary and Tom Poppendieck, based on their extensive knowledge of Lean and software development, adapted these concepts to make them more relevant in this new context. For instance, they redefined "Inventory" as "Partially Done Work", "Extra Processing" as "Extra Process", "Overproduction" as "Extra Features", and "Transportation" as "Task Switching". They considered that the remaining types of waste retained their direct applicability to software development.

Identifying and Eliminating Waste

To eliminate waste, the first step is to train the team to identify what constitutes waste. In this regard, it’s important to:  

  • Analyze from the Customer’s Perspective: We must always ask ourselves whether an activity adds value to the customer/user and if we can eliminate it without affecting their perception.  
  • Foster a Culture of Constructive Criticism: Being critical of our actions and methods allows the team to periodically analyze its way of working to identify and eliminate waste.  
  • Consider Long-Term Impact: It’s vital to distinguish between what might seem like waste in the short term but isn’t necessarily so in the medium or long term, always keeping customer/user satisfaction in mind.  

It’s essential to classify the identified waste into two categories: those necessary for specific reasons, such as regulations or laws, and those we can completely or partially eliminate without compromising customer/user satisfaction. For the first category, we should focus on understanding the reason behind these limitations to minimize waste as much as possible. For the second, it’s crucial to take a more decisive approach, systematically working to eliminate them.  

Partially Done Work

In Lean Manufacturing, the inventory of partially completed parts is physically visible and requires organization and, at times, maintenance. In contrast, in our context, "inventory" (code, knowledge, information, analysis, etc.) is not as visible but is just as costly.

The real value for the customer or user only arises when they access the new functionality or change. Often, even at that moment, the value remains uncertain until we receive feedback. Therefore, since true value is only realized at the final stage, it is crucial to shorten the time from idea conception to delivery. In other words, we must strive to reduce work in progress and decrease lead time. As Dan North puts it, the goal is to minimize the gap between the initial idea and the user's "thank you."
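
To make the relationship between WIP and lead time concrete, here is a back-of-the-envelope illustration of my own (not from the original article) using Little's Law: in a stable system, average lead time is roughly work in progress divided by throughput. The numbers in the sketch are invented.

```python
# Back-of-the-envelope illustration of Little's Law for a stable workflow:
# average lead time ~= work in progress / throughput. Numbers are invented.

def average_lead_time_weeks(wip_items: int, throughput_per_week: float) -> float:
    """Average time an item spends in the system, in weeks."""
    return wip_items / throughput_per_week

# 10 items in progress, 5 finished per week -> each item takes ~2 weeks.
print(average_lead_time_weeks(10, 5))  # 2.0
# Halving WIP (with the same throughput) roughly halves the lead time.
print(average_lead_time_weeks(5, 5))   # 1.0
```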

These are the practices we use to eliminate partially done work:

  • Analyze/Prepare backlog work on demand: We keep no more than about a month of work in the backlog, and if it grows beyond that, we drop initiatives. If something is important, it will resurface.
  • Radical vertical slicing: Both at the product and technical levels, enabling us to deploy increments within a few hours or a day. This, of course, requires Continuous Delivery (CD).
  • Trunk-Based Development: We avoid partially done work sitting in feature branches, along with the merge problems that come with it.
  • End-to-end management by our team: We handle deployments, validate quality, monitor the product, etc., avoiding wait times for other teams or specialists.
  • Immediate deployment of improvements: Both user improvements, which provide business feedback, and technical ones, which provide system feedback.

As shown in the team's Cumulative Flow Diagram, we maintain a minimal backlog and manage related tasks only when necessary and in the smallest possible quantity.


Extra Process: Simplification Toward Value

Aligning with the Agile Manifesto, Lean Software Development promotes measuring progress primarily by the software value delivered to the customer. In this framework, any element—such as excessive documentation, redundant processes, unnecessary meetings, or approvals—that does not directly contribute to value for the customer/user should be assessed for elimination.

Since adopting Agile principles, I have collaborated with teams to streamline our processes, discarding anything that does not generate value.

From that experience, I highlight two significant changes:

  • Transition from Scrum to Kanban: Scrum initially helped us, but we evolved toward Kanban to focus on continuous flow. This replaced long meetings and heavy planning with shorter, more focused sessions, in line with Lean's just-in-time model.
  • Elimination of Estimates: We prioritize small, continuous changes, allowing us to forgo traditional estimates. We still make high-level estimates for large initiatives but with a focus on minimizing risk and unnecessary time investment.

We have found that by working in small steps and preparing only what is immediately necessary, we minimize rework (failure demand) because we do not perform "speculative" work. This significantly simplifies backlog prioritization and management, allowing us to focus on essentials and save considerable effort.

In summary: Adopting a just-in-time approach to do only what is necessary has led us to a more efficient process, with less rework and more agile backlog and priority management.


While it is impossible to eliminate all documentation or bureaucracy associated with safety regulations and certifications, it is possible to address these requirements creatively to avoid additional work. In our case, we have adopted Trunk-Based Development: every commit or push includes co-authors and triggers a series of exhaustive tests. This approach not only satisfies auditors but is, in fact, more effective than traditional asynchronous reviews (feature branching + PRs) with their explicit approval gates before production.
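
As an illustration of how that audit evidence can be made automatic, here is a minimal sketch of my own (not prescribed by the article) of a git commit-msg hook, written in Python, that rejects commits lacking a Co-authored-by trailer; the hook body and messages are assumptions.

```python
#!/usr/bin/env python3
# Hypothetical commit-msg hook (would live at .git/hooks/commit-msg):
# reject commits whose message has no "Co-authored-by:" trailer, so every
# trunk commit carries evidence of the synchronous review done while pairing.
import re
import sys

def main() -> int:
    # git passes the path of the file containing the commit message
    with open(sys.argv[1], encoding="utf-8") as message_file:
        message = message_file.read()
    if re.search(r"^Co-authored-by: .+ <.+>$", message, re.MULTILINE):
        return 0
    print("Commit rejected: add a 'Co-authored-by: Name <email>' trailer.")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```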


Extra Features and Extra Code

This, together with partially done work, is the most significant waste we see in software development and product creation. Far too often, software developed over months ends up unused or avoided by users because it fails to meet their expectations. This is the WASTE in software development. As Mary Poppendieck said, "The biggest cause of failure in software-intensive systems is not technical failure; it's building the wrong thing."

In addition to applying Lean Software Development, we must use other techniques to truly understand our users' needs and uncover which problems are worth solving. Tools like Lean Product Management, Continuous Discovery, and Impact Mapping are essential in this process, though we won’t detail them in this series of articles.

Assuming we have identified a problem worth solving and that we have customers/users with a clear need, our goal is to solve this problem/need with the least amount of software possible and as quickly as we can. In our case, we use the following practices:
  • We adopt the Agile principle of “Simplicity—the art of maximizing the amount of work not done—is essential.”
  • We see software as a means, not an end, aiming to solve needs with as little software and technology as possible.
  • We focus on customer value, ensuring that every initiative and functionality aligns with the real needs of users.
  • We delay technical and product decisions as much as possible, increasing the chances of never having to implement them or at least not implementing them fully. We always aim for the minimum version that is sufficient.
  • We employ Outside-In TDD, which ensures we only write the minimum code necessary to implement the use case (a minimal sketch follows this list).
  • We follow the YAGNI principle (You Aren’t Gonna Need It), focusing on the functionality required now, avoiding speculative design or development.
  • When something we’ve developed stops being used or doesn’t fulfill its objective, we either remove it entirely or adapt it until it has a positive impact again.
  • We work in very small steps (<1.5-2 days), presenting new increments to users to receive quick feedback that allows us to adapt and decide on the next steps. This often lets us stop investing in an application’s functionality when it’s “good enough” for the user, thus avoiding unnecessary development.
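
To give a flavor of the Outside-In TDD point above, here is a minimal sketch with invented names (SignupService, InMemoryUsers): the outermost test is written first, from the user's point of view, and only the code needed to make it pass gets written.

```python
# Minimal Outside-In TDD sketch; all names are invented for illustration.
# The test at the bottom is written first, from the outside in, and only
# the code needed to make it pass is implemented.

class InMemoryUsers:
    def __init__(self):
        self._emails = set()

    def add(self, email: str) -> None:
        self._emails.add(email)

    def exists(self, email: str) -> bool:
        return email in self._emails


class SignupService:
    def __init__(self, users: InMemoryUsers):
        self._users = users

    def sign_up(self, email: str) -> bool:
        if self._users.exists(email):
            return False  # duplicate signups are rejected
        self._users.add(email)
        return True


def test_user_can_sign_up_only_once():
    service = SignupService(InMemoryUsers())
    assert service.sign_up("ada@example.com") is True
    assert service.sign_up("ada@example.com") is False
```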

Task Switching

Frequent task switching can significantly disrupt a team’s productivity. Each change forces the mental process to restart, delaying re-entry into the “flow” state of work. To minimize these switches, we apply several strategies:

  • Minimizing WIP: The most effective strategy to prevent frequent task switching is to reduce the team’s Work in Progress (WIP). We strive to focus on one or at most two initiatives simultaneously. Ensemble/mob programming is our preferred technique to limit WIP, as when the entire team focuses on a single task, internal interruptions are naturally eliminated.
  • Continuous and Synchronous Code Reviews: By working in ensemble/mob programming, we eliminate all task switches generated by asynchronous code reviews. See Code reviews (Synchronous and Asynchronous).
  • Vertical Slicing and Technical Slicing: By rigorously applying these techniques, we can work on truly small increments (see the expand-and-contract sketch after this list). This helps us maintain workflow continuity until an increment is completed and deployed. After each deployment there is, of course, a safe point to switch tasks, without the negative impact of doing so mid-increment.
  • Task Completion and Spikes: We ensure tasks can be completed from start to finish. If we see this isn’t possible, we conduct a spike (http://www.extremeprogramming.org/rules/spike.html) to eliminate uncertainty or look for other approaches that don’t require interruptions.
  • Pomodoro Technique: We use Pomodoros for periods of focused work and synchronized team breaks.
  • Quality at Every Level: High quality prevents interruptions caused by failures. We apply TDD, ensemble/mob programming, and other Extreme Programming practices to maintain it.
  • Operations/Support Rotations: For teams with support functions, we implement rotations, concentrating part of the team on emergent work and the rest on planned initiatives.
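
As an example of the technical slicing mentioned above, here is a sketch of the expand-and-contract (parallel change) pattern; the record layout and names are invented, and each step is small enough to be deployed on its own.

```python
# Sketch of "expand and contract" (parallel change) as a form of technical
# slicing; the record layout and names are invented for illustration.

# Step 1 (expand): write the new field alongside the old one, so both old
# and new readers keep working. Deployable on its own.
def save_user(store: dict, user_id: str, full_name: str) -> None:
    store[user_id] = {
        "name": full_name,       # legacy field, still read by old code paths
        "full_name": full_name,  # new field introduced by this increment
    }

# Step 2 (migrate readers): callers switch to the new field one at a time,
# each migration being its own small, deployable change.
def display_name(store: dict, user_id: str) -> str:
    record = store[user_id]
    return record.get("full_name", record["name"])

# Step 3 (contract): once nothing reads "name", a final small change stops
# writing it and cleans up existing records.
```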

Waiting

When we analyze the product development process in depth, the most common finding is that every increment/idea/backlog item spends almost all of its time waiting. Waiting for answers to questions, waiting to analyze the problem more thoroughly, waiting for feedback on the design, waiting for architectural change approvals, waiting for certain specialists to be available, waiting for someone to approve the change, waiting for code reviews, waiting for the feature toggle to be activated, waiting to communicate the change... Waiting, waiting, waiting. Clearly, if we view the process from the customer/user's perspective, any type of waiting is simply waste.

To eliminate much of this waiting, here are some tactics that have worked for us in the past:
  • Assign the team end-to-end responsibility for going from problem definition to production and operation of the solution. If possible, even give them the freedom to find the problem worth solving. This means taking charge of product management, development, quality, deployment, and operations.  
  • Even when the team is empowered, it sometimes lacks all the necessary skills. In these cases, we need to secure collaboration from a specialist, but always try to have the specialist help us improve our skills in that area instead of solving the problem for us. This won't cover all cases, but it will ensure that in simpler cases, we don't need to call on the specialist again.  
  • On the other hand, the more multidisciplinary the team members are, the easier it will be to meet needs within the team itself. This doesn't mean everyone knows everything, but rather that we promote T-shaped skills (https://en.wikipedia.org/wiki/T-shaped_skills).  
  • On a technical level, the way to systematically eliminate most waits is to move toward Continuous Delivery (CD), which typically means placing a lot of emphasis on agile technical practices (TDD, CI, decoupling deployment from activation, etc.) and having very high confidence in our automated tests (see the feature-toggle sketch after this list).
  • One of the most efficient ways (flow efficiency) is to work in mob/ensemble programming so that all the available knowledge and skills are fully dedicated to the single ongoing initiative (on which the mob/ensemble is working).  
  • There’s no point in releasing to production early if we then passively wait for customer/user feedback. It's much more efficient to seek that feedback proactively and to have instrumentation at the product and system levels to learn as quickly as possible.  
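
For the "decoupling deployment from activation" part of the Continuous Delivery point above, a minimal feature-toggle sketch might look like the following; the function and flag names are invented, and real setups often use a toggle service rather than an environment variable.

```python
# Sketch of decoupling deployment from activation with a feature toggle.
# Names and the toggle source (an environment variable) are invented.
import os

def new_checkout_enabled() -> bool:
    # The new code path ships to production "dark"; flipping the flag
    # activates it without another deployment.
    return os.environ.get("FEATURE_NEW_CHECKOUT", "off") == "on"

def legacy_checkout_flow(cart: list) -> dict:
    return {"flow": "legacy", "items": len(cart)}

def new_checkout_flow(cart: list) -> dict:
    return {"flow": "new", "items": len(cart)}

def checkout(cart: list) -> dict:
    if new_checkout_enabled():
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)
```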

Motion

Another of the seven basic wastes considered by Lean Manufacturing is motion. In that context, it is evident that the movements an operator must perform in a factory, whether between machines, to pick up materials, or to make inquiries, are a clear waste. In Lean Software Development, the motion category was also retained, even though in our domain this type of waste is not as direct and obvious.

In the original book, motion refers to the effort required to access the customer, obtain domain information, carry out hand-offs between specialties (ops, QA, security), and so on.

Many of the tactics used to eliminate waiting are also valid for reducing the waste of motion, especially regarding hand-offs between specialties.

Additionally, to eliminate other forms of motion, the following strategies have proven useful:

  • Provide the team with direct access to the customer/end user or their closest representative. A practical solution could be to take on operations responsibility, so the team is directly exposed to the complaints and needs of customers/users.  
  • Develop specific tools that allow direct and efficient access to necessary information, avoiding repeated processes (e.g., data extraction tools, observability tools, etc.).  
  • Create information radiators so that it’s easy to visualize progress or relevant information without the need to actively search for it (visual management boards, automated notifications, etc.).  

Defects  

Lastly, Lean Software Development considers defects as a significant source of waste. From my experience, I would say that defects are the second most important source of waste after performing unnecessary activities (extra features/code). Although, if you think about it, not doing what the client needs could also be considered a specific type of defect :).  

Every defect we introduce not only wastes the time spent writing that incorrect code but also the time spent fixing it, the hit to our credibility with the client/user, and all the effort from the moment the problem is created until it is resolved. Therefore, it is not only important to avoid generating defects but also to find them as soon as possible, since the associated waste/cost increases exponentially the longer it takes to detect the problem.


With this in mind, the tactics and practices we usually use to minimize this waste are:  

  • Minimizing as much as possible the code we need to develop to achieve the desired impact. As you know, less code implies fewer opportunities to make mistakes.  
  • Using Outside-In TDD, starting with acceptance tests for the use case. This, by definition, generates the minimal amount of code possible, which is also well-tested from the outset.  
  • This process does not cover all scenarios and issues, so it is also necessary to create certain end-to-end tests and have strategies for specific topics such as security analysis, load testing, performance testing, etc.  
  • Another important point is testing the third-party components we use, to avoid problems when updating versions or using them in non-standard ways. See Thin Infrastructure Wrappers (a rough sketch follows this list).
  • With all the above points, we have a good starting point, but it is increasingly common to rely on third-party infrastructure and services (SaaS, clouds, etc.). In these cases, it is more essential than ever to use production testing tactics. After all, our clients/users don’t care what the source of the problem was; they only care about the impact it had.  
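
As a rough illustration of the thin-wrapper idea mentioned above (with invented names, not the exact approach from the linked post): the rest of the codebase depends only on a small interface of our own, the real adapter around the vendor SDK gets its own narrow integration test, and fast tests use an in-memory fake.

```python
# Rough sketch of a thin infrastructure wrapper; all names are invented.
# The rest of the codebase depends only on this small interface, so the
# vendor SDK sits behind one adapter that gets its own narrow integration
# test, while fast tests use the in-memory fake below.

class ObjectStore:
    """The only storage interface the rest of the code is allowed to use."""

    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError

    def get(self, key: str) -> bytes:
        raise NotImplementedError


class InMemoryObjectStore(ObjectStore):
    """Fake used in fast tests; a real adapter would delegate to the SDK
    and be exercised by a small integration test against the real service."""

    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]
```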

Conclusions

As can be seen in the tactics and practices we employ to minimize waste, many of them relate to having solid development practices (pair programming, TDD, BDD, continuous reviews, CD, CI, etc.) that allow us to develop sustainably. Others focus on avoiding unnecessary tasks as much as possible, concentrating on what the client truly values (which doesn’t always align with what they ask for), limiting the software to current needs, and working in very small steps so we can change direction or stop investing in something as soon as necessary.  

Working in such small steps and adapting continuously allows us to streamline the required process: we need little backlog management if the backlog itself is tiny; there's no need to coordinate different workflows if we're all working on the same thing at the same time; there's no need to structure communication or handoffs with other teams if we handle those responsibilities ourselves. In the end, it's about simplifying everything as much as possible to do only what is absolutely necessary, always focusing on what truly adds value. This obviously involves constantly questioning what we do and how we do it. Just because something was useful a couple of months ago doesn't mean it still is.

It’s not as simple as it seems, as it requires deep engagement in our work (passion) and, at the same time, the ability to let go of what doesn’t add value (detachment). It’s about living focused on a sliding window of what adds value now, of what is useful to us in the present.  

Remember that eliminating waste is just the first step on the path to Lean Software Development. In our next post, we’ll explore how to “Amplify Learning” to ensure the excellence of our products. See you soon!