Sunday, February 09, 2025

Good talks/podcasts (Feb I)

 

These are the best podcasts/talks I've seen/listened to recently:
  • Patterns of Effective Delivery (Dan North) [Agile, Engineering Culture, Inspirational] [Duration: 00:59] (⭐⭐⭐⭐⭐) This talk explores patterns of effective software delivery, emphasizing that delivery means solving problems, not just writing code, and focusing on optimizing for the right outcomes rather than just process.
  • Dantotsu Radical Software Quality Improvement (Fabrice Bernhard) [Inspirational, Lean Software Development, Quality, testing] [Duration: 00:37] This presentation covers how to apply the Dantotsu method—rooted in Toyota’s manufacturing principles—to optimize software development and delivery. By focusing on visual management, team leadership, and systemic solutions, the approach minimizes defects, boosts efficiency, and fosters continuous improvement. Very good ideas on how to improve post-mortems or how to classify problems to improve quality more quickly.
  • Beyond Engineering: The Future of Platforms (Manuel Pais) [Flow, Platform as a product, leadership] [Duration: 00:21] This talk explores applying the "platform as a product" approach beyond engineering, emphasizing how to improve flow and reduce friction across an organization by providing internal services in a self-service manner, and by focusing on the needs of the organization's internal users.
  • The Most Dangerous Phrase: SOLID, Scrum And Other Antiques (Dan North) [Agile, Engineering Culture, Management, Technical Practices] [Duration: 00:38] (⭐⭐⭐⭐⭐) This presentation challenges the idea of blindly following software development practices like SOLID and Scrum, urging a re-evaluation of their continued relevance in light of changing contexts and technology, advocating for a more fluid, context-driven approach that prioritizes outcomes, learning, and continuous improvement.
  • How Shopify builds a high-intensity culture (Farhan Thawar, Lenny Rachitsky) [Engineering Culture, Product Engineer, leadership] [Duration: 01:40] (⭐⭐⭐⭐⭐) This talk explores how Shopify builds a high-intensity culture through principles like choosing the hard path, prioritizing intensity over hours, and valuing pair programming, while also emphasizing continuous learning, code deletion, and a unique approach to hiring.
Reminder: all of these talks are worth your time, even if you only listen to the audio.

The talks and podcasts that I have rated as five stars are also available on the following website:

Wednesday, February 05, 2025

Bilbostack 2025: My Chronicle of the Conference

On January 25th I attended Bilbostack 2025 in Bilbao, one of my favorite tech conferences. Despite the overcast sky and some rain, typical in Bilbao at this time of year, the atmosphere was unbeatable, and it is always a pleasure to return home and visit my family.


Talks

Below are some notes on the talks I attended.


¿Y si “hacer lo correcto” no fuera lo correcto? (Jordi Martí)

This was the talk that made the deepest impression on me at the whole Bilbostack. Jordi delivered a clear and necessary message about power dynamics and the difficulty of achieving true inclusion in technically excellent teams that follow agile practices. He explained how this way of working, which we consider ideal, can actually make it harder to bring in people with different experiences and points of view, especially if they are not used to working this way and we do not give them the space to contribute, learn, and make mistakes.
Often, those of us who have been in the industry for a long time defend these methodologies vehemently, probably out of fear of reliving the anguish we felt at the beginning of our careers. Jordi proposes complementing "do the right thing, the right way" with "do it in a way that is sustainable for people."
He invited us to reflect on how we ourselves learned through trial and error, yet now we often deny that same space to newcomers. Personally, I take his message very seriously. I understand that the foundation of everything must be, above all, respect for people, as Lean teaches, and that we must create an environment where people can grow, feel that they contribute, and even make mistakes as a natural part of learning.
The ideal balance is an open and inclusive team, with a safe environment in which people can experiment, learn, and contribute without being constrained by pre-established dogmas.

That covers the content of the talk, but beyond that, the way it was delivered was unbeatable. I won't spoil it here, because I hope he repeats it at some other event. All I can say is that, when I grow up, I want to be able to convey ideas the way Jordi does.

Fatal, gracias (Irene Morgado)

I continued with this interesting talk by Irene, focused on the toxicity and problems we often encounter in companies but are not always able to identify. Irene was very didactic in explaining how certain processes work in our brain, what kinds of behaviors or dynamics we tend to repeat, and how we can work on becoming more aware of what we experience day to day at work.
All of this with the goal of improving our relationships and our mindset and, above all, learning to identify problems or toxic patterns that we have often normalized and accept without question. A very valuable talk for reflection and self-awareness.

Culture Driven Development, el motor de un equipo rápido, efectivo y sostenible (Sebi Collell y Censu Karayel)

A fascinating talk in which Sebi and Censu described their Agile/Lean culture and the impact they have achieved at Tech93/Cooltra. They showed us the values and principles their culture is based on and gave us plenty of hints about the practices they use. They focus heavily on the impact (outcome) of every initiative and, something less common but which I consider fundamental, they have developed a very healthy culture of experimentation and, above all, of killing any initiative that is superfluous or not aligned with the current focus. They explained their entire product process in detail, and I can only recommend that everyone try to find out and learn how they are doing things.
Their approach is very Lean, both in product and in development, minimizing waste and keeping a constant focus on business impact. I found their framing practice especially valuable: they scope and define the problem, establishing the expected business impact. They also highlighted the different points in the process where they can stop initiatives to avoid wasting effort on something unnecessary.
If there is one thing I can say about this talk, it is that it left me wanting more, but time is limited. I can only hope they keep sharing their learnings and their way of working with the community as generously as they have so far.
As a complement to this talk, here are a few others where you can see, from different perspectives, how they work:

Cuando los robots aprenden a hablar: Historias de fábricas inteligentes y data streams (Fernando Díaz)

Finally, I got to enjoy Fernando Díaz's talk about what he has learned as a software developer after diving, over the last year and a half, into a new product whose core is streaming, real-time data processing. In this case, the talk was no surprise to me, since I had previously talked with Fernando about the topic; in fact, he had already shared the idea for the talk and the slides with me. Besides, it is a subject close to home: at Clarity AI, in addition to the teams I was already helping, I am now also in charge of the data platform, so I fully empathize with the complexities and challenges he has run into coming from a software engineering background.
I really enjoyed the talk for two reasons. First, because, having worked on industrial systems with real-time information, I think this is an environment unfamiliar to many people in the profession, yet at the same time very interesting and fun. And second, because Fernando masterfully described the specific complexities of the real-time data world. When you come from a background as a software developer in information systems, in many cases with web interfaces, the data world presents new complexities you have to adapt to.



I think Fernando captured very well how something that seems so simple, such as a small calculation (for example, subtracting two quantities), becomes considerably more complex in the world of real-time data. These problems pick up multiple dimensions of difficulty we are not used to, such as out-of-order data, difficulty correlating it, scalability problems, latencies, and ephemeral state, among others.

After the Talks (Networking)

If BilboStack's talks are powerful, I think one of its great strengths is the networking afterwards. After all, it is a conference with sessions only in the morning; the rest of the day, and in many cases the night as well, is devoted to networking with the other attendees. As in recent years, the organizers used the esplanade behind the Palacio Euskalduna, next to the entrance of the Maritime Museum, to set up an area where you could enjoy some food and drink while sharing experiences with the other attendees.
Unlike last year, this time we had a typical Bilbao day: quite cloudy, with the occasional shower, but that did not stop us from enjoying everything the organizers had prepared.



There were txosnas, live music, aizkolaris and, above all, lots of laughter and lots of conversation. As in other years, I thoroughly enjoyed all the reunions and conversations. I was left wanting to talk longer with some people, but time is limited.
I could list everyone I talked to and the topics we covered, but I especially remember an intense conversation with part of the Tech93 and Cooltra team (Xavi Ghost, Alex Fernández, and Javier Salinas, among others) about how they work, their lean product development approach, and the process they follow. We went fairly deep into how to use cost of delay to estimate the impact of initiatives, looking at strategies for handling the different urgency profiles of the cost of delay.
I came away with a ton of new ideas and topics to dig into.

As I said, as always, BilboStack's networking is unbeatable. You learn, you share, you laugh... I don't know what more you could ask for.

Conclusions

One more year, Bilbostack met (and even exceeded) my expectations. The mix of quality talks, a friendly atmosphere, and the chance to enjoy a weekend in Bilbao visiting my family makes this conference an unmissable event.
I recommend that anyone who hasn't attended yet sign up for the next edition: you won't regret it.

I don't know if I've pulled it off, but if this post doesn't leave you wanting to sign up for next year's BilboStack, I don't know what else I can do. :)

Monday, January 13, 2025

Developing Software: Postponing Decisions and Working in Small Steps

Translated from the original article in Spanish Desarrollando software: posponiendo decisiones y trabajando en pasos pequeños

In this article of the series on Lean Software Development, after exploring practices for postponing decisions in the product, today we will discuss how to develop software by taking very small steps, delaying decisions, and doing only what is necessary at each moment.

This approach aligns with the principles of Lean Software Development and eXtreme Programming (XP), being a key part of agile development.

Why Work in Small Steps

Working in small steps is essential in uncertain environments. Neither we nor the client always know exactly what is needed to achieve the desired impact. By progressing in small increments, we obtain valuable feedback both from the system—on its functionality and behavior—and from the client. This approach allows us to learn and constantly adjust, avoiding premature decisions that could limit our options or be difficult to reverse.

It is a continuous learning process where we avoid speculative design and unnecessary features. By moving forward step by step, we accept that we do not know everything from the outset and choose to experiment and validate constantly.

Benefits of Working in Small Steps

Working in small steps with continuous feedback offers numerous benefits. Geepaw Hill, in his article "MMMSS: The Intrinsic Benefit of Steps," brilliantly describes the effects of this practice on teams. Below is a summary, though I recommend reading the full article or the series "Many More Much Smaller Steps."

Geepaw mentions eight benefits of working in steps of less than a couple of hours:

Benefits in Responsiveness:  

  • Interruptibility: You can handle interruptions or change focus without breaking the workflow.  
  • Steerability: After each small step, you can reflect, incorporate feedback, and adjust the direction if necessary.  
  • Reversibility: If a step does not meet expectations, reverting it results in minimal time loss.  
  • Target Parallelism: By advancing in consistent small steps, it is possible to work on different areas of the system or for different stakeholders without leaving tasks half-done.  

Human Benefits:

  • Cognitive Load: Forces you to reduce cognitive load by limiting the combinations and cases you must consider.  
  • Pace: Establishes a steady team rhythm with cycles of quick rewards (successful tests, commits, deployments, etc.).  
  • Safety: Small changes carry less risk than large ones. With frequent tests and daily deployments, the maximum risk is reverting the last change.  
  • Autonomy: Allows the team to make continuous decisions, requiring constant effort to understand and empathize with the user to address problems or implement improvements.  

Working in Small Steps and Postponing Decisions

Since around 2009-2010, I have tried to apply the practice of working in very small steps in all the teams I collaborate with. These steps usually take a few hours, allowing production deployments several times a day and achieving visible changes for the client in one or two days at most. This agile approach minimizes risk and maximizes responsiveness, but it requires discipline and the rigorous application of agile development practices proposed by eXtreme Programming (XP).

Practices and Tactics for Working in Small Steps

Below, I present some practices and strategies that enable us to work this way. Sometimes it’s hard to separate them, as they are closely interrelated and complement each other.

Iterative and Incremental Development

The most important technique we use is also the simplest and, at the same time, the least common. Instead of starting with a complete solution and dividing it into steps for implementation, we progressively grow the solution until it is “good enough,” allowing us to move on and invest in solving another problem. That is, we focus on delivering increments (to the end client) that align with the idea of the solution we are aiming for, all while keeping the solution and the problem in mind. We use feedback to ensure that we are heading in the right direction. Additionally, not being afraid to iterate based on this feedback allows us to work in small, low-risk steps.  


For example, starting from an initial problem with a potential solution, we generate increments (Inc 1, Inc 2, etc.) in less than a day. Each increment is delivered to the user for feedback, which helps us decide the next step and whether the solution is already good enough. This way, we avoid waste (gray area) by not doing unnecessary tasks, thus reducing the system's Basal Cost.

https://x.com/tottinge/status/1836737913842737382

Vertical Slicing

Vertical slicing involves dividing functionalities and solutions in a way that allows for an incremental approach to development, where each small increment provides value in itself. This value can manifest as user improvements, team learning, reduced uncertainty, among others. Instead of splitting stories by technical layers (infrastructure, backend, frontend), we divide them into increments that deliver value and typically require work across all layers.

In my teams, we apply this vertical slicing rigorously, ensuring that no increment takes more than two days, and preferably less than one day. We use various heuristics and processes for vertical slicing (https://www.humanizingwork.com/the-humanizing-work-guide-to-splitting-user-stories/), such as the “hamburger method” by Gojko Adzic, which I will describe later.  

Even though we use this vertical slicing to break down what we want to implement into increments, this doesn’t mean we always implement all the identified increments. On the contrary, the goal is always to grow the solution as little as possible to achieve the desired impact.

Technical Segmentation

As a complement to vertical slicing, in my teams, we also divide these increments that deliver value to the user into smaller tasks, which we also deploy to production. These tasks are more technically focused and usually take less than two or three hours.

Deploying these technical increments allows us to obtain feedback primarily from the system: Does our CI pipeline continue to work well? Does the deployed code cause any obvious problems? Does it affect performance in any way?

This practice forces us to maintain a low deployment cost (in terms of time and effort) and allows us to ensure that the workflow continues to operate correctly at all times. This is possible because we have a solid automated testing system, fast CI pipelines, and we work with Continuous Integration/Trunk-Based Development, as we will explain later.  

Being able to apply this technical segmentation is also essential for making parallel changes, implementing significant modifications in small and safe steps, and thereby significantly reducing risk.

Generating Options

Generating options is essential for making well-founded decisions. Every decision should consider multiple alternatives; we usually try to have at least three or four. To facilitate the generation of options, we can ask ourselves questions such as:  

  • What other options would you consider if you had half the time?  
  • Which options require new dependencies?  
  • What solutions have you implemented in similar problems in the past?  
  • What is the minimum degree of sophistication required for the solution?  
  • Who could benefit from the change? Could we deliver it to each user group independently?  

These questions help us generate options that the team can then evaluate, always trying to select those that quickly provide value (learning, capacity, uncertainty reduction, etc.) while committing as little as possible.  

This way of working allows us to move forward in small steps, always maintaining visibility over the different options we can take to continue addressing the problem or redirect it if the steps taken aren’t achieving the desired impact. As you can see, everything converges into working with small advances, learning, making decisions as late as possible, and striving for the simplest solutions.

One tool we use often for generating options and performing vertical slicing is the “hamburger method” by Gojko Adzic.  

With this method, we aim to divide a functionality or solution into the steps necessary to provide value to the user. These steps are visualized as “layers” of the hamburger, and for each layer, we force ourselves to generate at least three or four options. Then we select at least one option from each layer to decide which will be the first increment to implement. Once that first increment is implemented and delivered, and with user feedback in hand, we repeat the process to implement one of the other options.  

This continuous process doesn’t end when we implement all the identified options, but when the functionality is good enough, or there is another functionality or higher-priority problem to invest in. In other words, we invest in what’s most important until the user is satisfied or until a new priority arises. 


Simplicity 

Simplicity is one of the core values of XP (eXtreme Programming) and, by extension, of well-understood agility. A mantra of agile development is, “Do the simplest thing that could possibly work.” This means starting with the simplest, minimal solution that works, iterating, and improving based on feedback.

The simplest solution is not always the easiest to implement. Sometimes, avoiding unnecessary complexity requires significant effort. True simplicity is the result of conscious design that evolves gradually.

Two-Step Development 

Kent Beck advises us to “Do the simplest thing that could possibly work,” but this is often confused with “the first thing that comes to mind” or “the only thing I know how to do.” An effective way to ensure we are choosing the simplest option possible is to divide any change or increment into two parts:  
  1. Preparation: Adjust the current codebase so the new functionality can be introduced easily.  
  2. Implementation: Introduce the actual change.  
https://x.com/eferro/status/1810067147726508033

This separation avoids speculative design and ensures that only the minimum necessary changes are made to integrate the new functionality, following Kent Beck’s principle:  
“Make the change easy, then make the easy change.”



https://twitter.com/KentBeck/status/250733358307500032
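As an illustration of the two steps, here is a small, invented Python example (the function names and the CSV/JSON scenario are mine, not from the article): the preparation step extracts the output format into a seam without changing behavior, so the implementation step, adding JSON output, becomes a one-line change.

```python
# Hypothetical two-step development example (all names are illustrative).
# Step 1 (preparation): refactor so the output format becomes a seam,
# without changing behavior. Step 2 (implementation): the new feature
# (JSON output) slots into that seam trivially.

import csv
import io
import json


def format_rows_csv(rows):
    # Extracted during the preparation step; behavior is unchanged.
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerows(rows)
    return buffer.getvalue()


def format_rows_json(rows):
    # The "easy change": a new formatter plugs into the existing seam.
    return json.dumps([list(row) for row in rows])


def export_report(rows, formatter=format_rows_csv):
    # Before the preparation step, CSV formatting was inlined here.
    return formatter(rows)
```

Because each step is small and behavior-preserving (until the final, trivial change), both steps can be committed and deployed independently.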

YAGNI (You Aren't Gonna Need It)

Closely related to the above point, the YAGNI principle reminds us that many ideas we come up with will likely never be needed. It encourages us to focus only on what we need *now* and helps us avoid speculative design, keeping us focused on what is truly relevant at the moment.  

Even when we identify something that might be needed in the near future, YAGNI prompts us to question whether it is truly essential for current needs, reminding us to postpone it. If the system is simple and easy to evolve, it will be easy to introduce those changes later.

Test-Driven Development (TDD) and Outside-In TDD 

Test-Driven Development (TDD) is a practice that involves writing a test first to define the desired behavior of a functionality, before writing the code to implement it. From there, the developer writes the minimum code necessary to pass the test, followed by a refactoring process to improve the code design without changing its behavior. This cycle is repeated continuously, ensuring that every line of code has a clear and defined purpose, avoiding unnecessary or superfluous code.  

Outside-In TDD is a variation of TDD that starts from the broadest business use cases and works its way inward to the system's implementation. By starting from business needs and writing only the code necessary to pass each test at each level (from the highest functional level to the individual pieces of code), this approach ensures that only essential code is created. It prevents unnecessary code or features that are not currently required, avoiding speculative design and adhering to the YAGNI principle.

In our team, we use Outside-In TDD as the default workflow for all new code, except in cases where this flow isn’t beneficial (e.g., spikes, complex algorithms, etc.). This means that approximately 5-10% of the code may be experimental for learning purposes, which is discarded afterward and typically isn’t tested. Another 10% corresponds to tasks where tests are written afterward (e.g., library integrations or complex algorithms). The remaining majority of the code is developed using Outside-In TDD.  

This approach minimizes waste and inherently adheres to the YAGNI principle since no code or design is created that doesn’t align with the current increment. As the current increment is defined through radical vertical slicing, we work in small steps, with minimal waste, and make decisions as late as possible.

An additional advantage of this process is that it facilitates quick error resolution, both in code and design, as progress is constantly verified step by step. When an error is detected, it is most likely in the last test or the last change made, allowing for quick and stress-free recovery.
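As a minimal illustration of the cycle (the discount example is invented, not from any team mentioned here), one TDD micro-iteration might look like this in Python: the test is written first to state the desired behavior, and only the minimum code needed to pass it is then written.

```python
# A minimal sketch of one TDD micro-cycle (the example is invented).
# 1. Red: write a failing test that states the desired behavior.
# 2. Green: write the minimum code to make it pass.
# 3. Refactor: improve the design with the tests as a safety net.

import unittest


def apply_discount(price, percentage):
    # Minimum code that makes the tests below pass; nothing speculative.
    if not 0 <= percentage <= 100:
        raise ValueError("percentage must be between 0 and 100")
    return round(price * (1 - percentage / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_applies_percentage_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percentage(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest`. In Outside-In TDD the first test would start one level further out, at the business use case, with the same discipline of writing only the code each test demands.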

Continuous Integration (Trunk-Based Development)

If there is one technical practice that forces and helps us work in small steps, with constant feedback, enabling us to decide as late as possible while learning and adapting at maximum speed, it’s Continuous Integration (CI).  

First, it’s important to clarify that Continuous Integration is an XP practice in which all team members integrate their code into a main branch frequently (at least once a day). In other words, this practice is equivalent to Trunk-Based Development, where there is only one main branch on which all developers make changes (usually in pairs or teams).  

This practice has nothing to do with running automated tests on feature branches. In fact, I would say it is directly incompatible with working on separate branches for each functionality.  

Unfortunately, this approach is not the most common in the industry, but I can assure you that, along with TDD, it is one of the practices that has the most impact on teams. In every team I’ve worked with, the introduction of Continuous Integration/TBD has caused a spectacular change. It has forced us to work in very small (but safe) steps, giving us the agility and adaptability we sought.  

Of course, like any practice, it requires effort and the learning of tactics to frequently deploy to production without showing incomplete functionalities to the user. It’s necessary to master strategies that separate deployment (technical decision) from the release to users (business decision). The most common strategies are:  
  • Feature toggles: Allow features to be turned on or off, perform A/B testing, or show new features only to certain clients (internal, beta testers, etc.).  
  • Gradual deployment: Methods like canary releases or ring deployments allow for a progressive rollout of changes.  
  • Dark launches: Launch a feature without making it visible to the client, only to perform performance or compatibility tests.  
  • Shadow launches: Run a new algorithm or process in parallel with the old one, but without showing results to the end user.
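A feature toggle can be as simple as a lookup that separates deployment from release. The sketch below is deliberately minimal and invented (production teams often use a toggle service instead of an in-memory map), but it shows the idea: the new code path is deployed for everyone while being released only to selected groups.

```python
# A minimal, illustrative feature-toggle sketch (not a real toggle service).
# Deployment is a technical decision: the new code path ships dark.
# Release is a separate, reversible business decision per user group.

class FeatureToggles:
    def __init__(self, enabled_for=None):
        # Maps feature name -> set of user groups it is enabled for.
        self._enabled_for = enabled_for or {}

    def is_enabled(self, feature, user_group="everyone"):
        groups = self._enabled_for.get(feature, set())
        return "everyone" in groups or user_group in groups


toggles = FeatureToggles({"new_checkout": {"internal", "beta_testers"}})


def checkout(user_group):
    if toggles.is_enabled("new_checkout", user_group):
        return "new checkout flow"   # deployed, released only to some groups
    return "old checkout flow"       # deployed and released to everyone else
```

Flipping the toggle (or removing it once the rollout is complete) requires no new deployment, which keeps each production deploy small and low-risk.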

Evolutionary Design  

This central XP practice allows us to develop software incrementally, continuously refactoring the design so it evolves according to business needs. In practice, it involves creating the simplest possible design that meets current requirements and then evolving it in small steps as we learn and add new functionalities.

Within evolutionary design, tactics include:  
  • Two-step development.
  • Continuous refactoring in the TDD cycle.  
  • Opportunistic refactoring.  
  • Avoiding premature abstractions (see: https://www.eferro.net/2017/02/applying-dry-principle.html).  
  • Parallel changes to keep tests green while making multi-step changes.  
  • Branch by abstraction and the Expand/Contract pattern to facilitate parallel changes.

It’s important to note that beyond the tactics you use to guide the design in small steps, it’s essential to develop a sense of design within the team. None of these practices alone teach object-oriented design. Therefore, the team must not only learn to make incremental design changes but also acquire a deep understanding of object-oriented design principles.

Differentiated Evolutionary Design

In general, in my teams, we always try to work in small steps, focusing on what we need at the moment and letting new needs guide changes in architecture and design. At the same time, we recognize that the ease of evolution and the friction generated by change depend heavily on the type of code being affected. We know that modifying code that implements business rules, an internal API between teams, or a customer-facing API are not the same in terms of friction.


Each of these cases involves varying degrees of friction to change (i.e., different levels of ease of evolution). Therefore, we apply a differentiated evolutionary design approach based on the type of code.  

For code with higher friction to change, such as a customer-facing API, we dedicate more time to a robust design that allows for evolution without requiring frequent changes. Conversely, for internal business logic code that is only used in specific cases, we adopt a more flexible evolutionary approach, allowing the design to emerge naturally from the development process.



Other Tactics and Practices

Of course, these are not the only tactics and practices to consider, but I do believe they are the ones that help us the most. Here are some additional tips and heuristics that, while not full-fledged practices in themselves, contribute to decision-making and generally make it easier to work in small steps and postpone decisions as much as possible:  

  • Prioritize libraries over frameworks to avoid locking in options and maintain greater flexibility.  
  • Focus on making code usable (and understandable) rather than reusable, unless your business is selling libraries or components to other developers.  
  • Use solid, “boring” technology that is widely accepted by the community.  
  • Create thin wrappers over external components/libraries to clearly define which parts of a component are being used and to facilitate testing. You can learn more about this approach at https://www.eferro.net/2023/04/thin-infrastructure-wrappers.html.  
  • Separate infrastructure from business code through Ports and Adapters or another architecture that clearly differentiates them.  
  • Apply evolutionary architecture, starting with a minimal architecture and adapting it to business needs, postponing hard-to-reverse decisions as much as possible.
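As a sketch of the thin-wrapper idea (the client class and URL below are invented for illustration), business code depends on one narrow interface that names exactly the operation we use, rather than on the external library directly, which also makes it trivial to substitute a fake in tests:

```python
# Illustrative thin wrapper over an external component (urllib here).
# The wrapper names the ONE operation this codebase actually needs,
# and a test double with the same narrow interface replaces it in tests.

import json
import urllib.request


class HttpJsonClient:
    """Thin wrapper: the only HTTP operation our code depends on."""

    def get_json(self, url, timeout=5):
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return json.load(response)


class FakeHttpJsonClient:
    """Test double exposing the same narrow interface."""

    def __init__(self, canned_responses):
        self._responses = canned_responses

    def get_json(self, url, timeout=5):
        return self._responses[url]


def fetch_user_name(client, user_id):
    # Business code talks to the wrapper, not to urllib directly.
    data = client.get_json(f"https://api.example.com/users/{user_id}")
    return data["name"]
```

Because the wrapper is thin and narrow, swapping the underlying HTTP library later is a local change, which is exactly the kind of decision we want to be able to postpone.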

Conclusions

In software development, the key lies in adopting a conscious approach to our decisions, working in small, low-risk steps, and focusing solely on what we need now. Simplicity and clarity must be priorities to maximize efficiency and minimize waste.  

The practices of eXtreme Programming (XP), together with the principles of Lean Software Development, provide us with a clear guide to avoid waste and over-engineering. Understanding that we cannot predict the future with certainty, we focus on building systems that are easy to understand and evolve, avoiding unnecessary complexity. Working this way means steering clear of oversized or highly configurable solutions, which often become obstacles to system evolution.  

Ultimately, it’s about being humble: acknowledging that we don’t have all the answers and that the only way to find the right solution is through experimentation and continuous learning. In short, simplicity, agility, and responsiveness are fundamental to developing software effectively in an ever-changing environment.  

If I had to choose the techniques and practices that have the greatest impact on my teams for working in small, safe steps and postponing decisions, I would say they are:  
  • Vertical slicing
  • Continuous Integration / Trunk-Based Development  
  • TDD (Test-Driven Development)  
All with a constant focus on simplicity.  

Each of the practices and tactics mentioned in this article is broad and could be explored in greater depth. I would love to know if there is interest in delving into any of them further, as it would be enriching to explore in greater detail those that are most useful or intriguing to readers.  

Friday, January 03, 2025

Decide as Late as Possible: Product Limits

 Translated from the original article in Spanish https://www.eferro.net/2024/07/decidir-lo-mas-tarde-posible-limites-de.html

As we mentioned in the previous article in the series on Lean Software Development, we will continue exploring techniques that allow us to make decisions as late as possible.

We begin by systematically defining Product Limits.

When developing an increment of the solution we are implementing, it is essential to establish concrete limits on all parameters that might introduce complexity. This allows us to focus on what provides value now and postpone more sophisticated solutions, avoiding additional costs and complexity. Over time, these limits will evolve and force us to modify the solution, but this approach allows us to delay each decision and avoid the cost of developing and evolving that more complex solution until absolutely necessary.

It is crucial that when defining a limit, it is incorporated into the code or solution so that, if exceeded, the application behaves in a controlled manner, alerting the team and possibly the user.

Examples of Limits I Have Used:

  • Total number of customers/users.
  • Number of concurrent customers/users.
  • Maximum file sizes that can be uploaded to the system.
  • Quotas per user (storage, requests, number of entities, etc.).
  • Numeric values for any business concept (in the problem domain).
  • Response times for various requests.
  • Resolutions/Devices for the UI.

If we do not clearly define these limits numerically, we open the door to speculative design to address situations we are not yet facing.
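A minimal Python sketch of how a limit can live in the code, as described above. The names (`MAX_TOTAL_USERS`, `ALERT_AT_USERS`, `ProductLimitExceeded`) and figures are illustrative, not from any specific project: crossing the hard limit fails in a controlled way, and a lower alert threshold warns the team before the wall is hit.

```python
import logging

logger = logging.getLogger("limits")

MAX_TOTAL_USERS = 2_000  # hard product limit, agreed with the team
ALERT_AT_USERS = 1_500   # start worrying well before hitting the wall


class ProductLimitExceeded(Exception):
    """Raised when a documented product limit is crossed."""


def check_user_limit(current_users: int) -> None:
    if current_users > MAX_TOTAL_USERS:
        # Controlled failure: the application refuses to degrade silently.
        raise ProductLimitExceeded(
            f"{current_users} users exceeds the {MAX_TOTAL_USERS} limit"
        )
    if current_users >= ALERT_AT_USERS:
        # Early warning: time to start thinking about the next solution.
        logger.warning("approaching user limit: %d/%d",
                       current_users, MAX_TOTAL_USERS)
```

The key design choice is that the limit is explicit and enforced, so the team is alerted when the postponed decision finally becomes necessary.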

Examples of Using Limits to "Decide as Late as Possible"

On several occasions, knowing and defining the total number of customers has allowed me to offer very simple persistence solutions that remained useful for months before requiring changes. For example, if the number of users is small, file-based persistence with the information loaded into memory is perfectly feasible. It also allows us to use solutions like SQLite and postpone the decision to introduce a separate database engine.


By limiting the size of requests (in terms of volume) and defining connection scenarios, we can offer robust and simple solutions, processing requests synchronously. This postpones the need for an asynchronous solution. For example, on one occasion, we needed to allow users to upload files to the system; the initial implementation only allowed very small files. This enabled us to create a simple implementation and obtain very quick feedback (in less than 2 days). A few weeks later, once we saw that the functionality made sense, we improved the implementation to support larger files.
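A hedged sketch of that kind of initial implementation: a synchronous handler with a hard-coded size limit. The names and the 512 KB figure are illustrative, and the in-memory `storage` dict stands in for real blob storage.

```python
MAX_UPLOAD_BYTES = 512 * 1024  # initial product limit: files up to 512 KB


def handle_upload(filename: str, payload: bytes, storage: dict) -> str:
    """Synchronous upload, viable while files stay small. Larger files
    would force streaming or an async pipeline; we postpone all that."""
    if len(payload) > MAX_UPLOAD_BYTES:
        # Controlled rejection instead of timeouts or memory pressure.
        return "error: file exceeds the current size limit"
    storage[filename] = payload  # stand-in for real blob storage
    return "ok"
```

Only weeks later, once the feature proved its value, would it be worth replacing this with an implementation that supports larger files.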

In several situations where each user accumulated storage (files/objects), defining a product limit for the total storage for all users, another limit for each user, and another limit to indicate when we needed to start worrying about this issue helped us postpone implementing any control and management measures for this storage until one of the defined limits was reached.

To illustrate the systematic use of these limits with a concrete example: at Alea Soluciones, we launched a new product for managing and controlling a fiber network and end customers' routers in less than 3 months. We knew our clients at the time had no more than 1,000–2,000 users. We knew no more than 2–3 people operated the management system concurrently. We also knew that to gain more users, our clients often had to deploy fiber to the home, or at least visit the user's home to install the router, meaning growth was limited to 2–5 users per week. With this context, the initial version of the management system was a web server that processed everything synchronously, storing user information in memory and persisting changes to a file as needed. This allowed us to allocate much more time to other system components (integration with fiber headends, remote router configuration, router monitoring, etc.). Of course, the system evolved and we improved it, but we always waited until the last responsible moment for each decision to introduce new technology.

Another simple example comes from ClarityAI, where, to create a chat-ops tool in Slack offering some internal platform capabilities, we defined certain limits, both on maximum response times and on the volume of information processed. By defining a high maximum response time (2 s), but below the time Slack allows for synchronous responses to commands (3 s), we were able to postpone implementing an asynchronous solution for quite some time. This application handles information about technical inventory components (code repositories, Docker repositories, etc.) as well as teams and people. We saw it was easy to define a maximum for each of these elements, and in every case the maximum was below 1,000 items. These limits allowed us to avoid significant complexity, simply relying on a NoCode backend (Airtable) as the database, which also provided a basic administration frontend. We know perfectly well that when we exceed these limits we will have to consider a more sophisticated and scalable solution, but postponing that decision has allowed us to keep developing this application very quickly for over two and a half years.
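One possible way to keep such a response-time limit visible in the code is a small timing decorator. This is a generic sketch, not the actual ClarityAI implementation; `RESPONSE_BUDGET` and `list_repositories` are hypothetical names, and the handler body stands in for a real lookup in the NoCode backend.

```python
import logging
import time

logger = logging.getLogger("chatops")

RESPONSE_BUDGET = 2.0  # seconds; below Slack's 3 s synchronous window


def timed_handler(handler):
    """Run a synchronous command handler and warn when the response-time
    product limit is exceeded, signaling that the moment to consider an
    asynchronous design may be approaching."""
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = handler(*args, **kwargs)
        elapsed = time.monotonic() - start
        if elapsed > RESPONSE_BUDGET:
            logger.warning("command took %.2fs, over the %.1fs limit",
                           elapsed, RESPONSE_BUDGET)
        return result
    return wrapper


@timed_handler
def list_repositories():
    # Stand-in for a real query against the inventory backend.
    return ["repo-a", "repo-b"]
```

As long as the warning never fires, the simple synchronous design keeps paying for itself; when it starts firing, the postponed decision is back on the table.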


Tuesday, December 31, 2024

Unraveling My Thought Journey: Discoveries from a Multi-Year Twitter Log via LLM

These days, it's common to reflect on the past year, but instead of doing that, I decided to try something different: an experiment. As I'm transitioning from Twitter (yes, I don’t think I’ll ever call it X) to Bluesky, I took the opportunity to back up all my tweets since I joined Twitter in 2007. My goal? To experiment and see if I could uncover anything interesting using a large language model (LLM).

After a bit of cleanup, I organized my tweets into two files: one in chronological order and another sorted by engagement (likes + retweets). I then uploaded both files to Google's NotebookLM and started testing various prompts to see what insights I could extract.

I hypothesized that with tweets spanning back to 2007, it should be possible to trace the evolution of my thinking. To my delight, the results exceeded my expectations.

I’ll share the outcome of the experiment here:

My Thinking Evolution Based on My Tweets

Here is a summary of my thinking evolution, as reflected in my tweets:

Key Themes:

  • Technical Exploration: I am passionate about technology and have always enjoyed exploring new tools and techniques. My early tweets reflect this, as I often share links to interesting resources and discuss my experiences with various programming languages and technologies.
  • Python Enthusiast: I have a particular fondness for Python, which has been a constant throughout my tweets. I frequently share Python-related content and participate in Python communities.
  • Shift Towards Agile and Lean: Over time, my focus has expanded beyond technical skills to encompass broader software development principles. I have become increasingly interested in Agile methodologies, DevOps practices, and Lean Software Development. This is evident in my tweets, as I often share content related to these topics and discuss my experiences applying them.
  • Focus on Software Craftsmanship: I value writing high-quality, maintainable code. My tweets often reflect my interest in topics like TDD, refactoring, and clean code. I believe in continuous improvement and strive to become a better software developer.
  • Lifelong Learning: I am a firm believer in lifelong learning and am always eager to expand my knowledge. My tweets often feature links to interesting talks, podcasts, and books that I have found valuable.
  • Social and Personal Insights: While technology is a major focus, my tweets also offer glimpses into my personal life and social views. I occasionally comment on current events, share personal anecdotes, and reflect on work-life balance.


My Thinking Evolution:

  • Early Exploration (2009-2010): In the early days of my Twitter journey, my focus was primarily on exploring various technologies and tools. I was experimenting with different programming languages and approaches, seeking efficient solutions to technical challenges. My tweets from this period reveal a hands-on, problem-solving mindset.
  • Embracing Agile (2011-2013): As I gained more experience, I began to recognize the limitations of purely technical approaches to software development. I became increasingly drawn to Agile methodologies, seeing their potential to foster collaboration, iterative development, and value-driven delivery. My tweets from this period reflect my growing interest in Scrum, TDD, and related practices.
  • Expanding to DevOps and Lean (2014-Present): My understanding of software development continued to evolve, encompassing DevOps practices and Lean Software Development principles. I became fascinated by the idea of streamlining the entire software development lifecycle, from coding to deployment and operations. My tweets from this period show my enthusiasm for topics like continuous delivery, automation, flow, and minimizing waste.
  • Focus on Outcomes and Impact (Recent Years): In recent years, my perspective has matured to prioritize the outcomes and impact of software development over simply churning out features. I am more conscious of the need to deliver real value to users and businesses, minimize waste, and focus on building the right things. This shift in thinking is apparent in my tweets, which often emphasize the importance of user-centric design, data-driven decision-making, and measuring the impact of our work.


Overall, my thinking evolution, as documented in my tweets, demonstrates a journey from a technically focused programmer to a more well-rounded software development professional with a strong grasp of Agile, DevOps, and Lean principles. I am passionate about continuous learning, striving for excellence in my craft, and ultimately, delivering valuable and impactful software.



Monday, December 30, 2024

Conversations on Development, Product, and Impact

I want to share a brief conversation in question-and-answer format in which I review my professional career and reflect on the importance of reducing software's Basal Cost. These notes, prepared before recording the podcast "Tenemos que hablar de producto", summarize my view of the industry and how I try to generate a positive impact both on the business and on users' lives. I hope you find something valuable in these reflections, whether you are an experienced developer or someone just starting out with a product focus.

1. Professional Career

Q: How did you start your career in technology, and what initially drew you to software development?

A: My first steps were in the mid-80s with a ZX Spectrum computer. From the start, I was fascinated by the idea of being able to create things through a programming language. I decided to dedicate myself to it because I am very curious and, I think, fairly creative; to me, computers were pure magic. Over time, I made professional decisions focused on learning and understanding how things really work: Linux, object-oriented programming, startups, the cloud, technical and organizational scalability, among other topics.

Q: What would you say has been the most important lesson you have learned while facing challenges in your career?

A: That the hardest part is always people. I have also learned that the greatest waste in software development is building what is not needed. And, even worse, not daring to remove it afterwards out of fear or inertia. This negatively affects both product quality and the work environment.


2. The Developer's Role in Product Companies

Q: In your opinion, what is the essential role of a developer in a product company?

A: I believe the essential role of a developer is to solve problems or deliver value to the user in a way that also benefits the business. This means doing it with the least amount of software and effort possible, working in small increments and seeking constant feedback. Ideally, the developer is also involved in identifying problems and opportunities.

Q: Beyond the technical ones, what skills does a developer need to truly add value in a product environment?

A: What we usually call *soft skills* are crucial. Very briefly:

  • Collaboration: knowing how to communicate, empathize, and understand both customers and colleagues.
  • Continuous learning: understanding the business, proposing better solutions, and adapting to new teams and technologies.

Q: How do you see the importance of practices like DevOps and Continuous Delivery in building scalable, sustainable products?

A: They are essential for working in small increments and getting constant feedback. The central purpose of DevOps and Extreme Programming (XP) is to make Continuous Delivery efficient and viable. This lets us experiment, validate ideas, and adapt quickly, following the principles of Lean Software Development and Lean Startup, popularized by people like Mary Poppendieck and Eric Ries.

3. The Basal Cost of Software

Q: Could you briefly explain the concept of software's "Basal Cost" and how it affects a team's ability to innovate?

A: The Basal Cost of software is the ongoing cost a piece of software generates simply by existing. This includes maintenance, the complexity it adds to the system, and the cognitive load it places on the team. Many people compare software to constructing a building that then remains unchanged, but software is more like a garden: it grows, changes, and needs constant care. Keeping irrelevant functionality becomes dead weight that limits the team's capacity to innovate.

Q: What key practices do you recommend for minimizing basal cost in a long-term software project?

A:

  • Apply Lean Software Development and Lean Product Development principles, focusing on achieving maximum impact with the smallest possible solution (less code and less effort).
  • Adopt Extreme Programming (XP) technical practices, such as Outside-in TDD, to write only the necessary code and guarantee high quality in the face of future changes.

Q: How do you think basal cost influences decisions about maintaining or removing features?

A: It should have a significant impact, but it is often overlooked. Companies tend to avoid removing old functionality out of fear or inertia, even when it no longer adds value. A conscious product approach periodically evaluates each feature to justify its existence. If something generates neither return nor learning, it is better to retire it and reduce the load on the team and the system.

4. What Business Leaders (CEOs and Product) Should Keep in Mind When Communicating with Technology

Q: What is the most important thing a CEO or product leader should understand about software development in order to communicate better with technical teams?

A:

  1. Understand value holistically: it is not only about increasing returns, but also about protecting them and avoiding unnecessary costs.
  2. Think of software as something alive that evolves continuously, not as a building that is constructed once and then remains static.
  3. Recognize the Basal Cost of software and learn to manage it strategically.
  4. Value the technical team as a key ally in product decisions, especially when developers adopt a Product Engineer mindset.

Q: What metrics or indicators would you recommend business and technology review together to ensure development that is sustainable and aligned with the objectives?

A:

  • Lean metrics such as Lead time and Cycle time, plus the amount of rework (the extra effort required due to errors, problems, or poor decisions).
  • Time from the generation of an idea to the first real feedback.
  • Business metrics that are understandable and accessible to the technical team.
  • DORA metrics to assess the health of the engineering process: deployment frequency, time to restore service, change failure rate, etc.


Conclusions and Final Advice

For business leaders: my recommendation is to learn Lean (Lean Startup, Lean Product Development, and Lean Software Development) and adopt its principles. This will make you more efficient and sustainable when pursuing value and managing teams.

For developers: always remember that technology is a means, not an end. The focus must be on the impact we generate for the business, but in a way that is sustainable over the long term: Build the right thing, and build the thing right.


Finally, for everyone: the hard part is always people and collaboration. Let's invest in improving that, because it is what really makes the difference. Let's make it count!