Tuesday, December 31, 2024

Unraveling My Thought Journey: Discoveries from a Multi-Year Twitter Log via LLM

These days, it's common to reflect on the past year, but instead of doing that, I decided to try an experiment. As I'm transitioning from Twitter (yes, I don’t think I’ll ever call it X) to Bluesky, I took the opportunity to back up all my tweets since I joined Twitter in 2007. My goal? To see whether I could uncover anything interesting using a large language model (LLM).

After a bit of cleanup, I organized my tweets into two files: one in chronological order and another sorted by engagement (likes + retweets). I then uploaded both files to Google's NotebookLM and started testing various prompts to see what insights I could extract.

I hypothesized that with tweets spanning back to 2007, it should be possible to trace the evolution of my thinking. To my delight, the results exceeded my expectations.

I’ll share the outcome of the experiment here:

My Thinking Evolution Based on My Tweets

Here is a summary of my thinking evolution, as reflected in my tweets:

Key Themes:

  • Technical Exploration: I am passionate about technology and have always enjoyed exploring new tools and techniques. My early tweets reflect this, as I often share links to interesting resources and discuss my experiences with various programming languages and technologies.
  • Python Enthusiast: I have a particular fondness for Python, which has been a constant throughout my tweets. I frequently share Python-related content and participate in Python communities.
  • Shift Towards Agile and Lean: Over time, my focus has expanded beyond technical skills to encompass broader software development principles. I have become increasingly interested in Agile methodologies, DevOps practices, and Lean Software Development. This is evident in my tweets, as I often share content related to these topics and discuss my experiences applying them.
  • Focus on Software Craftsmanship: I value writing high-quality, maintainable code. My tweets often reflect my interest in topics like TDD, refactoring, and clean code. I believe in continuous improvement and strive to become a better software developer.
  • Lifelong Learning: I am a firm believer in lifelong learning and am always eager to expand my knowledge. My tweets often feature links to interesting talks, podcasts, and books that I have found valuable.
  • Social and Personal Insights: While technology is a major focus, my tweets also offer glimpses into my personal life and social views. I occasionally comment on current events, share personal anecdotes, and reflect on work-life balance.


My Thinking Evolution:

  • Early Exploration (2009-2010): In the early days of my Twitter journey, my focus was primarily on exploring various technologies and tools. I was experimenting with different programming languages and approaches, seeking efficient solutions to technical challenges. My tweets from this period reveal a hands-on, problem-solving mindset.
  • Embracing Agile (2011-2013): As I gained more experience, I began to recognize the limitations of purely technical approaches to software development. I became increasingly drawn to Agile methodologies, seeing their potential to foster collaboration, iterative development, and value-driven delivery. My tweets from this period reflect my growing interest in Scrum, TDD, and related practices.
  • Expanding to DevOps and Lean (2014-Present): My understanding of software development continued to evolve, encompassing DevOps practices and Lean Software Development principles. I became fascinated by the idea of streamlining the entire software development lifecycle, from coding to deployment and operations. My tweets from this period show my enthusiasm for topics like continuous delivery, automation, flow, and minimizing waste.
  • Focus on Outcomes and Impact (Recent Years): In recent years, my perspective has matured to prioritize the outcomes and impact of software development over simply churning out features. I am more conscious of the need to deliver real value to users and businesses, minimize waste, and focus on building the right things. This shift in thinking is apparent in my tweets, which often emphasize the importance of user-centric design, data-driven decision-making, and measuring the impact of our work.


Overall, my thinking evolution, as documented in my tweets, demonstrates a journey from a technically focused programmer to a more well-rounded software development professional with a strong grasp of Agile, DevOps, and Lean principles. I am passionate about continuous learning, striving for excellence in my craft, and ultimately, delivering valuable and impactful software.



Monday, December 30, 2024

Conversations on Development, Product, and Impact

I want to share a short conversation in question-and-answer format, in which I review my professional career and reflect on the importance of reducing the Basal Cost of software. These notes, prepared before recording the podcast "Tenemos que hablar de producto", summarize my view of the industry and how I try to have a positive impact both on the business and on users' lives. I hope you find something valuable in these reflections, whether you are an experienced developer or someone just starting out with a product focus.

1. Professional Career

Q: How did you start your career in technology, and what initially attracted you to software development?

A: I took my first steps in the mid-80s with a ZX Spectrum computer. From the start I was fascinated by the idea of being able to create things through a programming language. I decided to devote myself to it because I am very curious and, I think, also fairly creative; to me, computers were pure magic. Over time, I made career decisions focused on learning and understanding how things really work: Linux, object-oriented programming, startups, the cloud, technical and organizational scalability, among other topics.

Q: What would you say has been the most important lesson you learned from facing challenges in your career?

A: That the hardest part is always the people. I have also learned that the biggest waste in software development is building what is not needed. And, even worse, not daring to remove it later out of fear or inertia. This hurts both the quality of the product and the work environment.


2. The Developer's Role in Product Companies

Q: In your opinion, what is the essential role of a developer in a product company?

A: I believe the essential role of a developer is to solve problems or deliver value to the user in a way that also benefits the business. This means doing it with the least amount of software and effort possible, working in small increments and constantly seeking feedback. Ideally, the developer is also involved in identifying problems and opportunities.

Q: Besides technical skills, what skills does a developer need to really add value in a product environment?

A: What we usually call *soft skills* are crucial. Very briefly:

  • Collaboration: knowing how to communicate, empathize, and understand both customers and colleagues.
  • Continuous learning: understanding the business, proposing better solutions, and adapting to new teams and technologies.

Q: How do you see the importance of practices like DevOps and Continuous Delivery in building scalable, sustainable products?

A: They are essential for working in small increments and getting constant feedback. The central purpose of DevOps and Extreme Programming (XP) is to make Continuous Delivery efficient and viable. This lets us experiment, validate ideas, and adapt quickly, following the principles of Lean Software Development and Lean Startup, popularized by people like Mary Poppendieck and Eric Ries.

3. The Basal Cost of Software

Q: Could you briefly explain the concept of the "Basal Cost" of software and how it affects a team's ability to innovate?

A: The Basal Cost of software is the ongoing cost a piece of software generates simply by existing. This includes maintenance, the complexity added to the system, and the cognitive load on the team. Many people compare software to constructing a building that then remains unchanged, but software is more like a garden: it grows, changes, and needs constant care. Keeping irrelevant functionality becomes a drag that limits the team's ability to innovate.

Q: What key practices do you recommend to minimize the basal cost in a long-term software project?

A:

  • Apply Lean Software Development and Lean Product Development principles, focusing on maximum impact with the smallest possible solution (less code and effort).
  • Adopt Extreme Programming (XP) technical practices, such as Outside-in TDD, to write only the code that is needed and to guarantee high quality in the face of future changes.
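
The test-first idea behind this practice can be sketched in a few lines. This is only an illustration of the TDD rhythm (the names Cart and total are invented for the example, not taken from the article): the test is written first and describes the behaviour from the outside, and the implementation exists only to make it pass.

```python
# TDD sketch: the test below is written FIRST and defines the expected
# behaviour; the production code is then the minimum needed to pass it.
# All names (Cart, add, total) are illustrative.

def test_cart_totals_items():
    cart = Cart()
    cart.add(price=10.0, quantity=2)
    cart.add(price=5.0)
    assert cart.total() == 25.0

class Cart:
    """Minimal implementation, driven into existence by the test above."""
    def __init__(self):
        self._items = []

    def add(self, price, quantity=1):
        self._items.append((price, quantity))

    def total(self):
        return sum(price * qty for price, qty in self._items)
```

Writing only the code the test demands is one concrete way to avoid accumulating unnecessary software, and therefore basal cost.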

Q: How do you think basal cost influences decisions about maintaining or removing features?

A: It should have a significant impact, but it is often overlooked. Companies tend to avoid removing old functionality out of fear or inertia, even when it no longer adds value. A conscious product approach periodically evaluates each feature to justify its existence. If something produces neither return nor learning, it is better to retire it to reduce the load on the team and the system.

4. Considerations for Business (CEO and Product) When Communicating with Technology

Q: What is the most important thing a CEO or product leader should understand about software development in order to communicate better with technical teams?

A:

  1. Understand value holistically: it is not only about increasing returns, but also about protecting them and avoiding unnecessary costs.
  2. Think of software as something alive that evolves continuously, not as a building that is constructed and then remains static.
  3. Recognize the Basal Cost of software and how to manage it strategically.
  4. Value the technical team as a key ally in product decisions, especially when developers adopt a Product Engineer mindset.

Q: Which metrics or indicators would you recommend that business and technology review together to ensure development that is sustainable and aligned with goals?

A:

  • Lean metrics such as Lead time and Cycle time, plus the amount of rework (extra effort required due to bugs, problems, or poor decisions).
  • Time from the generation of an idea to the first real feedback.
  • Business metrics that are understandable and accessible to the technical team.
  • DORA metrics to assess the health of the engineering process: deployment frequency, time to recover from failures, change failure rate, etc.
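
Two of the DORA metrics mentioned above are simple enough to compute from a deployment log. The sketch below is a toy illustration (the record format and numbers are invented), but it shows how little is needed to start tracking them:

```python
from datetime import datetime

# Toy deployment log; in practice this would come from your CI/CD system.
deploys = [
    {"at": datetime(2024, 12, 1), "failed": False},
    {"at": datetime(2024, 12, 3), "failed": True},
    {"at": datetime(2024, 12, 4), "failed": False},
    {"at": datetime(2024, 12, 8), "failed": False},
]

def deployment_frequency(deploys, days):
    """Average number of deployments per day over the observed window."""
    return len(deploys) / days

def change_failure_rate(deploys):
    """Fraction of deployments that caused a failure in production."""
    failed = sum(1 for d in deploys if d["failed"])
    return failed / len(deploys)
```

Even rough numbers like these, reviewed jointly by business and technology, make conversations about process health much more concrete.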


Conclusions and Final Advice

For business leaders: my recommendation is to learn Lean (Lean Startup, Lean Product Development, and Lean Software Development) and adopt its principles. This will make you more efficient and sustainable when pursuing value and managing teams.

For developers: always remember that technology is a means, not an end. The focus should be on the impact we create for the business, but in a way that is sustainable over the long term: Build the right thing, and build the thing right.


Finally, for everyone: the hard part is always people and collaboration. Let's invest in getting better at this, because it is what really makes the difference. Let's make it count!


Wednesday, December 25, 2024

Cloud Mindset: A Different Way of Thinking (Tech Pill)

When building software systems in the cloud, we must adopt a different perspective compared to architecting for on-premises data centers or legacy VPS environments. In traditional setups, capacity is fixed, requiring months of lead time to scale, which makes resources both expensive and scarce. The cloud, however, flips these limitations on their head by offering abundant, rapidly provisioned resources—reshaping how we think about infrastructure and application design.

Traditional Infrastructure: Limitations of the Past

  • Fixed capacity: Scaling up in on-premises environments or VPS setups can take months because it involves purchasing and installing new hardware.
  • Scarce resources: Businesses often invest in the bare minimum of hardware to minimize costs, leaving little room for flexibility.
  • High costs: Upfront hardware purchases and ongoing maintenance are expensive, with costs that must be amortized over time.


The Cloud Paradigm: A New Frontier

In the cloud, servers, storage, and databases feel virtually limitless. These resources can be spun up or down in minutes, allowing teams to adapt quickly to changing needs. This flexibility is both cost-effective and efficient. However, to fully leverage these benefits, we need to shift both our mindset and engineering practices.

Key Principles of the Cloud Mindset

1. Treat Resources as Disposable (Owning vs. Renting a Fleet of Cars)

In traditional IT environments, servers are treated like personally owned cars—carefully maintained, upgraded over time, and expected to last for years. In the cloud, the mindset shifts: servers resemble a fleet of rental cars—standardized, easy to replace, and requiring minimal upkeep. This approach highlights the importance of automation and uniform configurations. When a server or infrastructure component fails, it shouldn’t be manually repaired. Instead, it should be automatically replaced.

Recommended Reading: The History of Pets vs. Cattle and How to Use the Analogy Properly (In cloud architecture, servers are treated as commodities, often explained with the “cattle, not pets” analogy.)
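
The "replace, don't repair" idea can be captured in a toy simulation (this is not a real cloud API, just an illustration of the policy): every server comes from the same automated configuration, and a failed one is discarded and re-provisioned rather than fixed in place.

```python
import uuid

def provision_server():
    """Every server is identical, built from the same automated config."""
    return {"id": uuid.uuid4().hex[:8], "healthy": True}

class Fleet:
    """Toy 'cattle, not pets' fleet: unhealthy members are replaced."""
    def __init__(self, size):
        self.servers = [provision_server() for _ in range(size)]

    def health_check(self):
        # No manual repair: any unhealthy server is swapped for a fresh one.
        self.servers = [
            s if s["healthy"] else provision_server() for s in self.servers
        ]

fleet = Fleet(size=3)
fleet.servers[1]["healthy"] = False   # simulate a hardware failure
fleet.health_check()                  # the broken server is replaced
```

Real cloud platforms implement this loop for you (auto scaling groups, health checks), but the mental model is the same.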

2. Design for Failure

Failures are inevitable in cloud platforms, which run on commodity hardware distributed across multiple data centers and regions. Instead of trying to prevent every failure, embrace them by designing for resilience. Use redundancies, fault tolerance, and graceful degradation to ensure your application continues to operate when something breaks.

Key takeaway: Assume failure will happen and architect your system to recover quickly and minimize impact.
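
Two of the resilience techniques mentioned above, retries with backoff and graceful degradation, can be sketched in a few lines. This is a hedged illustration, not a production pattern library; the function names are invented:

```python
import time

def with_retries(operation, attempts=3, base_delay=0.1):
    """Retry a flaky operation, backing off exponentially between tries."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise          # exhausted: let the caller degrade gracefully
            time.sleep(base_delay * 2 ** attempt)

def recommendations(fetch, fallback):
    """Graceful degradation: serve a cached/default answer if the call fails."""
    try:
        return with_retries(fetch)
    except ConnectionError:
        return fallback
```

The point is architectural, not the code itself: the caller always gets an answer, even when a dependency is down.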


3. Define Infrastructure as Code (IaC)

Infrastructure as Code (IaC) tools, like Terraform or AWS CloudFormation, let you define and version-control your infrastructure. This approach makes provisioning fast, consistent, and repeatable. With IaC, you can test, review, and iterate on infrastructure changes the same way you do with application code.

Learn More: Immutable Infrastructure (Tech Pill)
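
To make "infrastructure as code" concrete, here is a deliberately tiny sketch: infrastructure described as data in code, so it can be diffed, code-reviewed, and versioned like any other source file. The resource names are made up for the example; real projects would write Terraform or CloudFormation templates directly.

```python
import json

def web_tier(instance_count):
    """Generate a minimal CloudFormation-style template as plain data."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            f"WebServer{i}": {
                "Type": "AWS::EC2::Instance",
                "Properties": {"InstanceType": "t3.micro"},
            }
            for i in range(instance_count)
        },
    }

# Because the template is just code, scaling from 2 to 3 servers is a
# one-line, reviewable change in version control.
template = json.dumps(web_tier(2), indent=2, sort_keys=True)
```
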

4. Take Advantage of Cloud Elasticity

Elastic scalability is one of the cloud’s biggest advantages. Instead of over-provisioning for occasional traffic spikes, you can scale up during peak loads and scale down when demand decreases. To do this effectively, design your applications for horizontal scaling—adding more instances rather than making existing ones bigger.
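
A horizontal scaling policy can be as simple as the rule sketched below (the capacity numbers are illustrative): compute how many instances the current load needs and clamp the result between configured bounds, exactly what a cloud autoscaler does on your behalf.

```python
import math

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=2, max_instances=20):
    """Scale out under load, scale in when demand drops, within bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))
```

For example, 950 requests/second would call for 10 instances, while quiet periods fall back to the 2-instance floor.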

5. Pay Per Use: Rent, Don’t Buy

The cloud’s on-demand pricing model means you only pay for what you use. This flexibility allows you to scale resources up or down based on demand, helping you adapt quickly to changing usage patterns. By spinning up resources during heavy loads and deprovisioning them when idle, you keep costs under control without compromising capacity.
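
A back-of-the-envelope comparison shows why this matters. The load profile and price below are invented for illustration: fixed capacity must be provisioned for the peak around the clock, while elastic capacity follows demand hour by hour.

```python
# 20 quiet hours needing 2 instances, 4 peak hours needing 10.
hourly_load = [2] * 20 + [10] * 4
price_per_instance_hour = 0.10   # illustrative price, not a real quote

# Fixed provisioning: pay for peak capacity every hour of the day.
fixed_cost = max(hourly_load) * len(hourly_load) * price_per_instance_hour

# Pay-per-use: pay only for the instances actually needed each hour.
elastic_cost = sum(hourly_load) * price_per_instance_hour
```

With these toy numbers the elastic bill is a third of the fixed one, without ever being under-provisioned at peak.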



The Bigger Picture

Understanding cloud services, APIs, and GUIs is just the tip of the iceberg when it comes to cloud adoption. The true transformation lies in embracing a fundamental shift in engineering culture and design philosophy. It’s about accepting new assumptions that define how we build and operate in the cloud:

  • Resources are limitless: Stop hoarding and start focusing on how to use resources effectively.
  • Failure is inevitable: Design for resilience from the outset instead of trying to avoid every possible failure.
  • Speed matters: Leverage automation, scripting, and repeatable processes to enable rapid experimentation and iteration.

A New Engineering Challenge

“Construct a highly agile and highly available service from ephemeral and assumed broken components.” (Adrian Cockcroft)

This challenge captures the essence of building in the cloud. Unlike traditional data centers, the cloud requires us to design for an environment where components are temporary and failure is expected. Adopting a true “cloud mindset” means rethinking old habits and fully leveraging the cloud’s unique characteristics to deliver robust, scalable, and cost-effective solutions.

Key Takeaways

In summary, building for the cloud means embracing four key principles:

  • Embrace disposability: Treat infrastructure as temporary and replaceable.
  • Design for failure: Build resilience into your system instead of trying to prevent every failure.
  • Automate everything: Use tools and processes that allow for speed and consistency.
  • Pay only for what you use: Take advantage of the cloud’s cost-efficiency by scaling dynamically.

By adopting these principles, you’ll create services that are highly available, scalable, and agile—perfectly aligned with the demands of modern business.


Thursday, December 19, 2024

Lean Software Development: Decide as Late as Possible

Translated from the original article in Spanish https://www.eferro.net/2024/06/lean-software-development-decidir-lo.html

Lean Software Development starts from the premise that software and digital product development is a constant learning exercise (see Amplify Learning). With this premise, it is clear that the more information we have before making a decision, the better its quality will be. Therefore, deciding as late as possible allows us to be more effective. At the same time, there is a cost (or risk) associated with not making a decision, which increases over time. For this reason, we should aim for the "last responsible moment", the optimal point for making a decision with the most information possible, without the cost of delay outweighing the potential benefit of obtaining more information by postponing it.


Advantages of Delaying Decisions and Keeping Options Open

Postponing decisions is a fundamental strategy in lean software development. Although it is not always easy and requires practice, this tactic enables the creation of sustainable and easy-to-evolve products. One of the main advantages of making decisions later is that it provides a better understanding of both the business and technology, which in turn facilitates more informed and accurate decision-making.

Additionally, delaying decisions leads to simpler and smaller solutions, reducing the effort required to implement them and avoiding unnecessary overhead. By keeping our options open, we focus only on what provides real value in the present, avoiding over-engineering and unnecessary work. This flexibility allows us to react quickly and effectively to any change without fear, which is crucial in the dynamic environment of software development.

Some specific advantages of delaying decisions include:

  • Less work and waste: Implementing only what is necessary at the moment reduces total work and waste.
  • Reduced effort in rework: If changes are needed, less effort is required because over-engineering has been avoided.
  • Greater flexibility and adaptability: Keeping options open enables us to adapt quickly to new requirements or changes in the environment.

A well-designed architecture allows delaying important decisions without compromising the product's quality or flexibility. This not only enables us to make better-informed decisions but also facilitates the creation of good architecture, which in turn allows the postponement of other critical decisions in the future. In short, this strategy allows us to move forward with less burden and greater agility, enhancing our ability to deliver continuous value.
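
One common way an architecture keeps a decision open is to hide it behind an interface. The sketch below (all names are illustrative) lets a team ship features today against a cheap in-memory implementation and choose the real database at the last responsible moment, swapping implementations without touching the calling code:

```python
from typing import Protocol

class OrderRepository(Protocol):
    """The port: callers depend on this, not on any concrete storage."""
    def save(self, order_id: str, total: float) -> None: ...
    def get(self, order_id: str) -> float: ...

class InMemoryOrders:
    """Cheap, easily reversible choice that postpones the database decision."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, total):
        self._orders[order_id] = total

    def get(self, order_id):
        return self._orders[order_id]

def checkout(repo: OrderRepository, order_id: str, total: float) -> float:
    # Business logic is written once; the storage decision stays open.
    repo.save(order_id, total)
    return repo.get(order_id)
```

When a PostgresOrders (or any other) implementation is finally chosen, only the wiring changes; the deferred decision never leaked into the domain code.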

Decision-Making

My teams are empowered, end-to-end teams that have full authority over how to solve problems and, in many cases, even over which problems to solve. In such teams, numerous decisions of all kinds are made daily—decisions about the problem, potential solutions, implementation of those solutions, and the prioritization of where to invest (e.g., solving an issue, reducing uncertainty about a technical or product decision, determining next increments, etc.).

For example, when starting a new product or feature, the level of uncertainty is usually very high. In these cases, we prioritize reducing that uncertainty by breaking down critical decisions and assumptions into smaller parts and investing in product increments to obtain feedback, thereby reducing the uncertainty.

As we receive feedback, we learn more and adjust our assumptions/hypotheses accordingly. This iterative process helps us move from great uncertainty to greater clarity and confidence.

By continuously validating assumptions, we reduce risks and make more informed decisions.



These decisions occur continuously at all levels. It is part of the nature of our profession.

If we had to classify the types of decisions to be made, we could do so as follows:

  • Decisions about the problem and opportunity to explore
    • What: Where we should invest and why.
    • How: What solution or implementation strategy we believe is appropriate for the selected case.
  • Decisions about technology and implementation
    • At the architecture and technologies level.
    • At the design level of the solution at various levels.

It is important to note that decisions are never independent; they must always consider the context and knowledge of the team, as well as the existing system or product. It is as if we see problems and opportunities as the difference in knowledge or implementation between what we know or have and what we want to achieve.

In other words, it is always about making small increments in the direction and with the objective we need. Sometimes, it is about delivering value to the user; other times, it is about reducing the uncertainty of a decision we need to make. Always, it is an increment in the software (product/system) or in the knowledge the team has.



Breaking Down Decisions to Address Them Individually

One of the main ways to increase our chances of postponing decisions is to break down large decisions into small decisions and clearly separate those that are easily reversible from those that are irreversible (or very difficult to reverse).

Therefore, working to decide as late as possible involves doing so in very small steps or increments, which is fully aligned with the Lean mindset (working in small batches).

This process of breaking down is a continuous practice that allows the team to address everything from where to invest in the next quarter to what to do in the next few hours. We should see it as a systematic way of thinking that is recurrently applied at all levels.

Conclusions

Deciding as late as possible is a key strategy in Lean Software Development that maximizes the quality of decisions by leveraging the most information available. This practice contributes to creating more sustainable, flexible, and adaptable products.

The main advantages are:

  • Reduction of waste: Focusing only on what is necessary minimizes redundant work and avoids over-engineering.
  • Greater flexibility: Keeping options open allows for rapid adaptation to environmental or requirement changes.
  • More informed decisions: Postponing decisions until the last responsible moment results in more accurate and effective decisions.
  • Increased adaptability: Facilitates the implementation of simple, small solutions, reducing the effort needed for changes and improvements.

It is essential that empowered teams adopt this mindset and become accustomed to breaking decisions into manageable steps. By doing so, they can incrementally reduce uncertainty, continuously validate their assumptions, and adjust strategies based on received feedback. This iterative approach reduces risks and strengthens the team's ability to deliver continuous value.

In upcoming articles, we will explore strategies and concrete examples of how to effectively postpone decisions, providing practical tools to implement this philosophy in your software development projects.


Sunday, December 15, 2024

Good talks/podcasts (Dec 2024 I)

These are the best podcasts/talks I've seen/listened to recently:
  • Adam Ralph - Finding your service boundaries — a practical guide - SCBCN 24 (Adam Ralph) [Architecture, Architecture patterns, Microservices] [Duration: 00:48] (⭐⭐⭐⭐⭐) This presentation is about identifying service boundaries in software architecture to avoid coupling and ending up with a "big ball of mud", even when using microservices. I recommend this talk because it provides practical advice on how to define services as technical authorities for specific business capabilities, leading to more maintainable and scalable systems.
  • AWS re:Invent 2024 - Dr. Werner Vogels Keynote (Werner Vogels) [Architecture, Engineering Culture, simplicity] [Duration: 01:50] This presentation explores the concept of "simplexity" - building and operating complex systems safely and simply, using lessons learned from 20 years of evolution at Amazon Web Services (AWS). The speaker emphasizes the importance of designing evolvable systems from the beginning and outlines six key lessons for managing complexity, including breaking down systems into smaller units, aligning organizations to architecture, and automating tasks that don't require human judgment. Numerous examples from AWS, such as the evolution of Amazon S3, CloudWatch, and Route 53, illustrate the practical application of these principles.
  • How to Deliver Quality Software Against All Odds GOTO 2024 (Dan North) [Agile, Continuous Delivery, Engineering Culture, XP] [Duration: 00:52] (⭐⭐⭐⭐⭐) This podcast features Daniel Terhorst-North, a prominent figure in the software development world, reflecting on 20 years of industry changes and sharing his insights on topics ranging from Agile and DevOps to product management and organizational flow. Drawing on his experiences at Thoughtworks and beyond, Terhorst-North highlights the importance of connecting business needs with technical implementation and emphasizes the value of building evolvable systems with "simplexity" in mind.
  • Microservices Retrospective – What We Learned (and Didn’t Learn) from Netflix (Adrian Cockcroft) [Architecture, Architecture patterns, Cloud, Microservices] [Duration: 00:55] This presentation offers a retrospective analysis of the speaker's experience implementing microservices at Netflix from 2007-2013, examining both the successes and the lessons learned along the way. The speaker discusses key aspects of Netflix's innovative approach, including their "extreme and agile" culture, early adoption of cloud technologies like AWS and Cassandra, and focus on developer freedom and responsibility. The presentation also highlights specific technical patterns and practices developed at Netflix, such as the use of service access libraries, lightweight serializable objects, and chaos engineering.
  • Team Topologies, Software Architecture & Complexity • James Lewis • GOTO 2022 (James Lewis) [Architecture, Engineering Culture, Microservices, team topologies] [Duration: 00:38] This presentation explores the intersection of team topologies, software architecture, and complexity science, arguing that successful organizational design and software development hinges on optimizing for flow and value delivery. The speaker, drawing on his experience with the evolution of microservices, advocates for embracing decentralization, limiting hierarchy, and leveraging social network structures to foster innovation and agility in growing organizations.

Reminder: All of these talks are interesting, even just listening to them.


Thursday, December 12, 2024

Lean Software Development: Amplify Learning

Translated from the original article in Spanish https://www.eferro.net/2024/05/amplificar-el-aprendizaje.html

In this new article in the series on Lean Software Development, we will focus on the nature of software development and the importance of optimizing our process for continuous learning.

The Nature of Software Development

Leaving aside the classic mistake we often make in the industry of comparing software development to building construction or manufacturing processes (see Construction engineering is NOT a good metaphor for software), the truth is that developing a software product is a process of adapting the solution to the user’s changing needs (or to our hypotheses about those needs); that is, a process of continuously evolving the software to meet those needs.

This evolution is continuous not only because our client’s needs evolve (strategy, business rules, processes) but also because the environment in which we operate changes constantly (SaaS, competitors, AI, evolution of devices). This evolution is part of the intrinsic nature of software since its great advantage is precisely the speed with which it can evolve and adapt. Software that is not evolving is dead software (either because no one is using it or because it has become obsolete).


Unlike production and manufacturing processes, where quality is conformity to established requirements, quality in software is meeting the needs of our clients. These needs change continuously, so the software must change in turn.

Therefore, we can see that at the heart of the nature of software development lies continuous and profound learning about our clients' (changing) problem or need and about the suitability of our solution to solve that problem or need.

Lean Software Development recognizes this nature of software development and considers it necessary to amplify learning to optimize the product development process.

"A mature organization focuses on learning effectively and empowers the people who do the work to make decisions." Mary Poppendieck

Amplify Learning

Learning is the cornerstone of effective software development. In the context of Lean Software Development, this principle is elevated as a fundamental guide for continuous improvement. Recognizing that knowledge is dynamic and that learning is an ongoing process is crucial for progress in an agile development environment.

However, this learning cannot be limited to specific people or roles; it must extend to the entire team, as we want the whole team to contribute, and we have already seen that learning is part of the nature of software development.

Amplifying learning involves not only understanding what the client wants but also discerning what they do not need. This discernment is critical, as building unnecessary features or functionalities is the biggest waste in software development. Therefore, the learning process must focus on the constant clarification of the client's real needs, avoiding waste and optimizing value delivery.

"The biggest cause of failure in software-intensive systems is not technical failure; it’s building the wrong thing." Mary Poppendieck

In summary, Lean Software Development recommends:

  • Recognizing continuous and constant learning as the fundamental bottleneck in software product development.
  • Learning from the client’s needs and problems, also identifying what they do not need.
  • Optimizing continuous learning by the entire team.

Lean Software Development suggests the following tools to enhance learning:

  • Feedback loops
  • Iterations
  • Synchronization and integration
  • Set-Based Development

Grounding the Amplification of Learning

In my 15 years of experience working with various teams, I have always considered learning a fundamental part of the nature of software development. My approach has been to foster the continuous and amplified learning promoted by Lean Software Development.

Below, I outline the tools I have used to amplify learning:

Empowered Product Teams

These teams, based on business needs and strategies, have the ability to decide which problem to solve and how to solve it. They are teams with a true product mindset, composed of Product Engineers who are not only interested in the client’s problem or need but also seek to understand it deeply and propose the best solutions.

As John Cutler aptly describes, these are known as Product Teams.

https://amplitude.com/blog/journey-to-product-teams-infographic

These Product Teams are responsible for understanding and learning about the client’s problems, using that learning to propose the best solutions. In these teams, learning is key, and they employ product discovery practices. In our specific case, team members take turns conducting user interviews, facilitating co-creation sessions, or providing support. All these sessions provide us with insights that are shared with the rest of the team, enabling us to make decisions about the next steps.

Although I am aware of product discovery techniques more advanced than those I’ve mentioned, I’ve never put them into practice. Thanks to the types of products we’ve been involved with (less visual products, sometimes internal, or aimed at technical profiles), we’ve been able to make a significant impact and gain valuable insights without sophisticated discovery practices.

Feedback Loops

We use Extreme Programming (XP) as a foundation, focusing on creating the smallest and most frequent feedback loops to optimize the development process:

  • Constant communication: At the mob or pair level (seconds).
  • Test-Driven Development (TDD): Short test and development cycles (minutes).
  • Acceptance tests: Rapid evaluation of functionalities (minutes).
  • Frequent deployments: Regular implementation of improvements (hours).
  • Daily planning: Reviewing and adjusting daily objectives (1 day).
http://www.extremeprogramming.org/introduction.html
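The innermost of these loops, TDD, can be sketched in its smallest form. This is only an illustrative example (the function and values are hypothetical), showing the red-green-refactor cycle that closes a feedback loop in minutes:

```python
# Hypothetical example: one turn of the TDD loop.
# Step 1 (red): write a failing test for the behavior we want.
def test_price_with_discount():
    assert price_with_discount(100, 0.25) == 75

# Step 2 (green): write the minimal code that makes it pass.
def price_with_discount(amount, discount):
    """Return the amount after applying a fractional discount."""
    return amount * (1 - discount)

# Step 3 (refactor): clean up while the test keeps us safe,
# then repeat the whole loop in cycles of minutes.
test_price_with_discount()
```

Each cycle produces a small piece of validated learning about the solution, which is exactly the point of making the loop as short as possible.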

Additionally, we complement these XP feedback loops with Continuous Deployment (CD) techniques, enhancing our ability to integrate and validate changes almost instantly.

"Extreme Programming is a team style of software development... enabling that by increasing feedback loops at every possible level... so if you get great feedback then you don't have to make good decisions because you can afford to just make a decision and you'll find out." Kent Beck

These feedback loops also occur through client feedback, where we can assess whether our hypotheses about client needs and the proposed solution are achieving the desired impact.

Iterations

Although we have moved away from the traditional approach of fixed iterations, we continue to optimize our continuous delivery flow. We frequently iterate on different parts of the product, always focusing on the most critical areas at any given time. For us, no feature or functionality is ever definitively complete; everything is constantly evolving, and it is common to revisit previously developed elements as needed. (See https://productdeveloper.net/iterative-incremental-development/ )

Synchronization and Integration

We use Continuous Integration and Continuous Delivery practices with a Trunk-Based Development approach. This ensures that the entire team maintains a complete and integrated view of the product daily, avoiding isolated code branches that persist for days or weeks. There is only one active version on the main branch, which minimizes waste, prevents conflicts, and ensures a shared vision.

This way of working allows the entire team to share the same vision of the code (the solution) at all times and prevents an individual or part of the team from isolating themselves with a different vision (and knowledge) for days while working on a functionality branch.

Set-Based Development

In Lean Software Development, the term Set-Based Development designates a methodology that prioritizes keeping multiple design options open throughout the development process. This approach enables the collection of as much information as possible, not only about a specific design element but also about how all elements integrate. Contrary to the tradition of making early design decisions based on the information available at that moment, set-based development favors deferring design decisions until more complete information is available, which is crucial for the development of high-quality, flexible software.

This method is based on the reality that, in software development, interactions between different components cannot be predicted with complete certainty until they are implemented and tested. Therefore, keeping design options open and avoiding definitive decisions until more data is available results in a more effective approach for managing complex, high-quality software projects.

In my teams, the practice of keeping design options open and postponing decisions until the maximum information is obtained is an obsession. I have even created a specific workshop to practice this methodology (see Lean Workshop: Postponing Decisions). The key to working with open options lies in emphasizing simplicity (XP), fostering evolutionary design, and proceeding in very small steps that allow us to make decisions at the last responsible moment. I will delve further into this topic in a full article dedicated to it in the series.

“Simplicity, or the art of maximizing the amount of work not done, is essential.”  Principle from the Agile Manifesto for Software Development
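To make the idea concrete, here is a minimal sketch (all names are hypothetical) of what keeping design options open can look like in code: two candidate implementations stay behind the same interface, the same scenario exercises both, and the choice is deferred until we have real information.

```python
# Illustrative sketch of set-based development: keep several
# candidate designs open behind one interface and defer the choice.
from typing import Protocol


class Store(Protocol):
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...


class InMemoryStore:
    """Option A: a plain dictionary, last write wins."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]


class AppendLogStore:
    """Option B: an append-only log, read back the latest entry."""
    def __init__(self) -> None:
        self._log: list[tuple[str, str]] = []

    def save(self, key: str, value: str) -> None:
        self._log.append((key, value))

    def load(self, key: str) -> str:
        for k, v in reversed(self._log):
            if k == key:
                return v
        raise KeyError(key)


def run_scenario(store: Store) -> str:
    # The same usage scenario exercises every open option,
    # producing the information we need to decide later.
    store.save("user:1", "alice")
    store.save("user:1", "alice-updated")
    return store.load("user:1")


# Both options remain viable; the decision waits until the last
# responsible moment, once real access patterns are known.
candidates = [InMemoryStore(), AppendLogStore()]
results = [run_scenario(c) for c in candidates]
```

The small, shared interface is what makes the deferral cheap: as long as both options satisfy it, swapping the decision later is a one-line change.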

Other Useful Techniques to Amplify Learning

In our day-to-day work, we apply strategies like mob programming or pair programming with frequent pairing rotations. This facilitates the rapid dissemination of knowledge within the team and prevents the formation of information silos.

We also regularly use Extreme Programming Spikes, which are timeboxed tasks dedicated exclusively to exploring and learning about specific aspects we need to master, such as a new technique, library, or technology.

Another technique that has always worked for me to improve and amplify learning is introducing blameless postmortems, applied both to production incidents and to retrospective reviews of initiatives. 

Conclusions

In summary, our approach prioritizes learning as a fundamental element, working with frequent feedback cycles, and making decisions based on what we continuously learn. This approach helps us adapt quickly and constantly improve our effectiveness and efficiency in product development.



Sunday, December 08, 2024

Using Blameless Incident Management to Change Team Culture

I've always worked at product companies, creating or scaling teams. In these product companies, we work remotely, at least partially. In my experience, introducing Agile culture to a technical team means introducing DevOps and Agile software development. However, I see Agile culture as more than just tools and processes; it is a culture of collaboration, continuous improvement, continuous learning, a focus on technical excellence, and transparency. Let's explore how we can manage incidents in a way that aligns with this Agile culture.

Production Incidents

We refer to "production incidents" as anything affecting our clients or that we suspect might affect them. These incidents can include things like machine failures, unexpected metrics, or a client reporting an error.

High-Performing Teams

Let's take a quick look at what makes high-performing teams so effective. Google's research on high-performing teams shows that individual talent is not the most important factor. Instead, the key to high performance is the quality of the interactions within a team. The most important factor for high-performing teams is psychological safety. Team members need to feel safe taking risks without fear of ridicule or failure. This psychological safety is essential for fostering a culture of learning and improvement.


Traditional Incident Management vs. Agile Incident Management and Psychological Safety

Unfortunately, the traditional approach to incident management often lacks this crucial element of psychological safety. Incident management often falls solely on the operations team, creating a siloed and stressful environment. Under pressure, teams may resort to blame and scapegoating, leading to a culture of fear and hiding problems. This approach is not conducive to learning and improvement and can ultimately lead to recurring issues. Instead of resorting to blame, we can adopt an Agile approach to incident management, focusing on collaboration, learning, and continuous improvement. This approach reduces fear, avoids a hero culture, and encourages transparency.

In my experience at TheMotion, Nextail, and ClarityAI, introducing blameless incident management practices has served as a lever to shift the team's culture towards one of continuous learning. It has helped us overcome the fear of making problems visible, fostered collaboration, and empowered us to address issues at their root causes. This resonates with one of the core principles of Agile incident management: "Hard on systems. Soft on people." We prioritize understanding how the system contributed to the error, rather than pointing fingers at individuals. This creates a safer space for open communication and learning.

The impact of this cultural shift at TheMotion was so significant that team members who moved to other companies have begun implementing these ideas in their new teams.

Here's how our process works:

  • Stay Calm and Don't Panic: When an incident occurs, it's important to stay calm, and our process is designed to help us do exactly that. When we interview developer candidates, we ask them about a time they made a mistake in production, evaluating not only their technical skills but also their ability to remain calm under pressure. This helps ensure that our team can handle incidents effectively without succumbing to fear or stress.
  • Assign an Incident Commander: We automatically assign an incident commander to take charge of the situation. The incident commander's responsibilities include:
    • Creating a "War Room" for collaboration.
    • Creating a blameless incident report to document the incident and focus on learning.
    • Notifying the appropriate stakeholders about the incident.
    • Recruiting and coordinating a team to resolve the incident.
  • Focus on Service Recovery: The team's primary goal is to recover the service as quickly as possible. This might involve implementing a temporary fix, disabling a functionality, or communicating with clients. The key is to stabilize the system and minimize the impact on users.
  • Investigate the Root Cause: Once the service is restored, the team investigates the root cause of the incident. This investigation follows a process of:
    • Hypothesis generation.
    • Validation.
    • Documentation.
    • Repetition.
  • Define Corrective and Preventive Actions: Based on the investigation, the team defines corrective and preventive actions to reduce the mean time to recovery (MTTR) and the blast radius of future incidents. These actions aim to improve the system's resilience and prevent similar incidents from recurring. Addressing them with high priority keeps the team motivated and demonstrates that system improvement is a priority for the entire company.
  • Integrate Actions into the Workflow: The corrective and preventive actions are then prioritized and integrated into our normal workflow, ensuring they are addressed promptly.

The Importance of Blameless Incident Reports

Throughout the entire process, we maintain a blameless approach, emphasizing learning and improvement over assigning blame. We use blameless incident reports, which are:

  • Collaborative: The incident commander creates a shared Google Doc where everyone involved can contribute in real-time.
  • Transparent: We make incident reports public to the entire company as soon as we start detecting an issue. This transparency fosters trust and allows anyone to stay informed about the incident's progress.
  • Detailed: Our incident report template includes a summary of the incident, a timeline, root causes, corrective and preventive actions, and lessons learned.
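As an illustration, the sections of our report template can be captured as a simple data structure. This is only a sketch; the field names follow the template described above and the sample values are invented:

```python
# A minimal sketch of a blameless incident report as a data
# structure; field names mirror the template sections above.
from dataclasses import dataclass, field


@dataclass
class IncidentReport:
    summary: str
    timeline: list[str] = field(default_factory=list)
    root_causes: list[str] = field(default_factory=list)
    corrective_actions: list[str] = field(default_factory=list)
    preventive_actions: list[str] = field(default_factory=list)
    lessons_learned: list[str] = field(default_factory=list)
    # Blameless by construction: the template records what the
    # system did and what we learned, never who "caused" it.


# Hypothetical incident, filled in collaboratively as it unfolds.
report = IncidentReport(summary="Checkout latency spike")
report.timeline.append("10:02 alert fired on p99 latency")
report.root_causes.append("Cache eviction storm after deploy")
report.corrective_actions.append("Warm cache before switching traffic")
report.lessons_learned.append("Add latency check to deploy pipeline")
```

Because the document is shared and filled in live, the timeline grows during the incident, while root causes and actions are completed afterwards during the investigation.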

Facilitating Change

To successfully introduce this approach, it's essential to:

  • Focus on Systems and Habits: Instead of blaming individuals, we concentrate on improving our systems, processes, and habits to prevent future incidents.
  • Lead by Example: By actively participating in the process and demonstrating a blameless approach, we can encourage others to adopt this mindset.
  • Show Vulnerability: Leaders should be willing to admit their mistakes (I have a few 😅) and share their experiences, creating a safe space for others to do the same.
  • Prioritize Improvement: It's crucial to ensure corrective and preventive actions are prioritized and not overshadowed by other business priorities.
  • Reinforce Learnings: We should highlight key learnings from incident reports and share them with the team to promote continuous learning.

Benefits of Agile Incident Management

Embracing Agile incident management can lead to numerous benefits, including:

  • Increased Trust: Transparency and collaboration build trust among team members and between the team and the rest of the company.
  • Enhanced Psychological Safety: A blameless approach creates a psychologically safe environment where people feel comfortable taking risks and learning from mistakes.
  • Improved Resilience: By systematically addressing incidents, we can continually improve our systems and make them more resilient.
  • Focus on Continuous Improvement: Incident management becomes an integral part of our continuous improvement process, leading to a more robust and reliable system.
  • Greater Transparency: Open communication about incidents and their resolution fosters a culture of transparency and accountability.
  • Enhanced Professionalism: Our commitment to learning and improvement demonstrates professionalism to our clients and stakeholders.

Conclusion

By adopting an Agile approach to incident management, we can transform our team's culture and create a more resilient and reliable system. By focusing on collaboration, learning, and continuous improvement, we can turn incidents into valuable opportunities for growth and development. Remember, incidents are inevitable, but how we respond to them is what truly matters. Let's embrace a culture of learning and create a system that can withstand the inevitable challenges of production.



Friday, December 06, 2024

Focus on Verbs, Not Nouns: A Strategy for Better System Design

In my experience, the key to deeply understanding a system or product lies in focusing on behaviors—the actions, flows, and events that drive its operation. Prioritize identifying verbs over nouns. Here’s why this approach works and how it can transform your design process.

Start with Behaviors

Shift your analysis from “What is this thing?” to “What does it do?”. Focus your research and conversations on:

  • Identifying actions and business flows.
  • Understanding dependencies and concurrency.
    • What depends on what?
    • Which actions can happen in parallel?
    • What triggers or informs each behavior?

When you analyze behaviors, you uncover the dynamic interactions users have with your system. This focus naturally aligns with designing systems as collections of small, independent pieces that encapsulate state and communicate through messages—perfect for paradigms like OOP, Actors, and Microservices.
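A minimal sketch can contrast the two starting points. The domain (a bank-style account) is purely illustrative: the noun-first version is an anemic record, while the verb-first version encapsulates state and exposes behaviors as messages:

```python
# Noun-first: an anemic record; state is exposed and every
# business rule has to live somewhere else.
class AccountRecord:
    def __init__(self) -> None:
        self.balance = 0


# Verb-first: start from what the system *does*. The object
# encapsulates state and exposes behaviors (messages).
class Account:
    def __init__(self) -> None:
        self._balance = 0

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount: int) -> None:
        # The business rule lives with the behavior it governs.
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self) -> int:
        return self._balance


account = Account()
account.deposit(100)
account.withdraw(30)
```

The verb-first object scales naturally to actors or services: each behavior is a message the rest of the system can send without knowing anything about the internal state.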

The Pitfall of Focusing on Nouns

Many teams fall into the trap of identifying entities (nouns) first, which often results in anemic, static models disconnected from real-world dynamics. This approach, while intuitive, neglects the rich context of flows, dependencies, and rules. Misunderstanding Object-Oriented Analysis (OOA) in this way often leads to systems that lack expressiveness and scalability.

Behaviors as the Foundation of Value

Remember, customers don’t value software for its own sake. Software is a liability, not an asset. The true asset lies in the actions and outcomes your system enables. Identifying behaviors first ensures your design delivers meaningful value to the customer by focusing on what they actually need: actions, not abstractions.

Scalability Through Behaviors

Focusing on behaviors reveals the concurrent nature of the real world. Systems that prioritize nouns struggle to address concurrency, parallelism, and scalability. By contrast, analyzing actions and flows allows you to design systems that are naturally reactive and distributed. At higher levels, this approach helps define bounded contexts, domain events, and microservices. At lower levels, it aids in designing concurrent and scalable services.

Event Storming: A Behavioral Lens

Techniques like Event Storming are powerful tools for identifying domain events, dependencies, and key behaviors. They bring the behavioral focus to life, helping teams collaboratively uncover what drives their system and how it should respond.
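The output of such a session can be sketched in code. In this illustrative example (event names and fields are hypothetical), domain events are named as verbs in the past tense, and each carries the data needed by whoever reacts to it:

```python
# Illustrative only: domain events surfaced in an Event Storming
# session, named as past-tense verbs rather than entities.
from dataclasses import dataclass


@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    total_cents: int


@dataclass(frozen=True)
class PaymentReceived:
    order_id: str
    amount_cents: int


def describe(event) -> str:
    # Downstream consumers react to behaviors, not entities.
    return f"{type(event).__name__}({event.order_id})"


events = [OrderPlaced("o-1", 4200), PaymentReceived("o-1", 4200)]
labels = [describe(e) for e in events]
```

Naming events this way keeps the conversation anchored on what happens in the domain, which is exactly the behavioral lens the technique provides.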



Conclusion

Identifying entities (nouns) has its place, but behaviors (verbs) are more critical. A system’s essence lies in what it does, not what it’s called. Adopting a behavior-first approach ensures you design systems that are adaptive, scalable, and valuable to customers.

In the end, this mindset reflects a simple truth: development is always iterative. By continuously refining our understanding of behaviors, we build systems that evolve gracefully with the needs of their users.
