Translated from the original article in Spanish: "Desarrollando software: posponiendo decisiones y trabajando en pasos pequeños".
In this article of the series on Lean Software Development, after exploring practices for postponing product decisions, we will look at how to develop software by taking very small steps, delaying decisions, and doing only what is necessary at each moment.
This approach aligns with the principles of Lean Software Development and eXtreme Programming (XP), and it is a key part of agile development.
Why Work in Small Steps
Working in small steps is essential in uncertain environments. Neither we nor the client always know exactly what is needed to achieve the desired impact. By progressing in small increments, we obtain valuable feedback both from the system—on its functionality and behavior—and from the client. This approach allows us to learn and constantly adjust, avoiding premature decisions that could limit our options or be difficult to reverse.
It is a continuous learning process where we avoid speculative design and unnecessary features. By moving forward step by step, we accept that we do not know everything from the outset and choose to experiment and validate constantly.
Benefits of Working in Small Steps
Working in small steps with continuous feedback offers numerous benefits. GeePaw Hill, in his article "MMMSS: The Intrinsic Benefit of Steps," brilliantly describes the effects of this practice on teams. Below is a summary, though I recommend reading the full article or the series "Many More Much Smaller Steps."
GeePaw mentions eight benefits of working in steps of less than a couple of hours:
Benefits in Responsiveness:
- Interruptibility: You can handle interruptions or change focus without breaking the workflow.
- Steerability: After each small step, you can reflect, incorporate feedback, and adjust the direction if necessary.
- Reversibility: If a step does not meet expectations, reverting it results in minimal time loss.
- Target Parallelism: By advancing in consistent small steps, it is possible to work on different areas of the system or for different stakeholders without leaving tasks half-done.
Human Benefits:
- Cognitive Load: Keeps cognitive load low by limiting the combinations and cases you must consider at any one time.
- Pace: Establishes a steady team rhythm with cycles of quick rewards (successful tests, commits, deployments, etc.).
- Safety: Small changes carry less risk than large ones. With frequent tests and daily deployments, the maximum risk is reverting the last change.
- Autonomy: Allows the team to make continuous decisions, requiring constant effort to understand and empathize with the user to address problems or implement improvements.
Working in Small Steps and Postponing Decisions
Since around 2009-2010, I have tried to apply the practice of working in very small steps in all the teams I collaborate with. These steps usually take a few hours, allowing production deployments several times a day and achieving visible changes for the client in one or two days at most. This agile approach minimizes risk and maximizes responsiveness, but it requires discipline and the rigorous application of agile development practices proposed by eXtreme Programming (XP).
Practices and Tactics for Working in Small Steps
Below, I present some practices and strategies that enable us to work this way. Sometimes it’s hard to separate them, as they are closely interrelated and complement each other.
Iterative and Incremental Development
The most important technique we use is also the simplest and, at the same time, the least common. Instead of starting with a complete solution and dividing it into steps for implementation, we progressively grow the solution until it is “good enough,” allowing us to move on and invest in solving another problem. That is, we focus on delivering increments (to the end client) that align with the idea of the solution we are aiming for, all while keeping the solution and the problem in mind. We use feedback to ensure that we are heading in the right direction. Additionally, not being afraid to iterate based on this feedback allows us to work in small, low-risk steps.

For example, starting from an initial problem with a potential solution, we generate increments (Inc 1, Inc 2, etc.) in less than a day. Each increment is delivered to the user for feedback, which helps us decide the next step and whether the solution is already good enough. This way, we avoid waste by not doing unnecessary tasks (the gray area in the original article's diagram), thus reducing the system's Basal Cost.
https://x.com/tottinge/status/1836737913842737382
Vertical Slicing
Vertical slicing involves dividing functionalities and solutions so that development can proceed incrementally, with each small increment providing value in itself. This value can take the form of user-facing improvements, team learning, reduced uncertainty, and so on. Instead of splitting stories by technical layers (infrastructure, backend, frontend), we divide them into increments that deliver value and typically require work across all layers.
In my teams, we apply this vertical slicing rigorously, ensuring that no increment takes more than two days, and preferably less than one day. We use various heuristics and processes for vertical slicing (https://www.humanizingwork.com/the-humanizing-work-guide-to-splitting-user-stories/), such as the “hamburger method” by Gojko Adzic, which I will describe later.
Even though we use this vertical slicing to break down what we want to implement into increments, this doesn’t mean we always implement all the identified increments. On the contrary, the goal is always to grow the solution as little as possible to achieve the desired impact.
Technical Segmentation
As a complement to vertical slicing, in my teams we further divide these user-facing increments into smaller, more technically focused tasks, each of which is also deployed to production. These tasks usually take less than two or three hours.
Deploying these technical increments allows us to obtain feedback primarily from the system: Does our CI pipeline continue to work well? Does the deployed code cause any obvious problems? Does it affect performance in any way?
This practice forces us to maintain a low deployment cost (in terms of time and effort) and allows us to ensure that the workflow continues to operate correctly at all times. This is possible because we have a solid automated testing system, fast CI pipelines, and we work with Continuous Integration/Trunk-Based Development, as we will explain later.
Being able to apply this technical segmentation is also essential for making parallel changes, implementing significant modifications in small and safe steps, and thereby significantly reducing risk.
Generating Options
Generating options is essential for making well-founded decisions. Every decision should consider multiple alternatives; we usually try to have at least three or four. To facilitate the generation of options, we can ask ourselves questions such as:
- What other options would you consider if you had half the time?
- Which options require new dependencies?
- What solutions have you implemented in similar problems in the past?
- What is the minimum degree of sophistication required for the solution?
- Who could benefit from the change? Could we deliver it to each user group independently?
These questions help us generate options that the team can then evaluate, always trying to select those that quickly provide value (learning, capacity, uncertainty reduction, etc.) while committing as little as possible.
This way of working allows us to move forward in small steps, always maintaining visibility over the different options we can take to keep addressing the problem, or to redirect it if the steps taken aren't achieving the desired impact. As you can see, everything converges on working in small steps, learning, making decisions as late as possible, and striving for the simplest solutions.
One tool we use often for generating options and performing vertical slicing is the “hamburger method” by Gojko Adzic.
With this method, we aim to divide a functionality or solution into the steps necessary to provide value to the user. These steps are visualized as “layers” of the hamburger, and for each layer, we force ourselves to generate at least three or four options. Then we select at least one option from each layer to decide which will be the first increment to implement. Once that first increment is implemented and delivered, and with user feedback in hand, we repeat the process to implement one of the other options.
This continuous process doesn’t end when we implement all the identified options, but when the functionality is good enough, or there is another functionality or higher-priority problem to invest in. In other words, we invest in what’s most important until the user is satisfied or until a new priority arises.
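As an illustration, here is a minimal sketch of how a hamburger might be represented for a hypothetical "import contacts" feature. The layers, options, and names are invented for this example; they are not taken from Gojko Adzic's material.

```python
# Hypothetical hamburger for an "import contacts" feature: each layer is a
# step the user needs, and each layer lists options of increasing sophistication.
HAMBURGER = {
    "provide data":  ["paste emails in a text box", "upload a CSV file", "connect to Google Contacts"],
    "validate":      ["reject obviously broken emails", "deduplicate", "suggest corrections"],
    "store":         ["overwrite existing contacts", "merge with existing contacts"],
    "notify result": ["show a count on screen", "send a summary email"],
}

# First increment ("first bite"): the simplest option from every layer.
first_slice = {layer: options[0] for layer, options in HAMBURGER.items()}
print(first_slice)

# Later increments swap in a more sophisticated option for one layer at a
# time, guided by user feedback, and stop once the feature is good enough.
```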
Simplicity
Simplicity is one of the core values of XP (eXtreme Programming) and, by extension, of well-understood agility. A mantra of agile development is, “Do the simplest thing that could possibly work.” This means starting with the simplest, minimal solution that works, iterating, and improving based on feedback.
The simplest solution is not always the easiest to implement. Sometimes, avoiding unnecessary complexity requires significant effort. True simplicity is the result of conscious design that evolves gradually.
Two-Step Development
Kent Beck advises us to “Do the simplest thing that could possibly work,” but this is often confused with “the first thing that comes to mind” or “the only thing I know how to do.” An effective way to ensure we are choosing the simplest option possible is to divide any change or increment into two parts:
- Preparation: Adjust the current codebase so the new functionality can be introduced easily.
- Implementation: Introduce the actual change.
This separation avoids speculative design and ensures that only the minimum necessary changes are made to integrate the new functionality, following Kent Beck’s principle:
“Make the change easy, then make the easy change.”
https://twitter.com/KentBeck/status/250733358307500032
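As a rough illustration of the two steps, here is a minimal sketch on an invented pricing example (the names, rules, and amounts are hypothetical, not from the article):

```python
# Requirement (hypothetical): add a "black_friday" discount to an order total.

# Before: the discount logic is inlined, so adding a rule means touching branches.
def order_total_v0(amount_cents: int, customer_type: str) -> int:
    if customer_type == "vip":
        return amount_cents * 90 // 100
    return amount_cents

# Step 1 - Preparation ("make the change easy"): refactor, preserving behavior,
# so each discount becomes a data entry instead of an if-branch.
DISCOUNT_PCT = {
    "vip": 10,
}

def order_total(amount_cents: int, customer_type: str) -> int:
    return amount_cents * (100 - DISCOUNT_PCT.get(customer_type, 0)) // 100

# Step 2 - Implementation ("make the easy change"): the new requirement is now
# a one-line, low-risk change that can be deployed on its own.
DISCOUNT_PCT["black_friday"] = 25

assert order_total(10_000, "vip") == 9_000
assert order_total(10_000, "black_friday") == 7_500
assert order_total(10_000, "regular") == 10_000
```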
YAGNI (You Aren't Gonna Need It)
Closely related to the above point, the YAGNI principle reminds us that many ideas we come up with will likely never be needed. It encourages us to focus only on what we need *now* and helps us avoid speculative design, keeping us focused on what is truly relevant at the moment.
Even when we identify something that might be needed in the near future, YAGNI prompts us to question whether it is truly essential for current needs, reminding us to postpone it. If the system is simple and easy to evolve, it will be easy to introduce those changes later.
Test-Driven Development (TDD) and Outside-In TDD
Test-Driven Development (TDD) is a practice that involves writing a test first to define the desired behavior of a functionality, before writing the code to implement it. From there, the developer writes the minimum code necessary to pass the test, followed by a refactoring process to improve the code design without changing its behavior. This cycle is repeated continuously, ensuring that every line of code has a clear and defined purpose, avoiding unnecessary or superfluous code.
Outside-In TDD is a variation of TDD that starts from the broadest business use cases and works its way inward to the system's implementation. By starting from business needs and writing only the code necessary to pass each test at each level (from the highest functional level to the individual pieces of code), this approach ensures that only essential code is created. It prevents unnecessary code or features that are not currently required, avoiding speculative design and adhering to the YAGNI principle.
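To make the rhythm concrete, here is a minimal sketch of a couple of TDD micro-cycles on an invented example (pytest-style tests; the example is mine, not from the article):

```python
# The production code below was grown one test at a time: each test was
# written first (red), then just enough code to make it pass (green),
# followed by a refactor with the tests as a safety net.

def say(number: int) -> str:
    # No branch exists here without a test that required it.
    if number % 3 == 0:
        return "Fizz"
    return str(number)

# Cycle 1 (red -> green): regular numbers are returned as text.
def test_regular_numbers_are_returned_as_text():
    assert say(1) == "1"

# Cycle 2 (red -> green): multiples of three say "Fizz".
def test_multiples_of_three_say_fizz():
    assert say(3) == "Fizz"

# In Outside-In TDD the same rhythm starts at the system boundary (an
# acceptance test for the use case) and works inward, so every inner piece
# exists only because an outer test needed it.
```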
In our team, we use Outside-In TDD as the default workflow for all new code, except in cases where this flow isn't beneficial (e.g., spikes, complex algorithms, etc.). In practice, roughly 5-10% of the code is experimental, written for learning purposes, discarded afterward, and typically not tested. Another 10% or so corresponds to tasks where tests are written afterward (e.g., library integrations or complex algorithms). The remaining majority of the code is developed using Outside-In TDD.
This approach minimizes waste and inherently adheres to the YAGNI principle since no code or design is created that doesn’t align with the current increment. As the current increment is defined through radical vertical slicing, we work in small steps, with minimal waste, and make decisions as late as possible.
An additional advantage of this process is that it facilitates quick error resolution, both in code and design, as progress is constantly verified step by step. When an error is detected, it is most likely in the last test or the last change made, allowing for quick and stress-free recovery.
Continuous Integration (Trunk-Based Development)
If there is one technical practice that forces and helps us work in small steps, with constant feedback, enabling us to decide as late as possible while learning and adapting at maximum speed, it’s Continuous Integration (CI).
First, it’s important to clarify that Continuous Integration is an XP practice in which all team members integrate their code into a main branch frequently (at least once a day). In other words, this practice is equivalent to Trunk-Based Development, where there is only one main branch on which all developers make changes (usually in pairs or teams).
This practice has nothing to do with running automated tests on feature branches. In fact, I would say it is directly incompatible with working on separate branches for each functionality.
Unfortunately, this approach is not the most common in the industry, but I can assure you that, along with TDD, it is one of the practices that has the most impact on teams. In every team I’ve worked with, the introduction of Continuous Integration/TBD has caused a spectacular change. It has forced us to work in very small (but safe) steps, giving us the agility and adaptability we sought.
Of course, like any practice, it requires effort and the learning of tactics to frequently deploy to production without showing incomplete functionalities to the user. It’s necessary to master strategies that separate deployment (technical decision) from the release to users (business decision). The most common strategies are:
- Feature toggles: Allow features to be turned on or off, A/B tests to be run, or new features to be shown only to certain clients (internal users, beta testers, etc.); see the sketch after this list.
- Gradual deployment: Methods like canary releases or ring deployments allow for a progressive rollout of changes.
- Dark launches: Launch a feature without making it visible to the client, only to perform performance or compatibility tests.
- Shadow launches: Run a new algorithm or process in parallel with the old one, but without showing results to the end user.
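As an example of separating deployment from release, here is a minimal feature-toggle sketch. The names and the environment-variable mechanism are assumptions for illustration; real setups typically use a toggle service or configuration system.

```python
import os

def is_enabled(feature: str, user_group: str) -> bool:
    # Deployment ships both code paths; this check is the business release decision.
    # e.g. FEATURE_NEW_PRICING="beta" releases only to beta testers,
    # "all" releases to everyone, unset keeps the feature dark.
    released_to = os.environ.get(f"FEATURE_{feature.upper()}", "")
    return released_to == "all" or released_to == user_group

def price(amount_cents: int, user_group: str) -> int:
    if is_enabled("new_pricing", user_group):
        return round(amount_cents * 0.95)  # new behavior, deployed but gated
    return amount_cents                    # current behavior for everyone else

if __name__ == "__main__":
    os.environ["FEATURE_NEW_PRICING"] = "beta"
    print(price(1000, "beta"))     # 950  -> released to beta testers only
    print(price(1000, "regular"))  # 1000 -> unchanged for everyone else
```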
Evolutionary Design
This central XP practice allows us to develop software incrementally, continuously refactoring the design so it evolves according to business needs. In practice, it involves creating the simplest possible design that meets current requirements and then evolving it in small steps as we learn and add new functionalities.
Within evolutionary design, tactics include:
- Two-step development.
- Continuous refactoring in the TDD cycle.
- Opportunistic refactoring.
- Avoiding premature abstractions (see https://www.eferro.net/2017/02/applying-dry-principle.html).
- Parallel changes to keep tests green while making multi-step changes.
- Branch by abstraction and the Expand/Contract pattern to facilitate parallel changes (see the sketch after this list).
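For instance, here is a minimal sketch of the Expand/Contract pattern applied to a function signature. The example and names are hypothetical: we want callers to pass a Money value object instead of a raw float, without a single big risky change.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Money:
    cents: int

# EXPAND: accept both the old and the new shape. Each step keeps the tests
# green and can be integrated and deployed on its own.
def charge(amount: Union[float, Money]) -> Money:
    if isinstance(amount, float):              # old callers still work
        amount = Money(cents=round(amount * 100))
    return amount                              # (actual charging omitted)

# MIGRATE: update callers one by one, in separate small commits/deployments.
charge(Money(cents=1999))   # new-style caller
charge(19.99)               # old-style caller, still supported during migration

# CONTRACT: once no caller passes a float, delete the compatibility branch
# (and the Union), completing the change in small, reversible steps.
```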
It’s important to note that beyond the tactics you use to guide the design in small steps, it’s essential to develop a sense of design within the team. None of these practices alone teach object-oriented design. Therefore, the team must not only learn to make incremental design changes but also acquire a deep understanding of object-oriented design principles.
Differentiated Evolutionary Design
In general, in my teams, we always try to work in small steps, focusing on what we need at the moment and letting new needs guide changes in architecture and design. At the same time, we recognize that the ease of evolution and the friction generated by change depend heavily on the type of code being affected. We know that modifying code that implements business rules, an internal API between teams, or a customer-facing API is not the same in terms of friction.
Each of these cases involves varying degrees of friction to change (i.e., different levels of ease of evolution). Therefore, we apply a differentiated evolutionary design approach based on the type of code.
For code with higher friction to change, such as a customer-facing API, we dedicate more time to a robust design that allows for evolution without requiring frequent changes. Conversely, for internal business logic code that is only used in specific cases, we adopt a more flexible evolutionary approach, allowing the design to emerge naturally from the development process.
Other Tactics and Practices
Of course, these are not the only tactics and practices to consider, but I do believe they are the ones that help us the most. Here are some additional tips and heuristics that, while not full-fledged practices in themselves, contribute to decision-making and generally make it easier to work in small steps and postpone decisions as much as possible:
- Prioritize libraries over frameworks to avoid locking in options and maintain greater flexibility.
- Focus on making code usable (and understandable) rather than reusable, unless your business is selling libraries or components to other developers.
- Use solid, “boring” technology that is widely accepted by the community.
- Create thin wrappers over external components/libraries to clearly define which parts of a component are being used and to facilitate testing (see the sketch after this list). You can learn more about this approach at https://www.eferro.net/2023/04/thin-infrastructure-wrappers.html.
- Separate infrastructure from business code through Ports and Adapters or another architecture that clearly differentiates them.
- Apply evolutionary architecture, starting with a minimal architecture and adapting it to business needs, postponing hard-to-reverse decisions as much as possible.
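As an illustration of a thin wrapper, here is a minimal sketch using the standard library's sqlite3 as a stand-in for any external component. The class and its two methods are invented for this example.

```python
import sqlite3
from typing import Optional

class KeyValueStore:
    """The only persistence operations this (hypothetical) codebase needs."""

    def __init__(self, path: str = ":memory:") -> None:
        self._conn = sqlite3.connect(path)
        self._conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def put(self, key: str, value: str) -> None:
        self._conn.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))
        self._conn.commit()

    def get(self, key: str) -> Optional[str]:
        row = self._conn.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None

# The wrapper documents exactly which part of the component we depend on, and
# a dict-backed fake with the same put/get methods can replace it in tests,
# which also keeps infrastructure details out of the business code (in the
# spirit of Ports and Adapters).
```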
Conclusions
In software development, the key lies in adopting a conscious approach to our decisions, working in small, low-risk steps, and focusing solely on what we need now. Simplicity and clarity must be priorities to maximize efficiency and minimize waste.
The practices of eXtreme Programming (XP), together with the principles of Lean Software Development, provide us with a clear guide to avoid waste and over-engineering. Understanding that we cannot predict the future with certainty, we focus on building systems that are easy to understand and evolve, avoiding unnecessary complexity. Working this way means steering clear of oversized or highly configurable solutions, which often become obstacles to system evolution.
Ultimately, it’s about being humble: acknowledging that we don’t have all the answers and that the only way to find the right solution is through experimentation and continuous learning. In short, simplicity, agility, and responsiveness are fundamental to developing software effectively in an ever-changing environment.
If I had to choose the techniques and practices that have the greatest impact on my teams for working in small, safe steps and postponing decisions, I would say they are:
- Vertical slicing
- Continuous Integration / Trunk-Based Development
- TDD (Test-Driven Development)
All with a constant focus on simplicity.
Each of the practices and tactics mentioned in this article is broad and could be explored in greater depth. I would love to know if there is interest in delving into any of them further, as it would be enriching to explore in greater detail those that are most useful or intriguing to readers.
References