Exploring how artificial intelligence might be changing the rules of the game in software development - preliminary insights from the trenches
An exploration into uncharted territory
I want to be transparent from the start: what I'm about to share are not definitive conclusions or proven principles. They are open reflections that emerge from a few months of intense personal experimentation with AI applied to software development, exploring its possibilities and trying to understand how it affects the Lean practices we typically use.
These are ideas I would like to keep experimenting with and discussing with others interested in this fascinating topic. I'm not speaking as someone who already has the answers, but as someone exploring fascinating questions and suspecting that we're facing a paradigm shift we're only beginning to understand.
The fundamental paradox: speed versus validation
A central idea I'm observing is that, although artificial intelligence allows us to work faster, this doesn't mean we should automatically expand the initial scope of our functionalities. My intuition tells me that we should continue delivering value in small increments, validate quickly, and decide based on real feedback rather than simply on the speed at which we can now execute tasks.
But there's an interesting nuance I've started to consider: in low-uncertainty contexts, where both the value and the implementation are clear and the team is very confident, it might make sense to advance a bit more before validating. However, my feeling is that maintaining the discipline to avoid falling into speculative design remains fundamental: although AI makes it easy, speculative design can jeopardize the simplicity and future flexibility of the system.
The cognitive crisis we don't see coming
Chart: While development speed with AI grows exponentially, our human cognitive capacity remains constant, creating a "danger zone" where we can create complexity faster than we can manage it.
Here I do have a conviction that becomes clearer every day: we should now be much more radical when it comes to deleting and eliminating code and functionalities that aren't generating the expected impact.
What this visualization shows me is something I feel viscerally: we have to be relentless to prevent complexity from devouring us. No matter how much AI we have, human cognitive capacity hasn't changed, neither for developers managing technical complexity nor for users coping with a growing number of applications and functionalities.
We're at that critical point where the blue line (AI speed) crosses the red line (our capacity), and my intuition tells me that either we develop radical disciplines now, or we enter that red zone where we create more complexity than we can handle.
The paradox of amplified Lean
But here's the crux of the matter, and I think this table visualizes it perfectly:
Table: AI eliminates the natural constraints that kept us disciplined (at least some of us), creating the paradox that we need to artificially recreate those constraints through radical discipline.
This visualization seems to capture something fundamental that I'm observing: AI eliminates the natural constraints that kept us applying Lean principles. Before, the high cost of implementation naturally forced us to work in small batches. Now we have to recreate that discipline artificially.
For example, look at the "Small Batches" row: traditionally, development speed was the natural constraint that forced us to validate early. Now, with AI, that brake disappears and we risk unconscious scope growth. The countermeasure isn't technical, it's cultural: explicitly redefining what "small" means in terms of cognitive load, not time.
The same happens with YAGNI: before, the high cost of implementation was a natural barrier against speculative design. Now AI "suggests improvements" and makes overengineering tempting and easy. The answer is to make YAGNI even more explicit.
This is the paradox that fascinates me most: we have to become more disciplined precisely when technology makes it easier for us.
From this general intuition, I've identified several specific patterns that concern me and some opportunities that excite me. These are observations that arise from my daily experimentation, some clearer than others, but all seem relevant enough to share and continue exploring.
About scope and complexity
Change in the "default size" of work
AI facilitates the immediate development of functionalities or refactors, which can unconsciously lead us to increase their size. The risk I perceive is losing the small-batch discipline that is crucial for early validation.
Ongoing exploration: My intuition suggests explicitly redefining what "small" means in an AI context, focused on cognitive size and not just implementation time. One way to achieve this is to rely on practices like BDD/ATDD/TDD to limit each cycle to a single test or externally verifiable behavior.
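To make this concrete, here is a minimal sketch of what "one cycle = one externally verifiable behavior" could look like. The discount rule, function names, and values are purely illustrative, not from any real project; I'm using plain Python with unittest only because it keeps the example self-contained:

```python
# A sketch of "one cycle = one externally verifiable behavior".
# Everything here (the discount rule, the names) is illustrative.
import unittest


def apply_discount(price: float, is_returning_customer: bool) -> float:
    """Apply the only discount rule we have evidence for today."""
    if is_returning_customer:
        return round(price * 0.90, 2)  # 10% off for returning customers
    return price


class TestReturningCustomerDiscount(unittest.TestCase):
    # These two tests define the whole batch: when they pass, we stop,
    # ship, and wait for feedback before asking the AI for "one more rule".
    def test_returning_customer_gets_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, is_returning_customer=True), 90.0)

    def test_new_customer_pays_full_price(self):
        self.assertEqual(apply_discount(100.0, is_returning_customer=False), 100.0)


if __name__ == "__main__":
    unittest.main()
```

The framework doesn't matter; what matters is that the externally observable behavior under test defines where the batch ends.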
Amplified speculative design
On several occasions I've had to undo work done by AI because it tried to do more than necessary. I've observed that AI lacks sensitivity to object-oriented design and has no awareness of the complexity it generates: it creates it very quickly until it gets stuck in a loop, fixing one thing and breaking another.
Reflection: This suggests reinforcing deliberate practices like TDD, walking skeletons, or strict feature toggles.
New type of "overengineering"
My initial experience suggests that the ease AI offers can lead to adding unnecessary functionalities. It's not the classic overengineering of the architect who designs a cathedral when you need a cabin. It's more subtle: it's adding "just one more feature" because it's easy, it's creating "just one additional abstraction" because AI can generate it quickly.
Key feeling: Reinforcing the YAGNI principle even more explicitly seems necessary.
About workflow and validations
Differentiating visible work vs. released work
My experience indicates that rapid development shouldn't lead us to confuse "ready to deploy" with "ready to release." My feeling is that keeping the separation between deployment and release clear remains fundamental.
I've also developed, several times, small functionalities that then weren't used. Although, to be honest, since I have deeply internalized eliminating waste and the baseline cost of software, I simply deleted the code afterwards.
Opportunity I see: AI can accelerate development while we validate with controlled tests like A/B testing.
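As a toy illustration of that separation, here is a hand-rolled sketch of a feature flag with a deterministic A/B split. The flag name, rollout percentage, and functions are assumptions made up for the example; a real team would likely use a proper feature-flag service:

```python
# A hand-rolled sketch of separating deployment from release.
# The flag name and rollout percentage are illustrative assumptions.
import hashlib

FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_percent": 10},
}


def is_released(flag_name: str, user_id: str) -> bool:
    """The code can be fully deployed while this returns False for most users."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Deterministic bucket per user, so each user always sees the same variant.
    bucket = int(hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]


def checkout(user_id: str) -> str:
    if is_released("new_checkout_flow", user_id):
        return "new flow"      # the AI-built feature, visible to ~10% of users
    return "current flow"      # everyone else keeps the validated behavior
```

The new code is deployed for everyone but released only to a small bucket, which is enough to compare results before deciding whether to expand it or delete it.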
More work in progress, but with limits
Although AI can allow more parallel work, my intuition tells me this can fragment the team's attention and complicate integration. It's tempting to have three or four features "in development" simultaneously because AI makes them progress quickly.
My current preference: Use AI to reduce cycle time per story, prioritizing fast feedback, instead of parallelizing more work.
Change in the type of mistakes we make
My observations suggest that with AI, errors can propagate quickly, generating unnecessary complexity or superficial decisions. A superficial decision or a misunderstanding of the problem can materialize into functional code before I've had time to reflect on whether it's the right direction.
Exploration: My intuition points toward reinforcing cultural and technical guardrails (tests, decision review, minimum viable solution principle).
About culture and learning
Impact on culture and learning
I feel there's a risk of over-relying on AI, which could reduce collective reflection. Human cognitive capacity hasn't changed, and we're still better at focusing on a few things at a time.
Working intuition: AI-assisted pair programming, ownership rotations, and explicit reviews of product decisions could counteract this effect.
Ideas I'm exploring to manage these risks
After identifying these patterns, the natural question is: what can we do about it? The following are ideas I'm exploring, some I've already tried with mixed results, others are working hypotheses I'd like to test. To be honest, we're in a very embryonic phase of understanding all this.
Discipline in Radical Elimination
My intuition suggests introducing periodic "Deletion Reviews" to actively eliminate code without real impact: specific sessions whose main objective is to identify and delete what isn't generating value.
"Sunset by Default" for experiments The feeling is that we might need an explicit automatic expiration policy for unvalidated experiments. If they don't demonstrate value in X time, they're automatically eliminated, no exceptions.
More rigorous Impact Tracking
My experience leads me to think about defining explicit impact criteria before writing code and ruthlessly eliminating what doesn't meet expectations in the established time.
Fostering a "Disposable Software" Mentality
My feeling is that explicitly labeling functionalities as "disposable" from the start could psychologically facilitate elimination if they don't meet expectations.
Continuous reduction of "AI-generated Legacy"
I feel that regular sessions to review automatically generated code and eliminate unnecessary complexities that AI introduced without us noticing could be valuable.
Radically Reinforcing the "YAGNI" Principle
My intuition tells me we should explicitly integrate critical questions in reviews to avoid speculative design: "Do we really need this now? What evidence do we have that it will be useful?"
Greater rigor in AI-Assisted Pair Programming
My initial experience suggests promoting "hybrid Pair Programming" to ensure sufficient reflection and structural quality. Never let AI make architectural decisions alone.
A fascinating opportunity: Cross Cutting Concerns and reinforced YAGNI
Beyond managing risks, I've started to notice something promising: AI also seems to open new possibilities for architectural and functional decisions that traditionally had to be anticipated from the beginning.
I'm referring specifically to elements like:
- Internationalization (i18n): Do we really need to design for multiple languages from day one?
- Observability and monitoring: Can we start simple and add instrumentation later?
- Compliance: Is it possible to build first and adapt regulations later?
- Horizontal scalability and adaptation to distributed architectures: Can we defer these decisions until we have real evidence of need?
My feeling is that these decisions can be deliberately postponed and introduced later thanks to the automatic refactoring capabilities that AI seems to provide. This could further strengthen our ability to apply YAGNI and defer commitment.
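As a toy example of what deferring one of these concerns might look like, here is a minimal i18n seam. Everything in it is illustrative; the point is only that, with good tests, turning this into real internationalization later becomes a mechanical refactor (one that AI can help sweep across call sites) rather than an upfront design decision:

```python
# A sketch of deferring i18n behind a minimal seam.
# t() is intentionally the identity function for now (YAGNI): all copy stays
# in English until we have real evidence that more languages are needed.
def t(message: str) -> str:
    """Translation seam: does nothing yet, by design."""
    return message


def greeting(user_name: str) -> str:
    return t("Welcome back, {name}!").format(name=user_name)


print(greeting("Ada"))  # -> "Welcome back, Ada!"
```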
The guardrails I believe are necessary
For this to work, I feel we need to maintain certain technical guardrails:
- Clear separation of responsibilities: So that later changes don't break everything
- Solid automated tests: To refactor with confidence
- Explicit documentation of deferred decisions: So we don't forget what we deferred
- Use of specialized AI for architectural spikes: To explore options when the time comes
But I insist: these are just intuitions I'd like to validate collectively.
Working hypotheses I'd love to test
After these months of experimentation, these are the hypotheses that have emerged and that I'd love to discuss and test collectively:
1. Speed ≠ Scope
Hypothesis: We should use AI's speed to validate faster, not to build bigger.
2. Radical YAGNI
Hypothesis: If YAGNI was important before, now it could be critical. Ease of implementation shouldn't justify additional complexity.
3. Elimination as a central discipline
Hypothesis: Treat code elimination as a first-class development practice, not as a maintenance activity.
4. Hybrid Pair Programming
Hypothesis: Combining AI's speed with human reflection could be key. Never let AI make architectural decisions alone.
5. Reinforced deployment/release separation
Hypothesis: Keep this separation clearer than ever. Ease of implementation could create mirages of "finished product."
6. Deferred cross-cutting concerns
Hypothesis: We can postpone more architectural decisions than before, leveraging AI's refactoring capabilities.
An honest invitation to collective learning
Ultimately, these are initial ideas and reflections, open to discussion, experimentation, and learning. My intuition tells me that artificial intelligence is radically changing the way we develop products and software, enhancing our capabilities, but also demanding even greater discipline in validation, elimination, and radical code simplification.
My strongest hypothesis is this: AI amplifies both our good and bad practices. If we have the discipline to maintain small batches, validate quickly, and eliminate waste, AI could make us extraordinarily effective. If we don't have it, it could help us create disasters faster than ever.
But this is just a hunch that needs validation.
What experiences have you had? Have you noticed these same patterns, or completely different ones? What practices are you trying? What feelings does the integration of AI into your Lean processes generate for you?
We're in the early stages of understanding all this. We need perspectives from the entire community to navigate this change that I sense could be paradigmatic, but that I still don't fully understand.
Let's continue the conversation. The only way forward is exploring together.
Do these reflections resonate with you? I'd love to hear about your experience and continue learning together in this fascinating and still largely unexplored territory.