Sunday, February 22, 2026

Podcast: AI as an Amplifier. Why Engineering Practices Matter More Than Ever

Vasco Duarte invited me to be part of the Scrum Master Toolbox Podcast's AI Assisted Coding series, and I couldn't pass up the chance to talk about something I've been living and thinking about intensely for the past several months.

The conversation builds directly on the experiment I documented in Fast Feedback, Fast Features: My AI Development Experiment: 424 commits over 11 weeks, where for every unit of effort I put into new features, I invested four times more in refactoring, cleanup, tests, and simplification. And yet, overall, I think I roughly doubled my pace of work.

In the episode, we dig into several things I've been exploring:

Vibe coding vs production AI development. Both are valid—but they require different mindsets. Vibe coding is flow-driven, exploration-focused, great for prototypes and discovery. Production AI coding demands architectural thinking, security analysis, and sustainability practices. Even vibe coding benefits from engineering discipline as soon as experiments grow beyond a weekend hack.

The positive spiral of code removal. One of the most powerful patterns I've discovered is using AI to accelerate deletion. Connect product analytics to identify unused features, use AI to remove them efficiently, and you trigger a cycle: simpler code makes architecture changes cheaper, cheaper architecture changes enable faster feature delivery, which creates more opportunities for simplification. Humans historically avoided this because removal was as expensive as creation. That excuse is gone.
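As a concrete sketch of the first step of that loop, the snippet below flags rarely used features from analytics event counts. The feature names, counts, and threshold are all hypothetical; a real version would pull from your actual analytics export.

```python
def unused_features(usage_counts, min_events=10):
    """Return feature names whose usage over the observation window
    falls below a threshold, as candidates for deletion."""
    return sorted(name for name, count in usage_counts.items()
                  if count < min_events)

# Hypothetical 90-day event counts exported from product analytics.
counts = {"export_csv": 0, "bulk_edit": 3, "search": 1842, "dashboard": 920}

# Deletion candidates: review with the team, then use AI to remove them
# cleanly (code, tests, docs, and config in the same change).
print(unused_features(counts))  # ['bulk_edit', 'export_csv']
```

The output feeds the rest of the spiral: each confirmed deletion makes the next architecture change a little cheaper.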

Preparing the system before introducing change. Rather than asking "implement this feature," I've been asking "how should I change my system to make this feature trivial to introduce?" AI makes that preparation cheap enough to do routinely. The result: systems that evolve cleanly rather than accumulating debt with each addition.

AI as an amplifier—the double-edged sword. This is the central idea. AI doesn't replace engineering judgment; it magnifies its presence or absence. Strong teams will see accelerated improvement. Teams without good practices will generate technical debt faster than ever. The path to excellence in modern software development lies in the seamless integration of a high-performance engineering culture, lean-agile product strategies, and an evolutionary approach to architecture. AI makes that path wider—but you still have to choose to walk it.

🎙️ Listen to the episode: AI as an Amplifier—Why Engineering Practices Matter More Than Ever

Sunday, January 18, 2026

Fast Feedback, Fast Features: My AI Development Experiment

What happens when you use AI not to ship faster, but to build better? I tracked 424 commits over 11 weeks to find out.

The Experiment

Context first: I'm an engineering manager, not a full-time developer. These 424 commits happened in the time I could carve out between meetings, planning, and leadership work. The applications are production internal systems (monitoring dashboards, inventory management, CLI tools, chatbot backends) used by real teams, but not high-criticality systems where a bug directly impacts external customers or revenue.

Important nuance: I also act as Product Manager for the Platform team that owns these applications. This means I'm defining the problems and implementing the solutions. There's no friction or information loss between problem definition and implementation that typically exists in stream-aligned teams where PM and developers are separate roles. This setup favors faster iteration and tighter feedback loops (though it's worth noting this isn't representative of how most teams operate).

From November 2025 to January 2026, I wrote 424 commits across 6 repositories, spanning 44 active days (with Christmas holidays in the middle). Every single line of code was written with AI assistance: Cursor, Claude Code, the works. These weren't toy projects or weekend experiments. These were real systems evolving under active use.

The repositories varied wildly in maturity: from a 13-day-old Go service to a 5.6-year-old Python system with over 12,000 commits in its history. Half were greenfield projects under 6 months old; half were mature codebases years into their lifecycle. Combined, they represent ~107,000 lines of production code. These are small-to-medium projects. That's how our platform team works: we prefer composable systems over monoliths.

The period was intense: 9.6 commits per day average, almost double my historical pace. But AI didn't just make me faster at writing code. It fundamentally changed what kind of code I wrote.

I tracked everything. Every commit was categorized using a combination of commit message analysis, file change patterns, and manual review. Claude Sonnet 4.5 helped automate the initial categorization, which I then validated. And when I analyzed the data, I found something I wasn't expecting.
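A minimal sketch of what that first keyword-based categorization pass might look like. The keyword lists below are illustrative assumptions, not the actual rules; the real pipeline also used file-change patterns, an LLM pass, and manual review.

```python
# Illustrative keyword map; a real pass would be richer and validated by hand.
CATEGORY_KEYWORDS = {
    "functionality": ["feat", "feature", "implement"],
    "tests": ["test", "spec", "coverage"],
    "documentation": ["doc", "readme"],
    "cleanup": ["remove", "delete", "unused", "simplify"],
    "refactoring": ["refactor", "restructure", "extract"],
    "security": ["security", "vulnerability", "cve"],
}

def categorize(message):
    """Return every category a commit message matches. A single commit can
    match several at once, which is why category percentages overlap."""
    msg = message.lower()
    return {cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(word in msg for word in words)}

print(sorted(categorize("feat: implement export endpoint with tests and docs")))
# ['documentation', 'functionality', 'tests']
```

Note that the function returns a set, not a single label: one commit counts toward every category it touches.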

The Balance

For every hour I spent on new features, I spent over four hours on tests, documentation, refactoring, security improvements, and cleanup.

22.7% functionality. 98.3% sustainability.

Yes, that adds up to more than 100%. That's not an error: it's the reality of how development actually works. When I develop a feature, the same commit often includes tests, documentation updates, and code cleanup. The numbers reflect that commits are multidimensional, not mutually exclusive categories.

The ratio: 0.23:1 (Functionality:Sustainability)

This wasn't accidental. This was a deliberate experiment in sustainable velocity. And AI made it possible.
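To make the arithmetic concrete, here is a toy computation of the ratio from multi-label commits. The sample commits are invented; the point is only that per-category averages can legitimately sum past 100% while the ratio stays well defined.

```python
def balance(commits):
    """Compute the functionality share, the sustainability share, and the
    Functionality:Sustainability ratio from multi-label commits.

    Each commit is a set of category labels; 'functionality' counts toward
    the first bucket, everything else toward sustainability. Because one
    commit can carry both kinds of labels, the two shares can sum past 100%.
    """
    n = len(commits)
    func = sum("functionality" in c for c in commits) / n
    sust = sum(bool(c - {"functionality"}) for c in commits) / n
    return func, sust, func / sust

# Toy sample: every commit carries sustainability work, one also adds a feature.
sample = [
    {"functionality", "tests"},   # a feature shipped together with its tests
    {"tests", "documentation"},
    {"cleanup"},
    {"refactoring", "tests"},
]
f, s, ratio = balance(sample)
print(f, s, round(ratio, 2))  # 0.25 1.0 0.25
```

This is a simplification of the post's per-commit averaging, but it shows why 22.7% and 98.3% coexist without being an error.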

Breaking Down the 98.3%

8-Dimensional Commit Categorization

When I say "sustainability," I mean seven specific, measurable categories (the eighth dimension in the categorization is Functionality itself):

  • Tests (30.7%): The largest single category
  • Documentation (19.0%): READMEs, API docs, inline comments
  • Cleanup (13.8%): Removing dead code, unused features, simplification
  • Infrastructure (12.0%): CI/CD, scripts, tooling improvements
  • Refactoring (11.5%): Structural improvements, better abstractions
  • Configuration (8.1%): Environment variables, settings, build configs
  • Security (3.2%): Vulnerability fixes, security audits, input validation

These aren't "nice-to-haves." They're the foundation that makes the 22.7% of new functionality actually sustainable.

What Changed (And What Didn't)

Here's what I learned: tests and feedback loops were always important. Good engineers always knew this. The barrier wasn't understanding, it was economics and time.

What was true before AI:

  • Fast feedback loops were critical for velocity
  • Comprehensive tests enabled confident iteration
  • Documentation reduced knowledge silos
  • Some teams invested in this; many didn't grasp that sustainable software requires sustained investment in technical practices

What changed with AI:

  • The barrier to entry dropped dramatically
  • Building that feedback infrastructure became fast
  • Maintaining quality became economically viable for small teams
  • The excuse of "not enough time" largely disappeared

What didn't change:

  • Discipline is still our responsibility
  • The choice to balance features vs sustainability is still ours
  • AI doesn't automatically make us write tests: we have to choose to
  • The default behavior is still "ship more features faster" until technical debt forces a halt

The insight: AI removed the last excuse. Now it's about discipline, not capability.

For me, as a manager who codes in limited time, this changed everything. I can afford to build the feedback infrastructure that lets me iterate fast. The 0.23 ratio isn't a constraint, it's what enables the velocity I'm experiencing.

Negative Code: Simplification as a Feature

Here's another data point: 55,407 lines deleted out of 135,485 total lines changed.

That's 40.9% deletions. For every 3 lines I wrote, I deleted 2.
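These numbers can be reproduced from `git log --numstat`, which emits one `added<TAB>deleted<TAB>path` row per file per commit. A minimal sketch, run here on a fabricated sample rather than a real repository:

```python
def deletion_ratio(numstat_output):
    """Fraction of all changed lines that were deletions, computed from the
    tab-separated rows produced by `git log --numstat`."""
    added = deleted = 0
    for line in numstat_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit():  # binary files show '-'
            added += int(parts[0])
            deleted += int(parts[1])
    return deleted / (added + deleted)

# Fabricated sample rows: (added, deleted, path).
sample = "10\t5\tapp/api.py\n2\t30\tapp/old_feature.py\n-\t-\tlogo.png"
print(round(deletion_ratio(sample), 3))  # 0.745
```

In a real run you would pipe something like `git log --numstat --pretty=format:` into this function; the blank separator lines it emits are skipped by the length check.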

Some deletions were refactoring: replacing 100 lines of messy code with 20 clean ones. But many were something else: removing features that didn't provide enough value.

One repository, chatcommands, has net negative growth: the codebase got smaller despite active development. It's not alone. ctool also shrank during this period.

This connects to two concepts I've written about before:

Basal Cost of Software: Every line of code has an inherent maintenance cost. It needs to be understood, tested, debugged, and updated. The best way to reduce basal cost is to have less code.

Radical Detachment: Software is a liability to minimize, not an asset to maximize. The goal isn't more code, it's the right amount of code to solve the problem.

Before AI, deleting features was expensive:

  • Understanding old code took hours (documentation outdated)
  • Tracing dependencies was manual and error-prone
  • Verifying nothing broke was unreliable, because test suites were incomplete
  • Updating docs and configs was tedious

Features became immortal. Once added, they never left, even at zero usage.

With AI, deletion becomes viable:

  • Trace dependencies in minutes, not hours
  • Comprehensive tests catch breaking changes immediately
  • Documentation updates happen alongside code changes
  • The entire deletion commit includes proper cleanup

The 13.8% cleanup category isn't just removing dead imports. It's removing dead features. Entire endpoints. Unused UI components. Configuration options nobody sets.

I call this Negative Velocity: making the codebase smaller, simpler, and faster, not just adding more.

This aligns with lean thinking about waste elimination. Every unused feature is waste: it increases build times, slows down tests, complicates mental models, and raises the basal cost of the system. Each line of code creates drag on everything else. By deleting features, we're not just cleaning up: we're reducing the ongoing cost of ownership. Fewer features means faster comprehension, simpler debugging, easier onboarding, and less surface area for bugs.

I'd deleted code before, but AI reduced the friction enough to make it routine instead of occasional. Deletion went from expensive to viable. We can finally afford to minimize the liability at the pace it deserves.

The best code is no code. Now we can actually afford to delete it.

The Metrics at a Glance

The key numbers:

  • 424 total commits across 44 active days (November 2025 - January 2026)
  • 9.6 commits per day average: nearly double typical velocity
  • Ratio Func:Sust = 0.23:1 (1 hour features, >4 hours sustainability)
  • Average Functionality: 22.7% per commit
  • Average Sustainability: 98.3% per commit (multidimensional, not mutually exclusive)
  • 135,485 total lines changed (80,078 insertions, 55,407 deletions)
  • 40.9% deletion ratio: for every 3 lines written, 2 deleted

These aren't aspirational numbers. These are the actual patterns from an intensive 11-week period of AI-assisted development in production repositories.

Different Projects, Different Profiles

Not every project should have the same ratio. Context matters.

  • inventory (0.42:1): More feature-focused, greenfield project in active development
  • plt-mon (0.25:1): Test-heavy, mature monitoring system needing reliability
  • ctool-cli (0.16:1): CLI tool with emphasis on tests and robustness
  • chatcommands (0.15:1): Maintenance-focused, net negative code growth (-1,809 lines)
  • cagent (0.13:1): New project with emphasis on quality from day one
  • ctool (0.09:1): Minimal feature work, heavy focus on infrastructure and cleanup

The chatcommands profile is particularly interesting: 31.5% of effort went to cleanup, and the repository actually shrank by 1,809 lines over this period. This isn't a dying project, it's a maturing one. Features were removed intentionally because they weren't providing value. The codebase got simpler, faster, and more maintainable.

The plt-mon repository maintains a 1.15:1 test-to-feature ratio: tests slightly outpace features. This is a production monitoring system where reliability matters, and the balance reflects steady feature growth with corresponding test coverage.

The ratio should reflect the project's phase and needs. AI makes all of these profiles viable without sacrificing quality or velocity.

What I Learned

After 11 weeks and 424 commits, here's what I've discovered:

Real velocity comes from fast feedback loops. Not from writing code faster, but from being able to iterate confidently and quickly. The 98.3% investment in sustainability isn't overhead, it's what enables speed.

AI changed what became economically viable. Before, building comprehensive test coverage as a manager with limited coding time would have been impossible. Now I can afford to build both the features and the safety net at sustainable pace. The barrier dropped; the discipline remains my responsibility.

Speed ≠ Velocity. Speed is how fast you move. Velocity is speed in the right direction. A team shipping 10 features per week with zero tests is moving fast toward a rewrite. A team shipping 3 features per week with comprehensive test coverage is moving fast toward sustainability.

What you optimize for gets amplified. My hypothesis: AI amplifies our choices. If you optimize for feature velocity, you'll accumulate technical debt faster. If you optimize for sustainable velocity (balancing features with quality infrastructure) you'll build healthier systems faster. I've seen this play out in my own work, though I don't claim this is universal.

Deletion is a feature. With lower barriers to understanding and changing code, we can finally afford to make codebases smaller. Net negative growth isn't stagnation, it's maturity.

The right ratio depends on context. My 0.23:1 ratio works for internal systems with moderate criticality, developed by a manager in limited time. Your context is different. The point isn't to copy my numbers, it's to be intentional about the balance.

This is still an experiment. I don't know if this approach scales to all teams or all types of systems. What I do know: for my context, over these 11 weeks, this balance produced the fastest sustainable velocity I've experienced in my career.

The shift wasn't learning new practices—I'd practiced TDD and built for sustainability for years. But as a manager coding in limited time, I always had to compromise. I wrote tests, but not as many as I wanted. I refactored, but not as thoroughly. I documented, but not as completely. AI didn't change what I valued—it changed what I could afford to do. The discipline I'd always practiced could finally match the standard I'd always wanted.

Your Turn

I don't have universal answers. But I do have a suggestion:

Measure your balance. Be intentional about it.

Track your next month of commits. Categorize them honestly. Calculate your Functionality:Sustainability ratio.

The number itself matters less than the awareness. Are you making conscious choices about where AI velocity goes? Are you building the feedback infrastructure that enables sustainable speed? Are you just shipping faster, or are you building better systems faster?

For me, the answer has been clear: investing heavily in tests, documentation, and simplification has made me faster, not slower. The 98.3% isn't overhead, it's the engine.

Your mileage may vary. Your context is different. But the question is worth asking:

What kind of engineering does AI make viable for you that wasn't before?

Related Posts

Saturday, January 03, 2026

Stop Building Software. Start Cultivating It

There's a pervasive anxiety in the software industry. It's the feeling of never being good enough, of working late with the constant worry that what you just shipped will explode in production. It's the pressure to go faster, even when you know you're sacrificing quality. It's the frustration of feeling unprofessional, of never quite reaching a state of sustainable, high-impact work. Many of us have been there, living with this constant, low-grade stress.

But there is a better way. There is a path to professional tranquility that also delivers greater business impact. It doesn't come from a new framework or the latest methodology, but from a fundamental shift in how we think about our work. After nearly three decades in this industry, I've come to rely on five counter-intuitive but powerful mindset shifts.

1. Stop "Building" Software. Start Cultivating It.

One of the most damaging ideas I've encountered in our industry is the metaphor of software development as construction. We talk about "building" applications like we build houses. This metaphor is flawed, and it is the root cause of immense dysfunction.

It's harmful because it implies a static, finished product. A house, once built, is largely done. This mindset separates the "building" phase from a supposedly smaller "maintenance" phase. It leads to the absurd but common belief that software must be thrown away and completely rebuilt every few years.

We need a new metaphor: software as something that evolves, like a garden or a living system. It must be cultivated and guided. The single greatest advantage of software is its malleability, its ability to change. The construction metaphor negates this core strength. We are not masons laying permanent bricks; we are gardeners tending to a system that must constantly adapt to its environment.

If we saw the nature of software as something more evolutionary, as something that is alive, as a system that we are modifying all the time... that metaphor seems to me to be much more in line with the real nature of software development.

2. Your Most Valuable Contribution Might Be the Code You Didn't Write

Our industry often rewards the wrong things. Résumés are filled with lists of technologies used and massive projects "built." Productivity is mistakenly measured by the quantity of code written. More features, more complexity, and more lines of code are seen as signs of progress.

The truth is that true impact often comes from simplification. The most valuable work an engineer can do is often invisible. It's achieving an 80/20 solution that delivers most of the value with a fraction of the effort. It's proposing a simpler path that avoids a six-month project. It's having the courage to delete a feature that adds more cost than value.

Every feature, every line of code, has a "basal cost", an ongoing tax on the system. It adds to the cognitive load for new developers, increases maintenance, and creates friction that slows down all future innovation. The best engineers are masters of preventing unnecessary complexity. Their biggest wins, a disastrous project averted, a legacy system retired, will never appear on a performance review, but they are immensely valuable.

3. Agility Is a Strict Discipline, Not a Free-for-All

The word "Agile" has been misinterpreted to the point of becoming meaningless in many organizations. Teams use "delivering value" as an excuse for shipping shoddy work at high speed. For others, "being agile" has come to mean "anything goes", no documentation, no planning, no rigor.

This interpretation is a complete departure from the concept's original intent. True agility is a difficult and demanding discipline. It is not a shortcut or an excuse for chaos. It is a rigorous commitment to practices that enable sustainable speed and responsiveness.

In my head, agility is equivalent to a discipline, and a difficult one at that... a discipline of "Hey, I'm going to write the test first, then I'll do the implementation, then I'll refactor, I'll even eliminate functionalities that aren't useful." It's actually a tough discipline.

This discipline is embodied in concrete, systematic practices. On the technical side, it means Test-Driven Development (TDD) as the foundation for design and quality, Continuous Integration (CI) and Continuous Delivery (CD) to enable rapid, safe deployment, and continuous refactoring to keep the codebase simple and maintainable. On the product side, it means applying Lean Product Development principles to validate ideas before committing to full implementation, running experiments to test hypotheses, and ruthlessly prioritizing based on real user feedback.

But as the above highlights, it's not just about additive practices. It's also a discipline of subtraction, of proactively controlling technical debt before it becomes a crisis, of simplifying systems even when they're "working," and of removing features that don't add value. The goal is to maintain agility, both from a technical perspective (the codebase remains easy to change) and from a product perspective (the team can pivot based on what they learn).

These are not optional extras; they are the very foundation of agility. They require an uncompromising commitment to quality, because it is only through high quality that we can earn the ability to move fast, adapt, and innovate sustainably over the long term.

4. Individual Performance Metrics Are a Trap

There is a growing and dangerous trend of trying to measure individual developer productivity with simplistic metrics like the number of commits, pull requests, or story points completed. This approach is devastating. In 2023, McKinsey published an article proposing a framework to measure individual developer productivity using metrics like "contribution analysis" and "inner/outer loop time spent." The response from the software engineering community was swift and unequivocal. Kent Beck, creator of Extreme Programming, called it "absurdly naive," and he and Gergely Orosz each wrote a detailed rebuttal explaining why such frameworks do far more harm than good.

The problem isn't just with McKinsey's specific approach. It's with the entire premise of measuring individuals in a collaborative discipline. This is a direct application of Taylorism, the management philosophy developed by Frederick Winslow Taylor in the early 1900s for factory work. Taylorism treats people as interchangeable resources to be optimized locally, decomposing work into specialized tasks and measuring each person's individual output. It took manufacturing 50 years to move past this thinking. Yet in software development, a creative, knowledge-based discipline where these ideas are least effective, we continue to apply them universally.

When you incentivize local optimization through individual metrics, the negative results are entirely predictable. As W. Edwards Deming taught, over 90% of an organization's performance is a result of the system, not the individuals within it. But individual metrics create perverse incentives that optimize for the wrong things. We reward behaviors that are easy to measure but destructive to the whole: a high number of commits (encouraging smaller, more frequent check-ins regardless of value), celebrating people for "being 100% busy" (even if they're blocking others), or lionizing "heroes" who constantly put out fires (often of their own making). These incentives inevitably lead to chronic bottlenecks, constant rework, and a toxic hero culture where knowledge is hoarded and collaboration is discouraged.

It is devastating because it promotes individualism in a profession that is fundamentally about collaborative problem-solving. It discourages essential practices like pair programming because it makes it harder to assign "credit." It optimizes for busy-ness (output) instead of actual business results (impact). As Beck and Orosz point out, measuring effort and output is easy, but it changes developer behavior in ways that work against the outcomes that actually matter.

Software development is a team sport centered on learning. The true measure of performance is the impact and health of the team. The most valuable person on a team would often score terribly on these individual metrics. They might be the "glue" that holds everyone together, the mentor who elevates the skills of others, the person who prevents bad code from ever being written, or the engineer who just deleted 10,000 lines of obsolete code, making the system simpler for everyone. Their contribution is profound, yet these metrics would render them invisible or, worse, label them a poor performer.

5. Don't Ask for Permission to Be a Professional

Too many engineers wait for permission to do their job properly. They see quality practices like writing automated tests as something they need to negotiate or justify to management. This is a fundamental mistake.

You don't ask your manager for permission to use a for loop or a recursive function; you use the right tool for the job because you are a professional. Writing tests is the same. It is a non-negotiable, foundational part of professional software development, not an optional feature you need to bargain for.

This responsibility extends beyond just testing. Your job is not merely to execute instructions. It is to solve problems. That means taking the initiative to understand the "why" behind a feature, respectfully challenging requirements that don't make sense, and proposing simpler, better solutions. This isn't overstepping; it is the very core of engineering. By taking this professional responsibility, you build trust, earn autonomy, and position yourself to make a real, lasting impact.

Conclusion: A More Sustainable Future

The key to a more sustainable, impactful, and professionally satisfying career is to abandon the "construction" mindset. When we stop thinking of ourselves as builders of static artifacts and start seeing ourselves as cultivators of living, evolving systems, everything changes.

This single shift in perspective is the thread that connects all five of these principles. It leads us to value subtraction over addition, to embrace discipline over chaos, to measure team impact over individual output, and to take ownership of our professional standards. It is the path away from anxiety and toward durable, meaningful work.

The arrival of AI does not invalidate these principles; it reinforces them. If anything, these mindset shifts will become more critical, not less. When we can generate code faster than ever, distinguishing between building and cultivating becomes more important than ever. When AI can produce features at unprecedented speed, knowing what NOT to build becomes the differentiating skill. The tools evolve and the disciplines adapt, but the goal remains constant: sustainable impact over time, real value delivered, and complexity kept under control.

What is one "construction" habit you can challenge in your team this week to start cultivating your software instead?

References

  • Stop Building Waste: 6 Principles for High-Impact Engineering Teams, Eduardo Ferro (2025)
    eferro.net/stop-building-waste
    A complementary piece exploring how to maximize outcomes while minimizing software complexity.
  • Perverse Incentives, Predictable Results: When Your System Sabotages Your Teams, Eduardo Ferro (2025)
    eferro.net/perverse-incentives
    Explores how Taylorist thinking creates perverse incentives in software development and offers systemic solutions.
  • Measuring developer productivity? A response to McKinsey, Kent Beck and Gergely Orosz (2023)
    Kent Beck's version | Gergely Orosz's version
    A detailed rebuttal to McKinsey's framework, explaining why measuring effort and output damages engineering culture.
  • Yes, you can measure software developer productivity, McKinsey (2023)
    mckinsey.com
    The original McKinsey article proposing individual developer productivity metrics.
  • Basal Cost of Software, Eduardo Ferro (2021)
    eferro.net/basal-cost-of-software
    Introduces the concept of ongoing cognitive and maintenance costs each feature adds to a system.
  • Extreme Programming Explained: Embrace Change, Kent Beck (1999; 2nd Ed. 2004), Addison-Wesley
  • Software has diseconomies of scale – not economies of scale, Allan Kelly (2015, revised 2024)
    allankelly.net/archives/472
  • The Most Important Thing, Marty Cagan (2020), Silicon Valley Product Group
    svpg.com/the-most-important-thing

Tuesday, December 30, 2025

New Site: eferro-talks

I've created a dedicated site to collect all my talks and presentations: eferro-talks

It's a cleaner way to find conference material, with filters by year, language, and core talks.

This joins the rest of my projects at eferro.github.io, where you can also find web apps, custom GPTs, curated resources, and development tools.

The goal is to consolidate scattered material into a single access point while keeping each project with its own identity.

Sunday, December 28, 2025

Stop Building Waste: 6 Principles for High-Impact Engineering Teams

As engineers, we've all been there. We spend weeks, maybe even months, heads-down building a new feature, polishing every detail, and shipping it with pride, only to watch it languish. There's no greater professional waste than to develop something that nobody cares about.

This isn't just a feeling; it's a harsh reality backed by data. A huge portion of the software we build is pure waste. But it doesn't have to be this way. By shifting our perspective on what software is and how we build it, we can transform our teams from feature factories into high-impact innovation engines. Here are six hard truths to get you started.

1. Let's face it: Most of what we build is waste.

The first step is to accept the uncomfortable reality that a large percentage of software features go unused. This isn't an opinion; it's a fact. A well-known study by the Standish Group on custom applications found that a staggering 50% of features are "hardly ever used," and another 30% are "infrequently used."

This isn't just a problem for smaller companies. The best tech giants in the world face the same challenge.

The experience at Microsoft is no different: only about 1/3 of ideas improve the metrics they were designed to improve. Of course there is some bias in that experiments are run when groups are less sure about an idea, but this bias may be smaller than most people think; at Amazon, for example, it is a common practice to evaluate every new feature, yet the success rate is below 50%.

If the top companies, with all their resources and data, have a success rate below 50%, we have to be humble about our own ideas. Many of them are flawed. This means we desperately need a process to filter out the bad ideas before we commit to the expensive process of building them.

2. Think of software as a liability to be minimized.

This might sound counter-intuitive, but it's a critical mindset shift. We tend to think of the code we write as an asset. It's not. It's a liability.

Software is "very expensive to build/maintain/evolve." It's only a "means to achieve impact," not the goal itself. Worse, it has "diseconomies of scale," meaning the more of it you have, the more expensive each part becomes to manage.

This creates a "vicious cycle of software." Your team's capacity is used to build new software. This software immediately adds costs: cognitive load, debugging, monitoring, and architectural complexity. These ongoing costs reduce your team's capacity for future innovation. Because this maintenance cost isn't linear (it grows faster than the size of your codebase), you can quickly find your team spending all its time just keeping the lights on. I call this ongoing cost the "Basal Cost of Software", a term borrowed from basal metabolic rate, where each feature continuously drains team capacity, even when not actively being developed.

The goal of a high-impact team isn't to build more software. The goal is to "Maximize Impact" while actively trying to "Minimize Software."

3. Your engineers are your best source of innovation.

For too long, engineering has been treated as a "feature factory." In this classic model, engineers are seen as "code monkeys" who are handed fully-defined solutions and told to just build them.

This model is incredibly wasteful because it sidelines your single greatest asset. Empowered engineers are the "best single source for innovation and product discovery."

Every engineer on a product team should have a "product mindset" and a sense of "product ownership." This means they think about delivering value, not just features. It means they need to understand user problems deeply, which requires direct contact: engineers should, from time to time, sit in on user interviews or shadow customers themselves. Effective product teams are multidisciplinary, involving the "whole team" in the discovery process, not just product managers or designers working in a silo.

4. Maximize Outcomes, not Outputs.

It's easy to measure progress by the wrong things. This is the critical difference between outputs and outcomes.

  • Outputs: These are the things we create. Functionalities, Requirements, Interfaces, Story Points. They are easy to count but tell you nothing about value.
  • Outcomes: These are the results we want to achieve. Value, Impact, a change in user Behavior, ROI. This is what actually matters.

This distinction represents the next logical step in the evolution of Agile thinking. In the original Agile Manifesto, the authors prioritized "working software over comprehensive documentation." That was a huge step forward twenty years ago, but it's not enough anymore. Today, we need to champion "validated learning over working software."

"Classic" product teams are measured by outputs, which inevitably leads to bloated, low-impact software. Effective teams, on the other hand, focus on maximizing outcomes. Their goal isn't just working software; it's "validated learning." The core principle that drives every decision is simple: "Maximize Outcomes, Minimizing Outputs."

5. Design your systems to enable learning.

If our goal is to validate ideas and learn as quickly as possible, then our technical architecture and practices must be optimized for that goal. Building a high-impact product isn't just about culture; it's about having the technical foundation to support it.

Here are a few key practices that facilitate product discovery:

  • Decouple release from deployment: Releasing a feature to users is a business decision. Deploying code is a technical one. They should not be the same thing. "Feature flags" are the essential tool that separates these two concerns, allowing you to test code in production safely. They are also the foundation for running experiments like A/B testing.
  • Don't fly blind: You can't learn if you can't see. Your system must have robust product instrumentation, metrics, domain events, and operational data. This feedback is essential for understanding user behavior and measuring the impact of your experiments.
  • Create a safe system to learn: Learning requires experimentation, and experimentation requires safety. Your system needs to have a low cost of failure. This is achieved through techniques like canary deployments, a solid experimentation framework, and a blameless culture that encourages trying new things.
  • Enable rapid prototyping: You should be able to validate hypotheses without writing a lot of production-ready code. An extensible architecture with APIs and integrations for no-code solutions empowers the entire team. A product manager or designer with access to open APIs can run dozens of experiments and achieve validated learning without ever needing to change the core production solution.
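To make the first practice concrete, here is a minimal sketch of a feature flag with a deterministic percentage rollout. The flag names, percentages, and in-memory dict are hypothetical; a real system would typically pull flag configuration from a flag service or config store rather than a module-level dict:

```python
# Minimal feature-flag sketch: release (a business decision) is decoupled
# from deployment (a technical one). The code path ships dark behind the
# flag; flipping "enabled" or raising "rollout_pct" releases it without
# a new deployment. Flag names and values here are illustrative.
import hashlib

FLAGS = {
    "new_checkout": {"enabled": True, "rollout_pct": 20},  # 20% of users
    "dark_mode":    {"enabled": False, "rollout_pct": 0},
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministic bucketing: the same user always lands in the same
    bucket, so an A/B experiment stays stable across requests."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Hash user+flag into a bucket 0..99; compare against the rollout %.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < cfg["rollout_pct"]

def checkout(user_id: str) -> str:
    if is_enabled("new_checkout", user_id):
        return "new flow"       # the experiment arm
    return "existing flow"      # the control arm
```

Hashing on `flag + user` (rather than just the user) keeps experiments independent: a user in the 20% bucket for one flag is not automatically in the 20% bucket for every other flag. This same mechanism doubles as the foundation for A/B tests and canary releases mentioned above.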

6. AI changes the game, but not the way you might think.

It's unclear exactly how AI will reshape software development, but one thing is certain: we can now generate code and features faster than ever before. AI coding assistants can produce working software in minutes that might have taken days before.

This sounds like pure upside, but here's the trap: if we simply use AI to build more software faster, we'll only accelerate the vicious cycle. We'll accumulate Basal Cost at an unprecedented rate, drowning our teams in complexity even faster than before.

The real opportunity with AI isn't to build more. It's to learn faster and be more ruthless about what we keep.

AI should enable us to:

  • Run more experiments and validate ideas quickly, iterating solutions before committing to production code.
  • Build prototypes to test hypotheses without the traditional cost of development.
  • Adjust and refine solutions rapidly based on real user feedback.

But this only works if we're radically more aggressive about:

  • Eliminating what doesn't work: If an AI-generated feature doesn't deliver impact, kill it immediately. The lower cost of creation doesn't justify keeping failures around.
  • Controlling complexity: Just because we can build something quickly doesn't mean we should. Every line of code still carries its Basal Cost.

The bottleneck has shifted. It's no longer about how fast we can build. It's about how quickly we can decide what to build and how fast we can learn from what we've built. Teams that master validated learning and ruthless prioritization will thrive. Teams that just use AI to build faster will simply create waste at machine speed.

Conclusion

To break the cycle of building software nobody uses, we have to fundamentally change our approach. We must accept that our job is not just to write code, but to solve problems and deliver value. This requires embracing a new set of principles.

First, recognize that software is a liability to minimize. Second, understand that empowered engineers are the best single source for innovation and product discovery. And finally, design our technical solutions to optimize for learning and discovery. By building our teams, culture, and systems around these ideas, we can stop wasting our effort and start building products that truly make an impact.

What is one thing your team could change tomorrow to optimize for learning instead of just delivery?



References

  • The Four Big Risks, Marty Cagan (2017), Silicon Valley Product Group
    svpg.com/four-big-risks
  • The Most Important Thing, Marty Cagan (2020), Silicon Valley Product Group
    svpg.com/the-most-important-thing
  • Software has diseconomies of scale – not economies of scale, Allan Kelly (2015, revised 2024)
    allankelly.net/archives/472
  • Online Experimentation at Microsoft, Kohavi, Crook, Longbotham et al. (2009), KDD 2009
    microsoft.com/research
    Documents that only ~1/3 of ideas improve metrics at Microsoft; Amazon's success rate is below 50%.
  • Are 64% of Features Really Rarely or Never Used?, Mike Cohn (2015), Mountain Goat Software
    mountaingoatsoftware.com
    Analysis of the Standish Group statistic on feature usage.
  • Extreme Programming Explained: Embrace Change, Kent Beck (1999; 2nd ed. 2004), Addison-Wesley
  • Empowered Product Teams, Marty Cagan (2017), Silicon Valley Product Group
    svpg.com/empowered-product-teams
  • Basal Cost of Software, Eduardo Ferro (2021)
    eferro.net/basal-cost-of-software

Monday, December 22, 2025

Good talks/podcasts (Dec II)

These are the best podcasts/talks I've seen/listened to recently:
  • You ONLY Get Code LIKE THIS With TDD 🔗 talk notes (Dave Farley) [Continuous Delivery, Software Design, tdd] [Duration: 00:16] This talk explores Software Design as the core of development, illustrating how test-driven development (TDD) serves as a critical design tool for achieving modularity, cohesion, and continuous structural improvement.
  • Building effective engineering teams; lessons from 10 years at Google | Addy Osmani 🔗 talk notes (Addy Osmani) [Agile, Continuous Delivery, Engineering Culture] [Duration: 00:31] An exploration of how Engineering Culture integrates Technical Leadership, Management, and Developer Productivity to optimize Teams through Agile and DevOps practices.
  • Rethinking growing engineers in the age of AI | Meri Williams | LDX3 London 2025 🔗 talk notes (Meri Williams) [AI, Engineering Career, Engineering Culture, Technical leadership] [Duration: 00:23] (⭐⭐⭐⭐⭐) Meri Williams explores the urgent need to rethink engineering growth in the age of AI, advocating for a shift from manual coding tasks toward early tech leadership, systems thinking, and a "healthy paranoia" that allows leaders to "surf" the waves of technological change rather than be overwhelmed by them.
  • The Biggest Problem With UI 🔗 talk notes (Dave Farley) [Continuous Delivery, Software Design, team topologies] [Duration: 00:15] This talk explains why UI/UX design should be treated as an integral software design choice rather than a static specification, advocating for development teams to own the UI to ensure the system accurately reflects the user's mental model.
  • Shaped by demand: The power of fluid teams | Daniel Terhorst-North | LDX3 London 2025 🔗 talk notes (Dan North) [Agile, Product Discovery, Teams] [Duration: 00:23] Daniel Terhorst-North presents demand-led planning as a framework for building fluid, autonomous teams that self-organize quarterly to balance delivery, discovery, and Kaizen based on real-time organizational demand.
  • Tidy First? A Daily Exercise in Empirical Design • Kent Beck • GOTO 2024 🔗 talk notes (Kent Beck) [Agile, Engineering Culture, Software Design, XP] [Duration: 00:57] Kent Beck explores software design as a socio-technical exercise in human relationships and economic optionality, offering a framework to balance feature delivery with a sustainable engineering culture rooted in Agile-XP principles.
  • Rafa Gomez - Attacking tech Debt: A Marathon, Not a Sprint - SCBCN 25 🔗 talk notes (Rafa Gómez) [Architecture, Product, Technical leadership] [Duration: 00:41] Learn how the "Marathon" approach enables engineers to tackle technical debt by adopting a product mindset that aligns long-term technical health with business value and consistent product delivery.
  • An Ultimate Guide To BDD 🔗 talk notes (Dave Farley) [Continuous Delivery, Software Design, tdd] [Duration: 00:18] Dave Farley explains how Behavior-Driven Development (BDD) utilizes executable specifications to improve software design and collaboration, facilitating excellence in Continuous Delivery through a user-centric, outside-in approach.
  • Unit Testing Is The BARE MINIMUM 🔗 talk notes (Dave Farley) [Continuous Delivery, Software Design, tdd] [Duration: 00:20] Learn how Test-Driven Development (TDD) serves as a critical act of design to achieve high-quality software by specifying behavior over implementation and enhancing modularity.
Reminder: all of these talks are worth your time, even if you only listen to the audio.

You can explore all my recommended talks and podcasts on the interactive picks site, where you can filter by topic, speaker, and rating.