Sunday, March 29, 2026

The book I've spent 25 years trying not to have to write

"Hoy, cuando la última moda es el AI-assisted coding, sonrío pensando que los principios que aprendí hace diez años siguen siendo los mismos." — del prólogo, por una ingeniera que trabajó en el equipo

Durante más de dos décadas he explicado las mismas ideas a los mismos tipos de equipos. Equipos distintos, empresas distintas, contextos distintos. Y sin embargo, el patrón se repetía con una regularidad que ya no puedo ignorar.

Equipos talentosos. Equipos capaces. Tecnología razonable. Y aun así: la sensación de correr cada vez más fuerte para avanzar cada vez menos.

Lewis Carroll lo describió mejor que yo en A través del espejo: "Aquí, como ves, se requiere correr todo cuanto se pueda para permanecer en el mismo sitio." Durante años pensé que esa metáfora era exagerada. Ya no.

Parte del problema es cómo pensamos sobre el software. Decimos que lo "construimos", como si fuera algo que se termina y queda ahí. Pero el software no se construye. Se cultiva. Es un sistema vivo que crece, cambia y se degrada. Y los sistemas vivos necesitan atención continua, no solo construcción.

He llegado a la conclusión de que lo más honesto que podía hacer era escribirlo.

What the book is about

"Menos software, más impacto" has a subtitle that leaves little to the imagination: how to keep your team from collapsing under the weight of its own code.

The central thesis is uncomfortable: the biggest problem for most teams is not that they write bad code. It's that they write too much code.

Existing software consumes resources continuously, whether you use it or not. Every added feature, every integration, every design decision that accumulates without review carries a cost that never appears on any roadmap but shows up every day. I call it the basal cost of software, by analogy with an organism's basal metabolism: the minimum expenditure needed just to keep functioning. And like metabolism, if left unmanaged it grows until it consumes all the available energy.
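
The basal cost idea can be made concrete with a toy model. This sketch is illustrative only; all numbers are invented and nothing here comes from the book itself:

```python
# Illustrative only: invented numbers modeling "basal cost".
# Each shipped feature permanently consumes a slice of team capacity
# (maintenance, on-call, upgrades), the way basal metabolism consumes energy.

TEAM_CAPACITY = 100.0          # arbitrary units of effort per month
BASAL_COST_PER_FEATURE = 2.0   # ongoing upkeep each feature demands

def free_capacity(features_in_production: int) -> float:
    """Capacity left for new work after paying the basal cost."""
    return max(0.0, TEAM_CAPACITY - BASAL_COST_PER_FEATURE * features_in_production)

for n in (0, 10, 25, 50):
    print(n, free_capacity(n))
# With these made-up numbers, at 50 features the team's entire
# capacity goes to keeping the lights on.
```

The point of the model is the shape of the curve, not the numbers: unmanaged, the fixed cost eventually crowds out all new work.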

The book covers four major blocks:

  • Foundations: what Lean Software Development is and why basal cost is the central concept that connects it all
  • The five principles: eliminate waste, amplify learning, decide at the last responsible moment, deliver as early as possible, empower the team
  • Sustainable quality: why quality is not the enemy of speed but its only lasting foundation
  • Systems thinking: optimize the whole, integrate Lean with XP and a product mindset, and what happens if you do nothing

It runs 192 pages, based on more than 25 years of experience with real teams: Alea Soluciones, The Motion, Nextail, Clarity AI. With concrete cases, real conflicts, and my own mistakes acknowledged. The book also includes the perspectives of eight professionals who lived these transformations from the inside, in different roles and contexts.

Who it's for (and who it's not for)

This is not a book for someone looking to improve their code individually. There are excellent books for that, and this is not one of them.

It's for those who make the decisions about what gets built, what doesn't get built, and what gets removed. Engineering Managers, Tech Leads, Product Managers, CTOs. Anyone with direct responsibility for a team's capacity six months, one year, three years out.

If your day-to-day is deciding priorities, managing capacity, and negotiating scope, what's in the book will feel familiar. And probably uncomfortable. That's the intent.

Why now

There is plenty of literature on Lean, XP, and Agile in English. In Spanish, less than there should be. And almost none that combines the three approaches in an integrated way, with real cases from teams I know firsthand.

The current context also makes it more urgent. The acceleration AI brings makes the decisions about what to build and what not to build more important, not less. Amplifying the capacity of a team that already builds too much doesn't solve the problem. It accelerates it.

The draft is complete. Next come revision, editing, and preparation for publication. If you'd like to be among the first to read it, write to me: eferro@eferro.net

Sunday, March 01, 2026

Encoding Experience into AI Skills

I'd been tweaking my augmented coding setup for months - adjusting CLAUDE.md rules, adding instructions for testing discipline, complexity management, incremental delivery. Things I've repeated to every team I've worked with, now repeated to AI agents. It worked, but it felt like writing the same email over and over.

Then I found Lada Kesseler's skill-factory.


What Skills Are (And Why They Matter)

If you use Claude Code, you already know about CLAUDE.md - a file where you put instructions that the agent reads at the start of every conversation. It works. But it has a problem: everything is always loaded. Your TDD guidelines, your Docker best practices, your refactoring workflow - all of it competing for the agent's limited context window, whether it's relevant or not.

Skills solve this differently. They're packaged knowledge that activates only when relevant. You type /mutation-testing and the agent gains deep expertise about finding weak tests through mutation analysis. You type /complexity-review and it becomes a technical reviewer that challenges your proposals against 30 dimensions of complexity. The rest of the time, that knowledge stays out of the way.

Think of it as progressive disclosure for AI context. The agent gets what it needs, when it needs it.
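
For context, a Claude Code skill is typically a directory containing a SKILL.md file whose frontmatter tells the agent when to load it. The sketch below is illustrative; the names and wording are mine, not taken from the skill-factory repo:

```markdown
---
name: mutation-testing
description: Evaluate whether the test suite would actually detect
  injected bugs, not just execute the code under test.
---

# Mutation Testing

Detailed instructions and reference material that the agent loads
only when this skill is invoked, keeping it out of the context
window the rest of the time.
```

Because the body is loaded on demand, a skill can carry far more detail than you would ever dare put in an always-loaded CLAUDE.md.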

The Discovery: Lada Kesseler's Skill Factory

Lada Kesseler built the skill-factory - a repository with 315 commits of carefully crafted skills covering serious engineering ground: TDD, Nullables (James Shore's pattern for testing without mocks), approval tests, refactoring (using Llewellyn Falco's approach), hexagonal architecture, event modeling, collaborative design, and more.

These aren't toy prompts. The Nullables skill alone includes reference material for infrastructure wrappers, embedded stubs, output tracking, and three different architectural patterns. The approval-tests skill covers Java, Python, and Node.js with scrubbers, reporters, and inline patterns. This is deep, carefully structured knowledge.

Lada also co-created augmented-coding-patterns - a catalog of 43 patterns, 14 obstacles, and 9 anti-patterns for working effectively with AI coding tools. It's a collaboration between Lada Kesseler, Ivett Ordog, and Nitsan Avni. If you're doing augmented coding and haven't seen it, stop reading this and go look.

What I found wasn't just a collection of skills. It was an approach to sharing engineering knowledge with AI agents that I hadn't seen anywhere else.

The Fork as Extension

The natural next step wasn't to start from scratch - it was to fork and extend. Lada's skills already covered testing fundamentals, design patterns, and AI-specific workflows. What I noticed missing were the practices I kept explaining repeatedly: how to manage complexity, how to deliver incrementally, how to make sure tests actually catch bugs.

So I added 11 skills. Not because 16 wasn't enough, but because my particular set of problems needed particular solutions.

You can find my extended fork at github.com/eferro/skill-factory with all 27 skills ready to use.

Testing rigor

test-desiderata - Kent Beck's 12 properties that make tests valuable. Not "does this test pass?" but "is this test isolated? composable? predictive? inspiring?" I was tired of AI generating tests that had coverage but no diagnostic power. This skill makes the agent evaluate tests against each property and suggest concrete improvements.

mutation-testing - The question code coverage can't answer: "Would my tests catch this bug?" Coverage tells you what your tests execute. Mutation testing tells you what they'd detect. I'd already written a blog post about this - now it's a reusable skill. The examples are in Python and JavaScript, but I'm also using it successfully with Go.
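
The core idea fits in a few lines of Python. This is a hand-made mutant for illustration, not output from a real mutation tool:

```python
def can_vote(age: int) -> bool:
    return age >= 18

# A mutation tool would generate variants like this one,
# flipping >= to >, and check whether any test fails:
def can_vote_mutant(age: int) -> bool:
    return age > 18

# This test executes the code (full coverage) yet the mutant survives,
# because 30 behaves the same under both versions:
assert can_vote(30) == can_vote_mutant(30)

# This boundary test kills the mutant: the injected bug changes the result.
assert can_vote(18) is True
assert can_vote_mutant(18) is False
```

A surviving mutant is the signal: the code changed, no test noticed, so that behavior is effectively untested.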

Delivering incrementally and managing complexity

This is where the skills chain together, and where things get interesting.

story-splitting - Detects linguistic red flags in requirements ("and", "or", "manage", "handle", "including") and applies splitting heuristics. It's the first pass: is this story actually three stories wearing a trenchcoat?
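
A toy version of that first pass might look like this. The word list comes from the description above; the detection logic is an invented sketch, far simpler than the actual skill:

```python
# Linguistic red flags that suggest a story bundles several stories.
RED_FLAGS = {"and", "or", "manage", "handle", "including"}

def splitting_red_flags(story: str) -> list[str]:
    """Return the red-flag words present in a user story."""
    words = {w.strip(",.") for w in story.lower().split()}
    return sorted(words & RED_FLAGS)

story = ("As a user, I want to manage my notification preferences "
         "including email, SMS, and push notifications")
print(splitting_red_flags(story))  # ['and', 'including', 'manage']
```

Three flags in one sentence is usually a story wearing a trenchcoat.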

hamburger-method - When a story doesn't have obvious split points but still feels too big, this skill applies Gojko Adzic's Hamburger Method: slice the feature into layers, generate 4-5 implementation options per layer, then compose the thinnest possible vertical slices.

small-safe-steps - The implementation planner. Takes any piece of work and breaks it into 1-3 hour increments using the expand-contract pattern for migrations, schema changes, API changes. Core belief: risk grows faster than the size of the change.
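
The expand-contract pattern the skill leans on can be sketched as a sequence of independently deployable steps. The column names below are invented for illustration:

```python
# Illustrative expand-contract plan for renaming a column
# user.fullname -> user.display_name; each step deploys on its own,
# so the system keeps working between any two steps.
EXPAND_CONTRACT_STEPS = [
    "Expand: add new nullable column display_name, deploy",
    "Expand: write to both fullname and display_name, deploy",
    "Migrate: backfill display_name from fullname",
    "Contract: read from display_name only, deploy",
    "Contract: stop writing fullname, then drop the column",
]

for i, step in enumerate(EXPAND_CONTRACT_STEPS, 1):
    print(f"{i}. {step}")
```

Each step is small enough to revert cheaply, which is exactly why risk stays lower than one big rename would be.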

complexity-review - My inner skeptic, encoded. Reviews technical proposals against 30 dimensions of complexity across 6 categories (data volume, interaction frequency, consistency requirements, resilience, team topology, operational burden). Pushes for the simplest viable approach. Use it when someone says "Kafka" and you want to ask "why not a queue?"

code-simplifier - Reduces complexity in existing code without changing behavior. The cleanup crew after a feature is done.

These five skills work as a pipeline: story-splitting -> hamburger-method -> small-safe-steps for delivery planning, with complexity-review as a gate before implementation and code-simplifier as a sweep after.

Practical tools and team workflows

thinkies - Kent Beck's creative thinking habits, turned into a skill. When you're stuck, it applies patterns like "What would I do if I had infinite resources?", "What's the opposite of my current approach?", "What would make this problem trivial?" It's less about code and more about unsticking your thinking.

traductor-bilingue - Technical translation between English and Spanish that keeps terms like "deploy", "pull request", "pipeline", and "staging" in English (because that's how Spanish-speaking dev teams actually talk). Small thing, but it saves constant corrections.

dockerfile-review - Reviews Dockerfiles for build performance, image size, and security issues.

modern-cli-design - Principles for building scalable CLIs: object-command architecture (noun-verb), LLM-optimized help text, JSON output, concurrency patterns.
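
To show what object-command (noun-verb) structure means in practice, here is a minimal sketch using Python's argparse; the command names are invented, not from the skill itself:

```python
# Minimal noun-verb (object-command) CLI sketch: `mycli user list --json`.
import argparse
import json

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="mycli")
    nouns = parser.add_subparsers(dest="noun", required=True)

    # Noun: the object being operated on.
    user = nouns.add_parser("user", help="Operations on users")
    verbs = user.add_subparsers(dest="verb", required=True)

    # Verb: the operation on that object.
    user_list = verbs.add_parser("list", help="List users")
    user_list.add_argument("--json", action="store_true",
                           help="Machine-readable output")
    return parser

args = build_parser().parse_args(["user", "list", "--json"])
if args.json:
    print(json.dumps({"users": []}))
```

The noun-verb shape scales: adding a new object or a new operation never forces you to rework existing commands, and the `--json` flag keeps output consumable by scripts and LLMs alike.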

A Skill in Action

To make this concrete, here's what the delivery planning pipeline looks like in practice.

Say you have a story: "As a user, I want to manage my notification preferences including email, SMS, and push notifications with scheduling and quiet hours."

Step 1 - You invoke /story-splitting. The agent immediately flags "manage", "including", and the conjunction "and" joining three notification types plus scheduling. It suggests splitting into at least 4 stories: one per notification channel plus quiet hours as a separate slice.

Step 2 - You take the first slice ("email notification preferences") and invoke /hamburger-method. It breaks the feature into layers (UI, API, business logic, persistence) and generates options for each. For the UI layer: (a) full settings page, (b) single toggle, (c) link to email with confirmation, (d) inline in profile. It composes the thinnest vertical slice: a single toggle with an API endpoint and a database flag.

Step 3 - You invoke /small-safe-steps on that thin slice. It produces a sequence of 1-3 hour steps: add the database column with a migration, add the API endpoint with tests, add the UI toggle, wire it together. Each step deployable independently.

No single skill does everything. They compose. That's the point.

How to Get Started

If you want to try these:

  1. Fork the repo: github.com/eferro/skill-factory (my extended fork with 11 additional skills for complexity management and incremental delivery) or the original by Lada Kesseler
  2. Install skills: The repo includes a skills CLI tool. Run ./skills toggle to browse and select which skills to install into your Claude Code setup.
  3. Use them: Type /skill-name in Claude Code. /mutation-testing to check your tests. /complexity-review to challenge a design. /small-safe-steps to plan your next implementation.
  4. Make your own: The repo includes documentation and tooling for creating new skills. Fork it, add what you need, share it back.

Standing on Shoulders

The total is 329 commits, 27 skills across 6 categories. But the number that matters most is that Lada built 315 of those commits. I added 14. The original structure, the skill manager, the testing and design skills that form the foundation - that's all her work. What I did was extend it with the practices I personally find myself repeating.

This is how open source has always worked: someone builds something good, others extend it, and the whole thing becomes more useful than any individual could make it. With AI skills, the effect compounds differently - every skill that gets shared becomes available to every person using it, making good practices almost free.

Lada's augmented-coding-patterns site (with Ivett Ordog and Nitsan Avni) takes this even further - it's not just tooling but a shared vocabulary for how we work with AI. Skills, patterns, obstacles, anti-patterns: a growing body of community knowledge.

What knowledge do you find yourself repeating to your AI agents? What practices would you encode as skills?

The barrier to sharing isn't technical anymore. It's deciding to do it.
