Adopting Agile doesn’t mean forgetting what you’ve learned

Agile is particularly attractive to two very different groups: 1) those whose organizations don’t already have evolved practices, and 2) those whose processes have grown burdensome. In both cases, there is a tendency to run your first Agile sprints with a bare-minimum process. After all, that’s what Agile says to do, right?

Not right! It’s a common misconception, but minimizing process weight is not the controlling function of Agile adoption.

The controlling idea of Agile is learning (visibility and inspection/retrospection) and applying that learning (adaptation). Agile practices are predicated on the idea that trying to apply someone else’s process template to your situation is rarely ideal and often counterproductive. Rather, the Agile approach is principle-based, allowing you to adapt as you learn and as your situation changes.

So what does this mean? Well, if you currently have over-evolved processes, don’t throw the baby out with the bath water. If your organization has experienced problems in the past and the situation hasn’t changed in a way that invalidates that “learning”, don’t necessarily throw out all the process elements that resulted from those experiences. Similarly, if you are just getting started, don’t be afraid to learn from others who have gone before. In fact, using what you’ve learned is the foundation of the Agile approach.

Let me give you a concrete example. It’s no coincidence that every strong team I’ve ever seen has mandated some means of getting another set of eyeballs on the code. In some organizations, this means “formal inspections”. You may believe that formal inspections cost more than they’re worth in your situation, and I won’t disagree with you, but even XP advocates pair programming. Open source development has the “many eyeballs” effect built into its licensing and commit practices. For closed source practitioners, what I’m starting to see more of is some form of asynchronous peer review using tools like Google Code Reviews.

Why do all these strong practitioners utilize some form of peer review? Because we’ve shown time and again, in quantitative research as well as in qualitative studies, that it’s the single most efficient thing we can do to remove defects from the code. In general, it’s more efficient than testing at removing all kinds of defects, PLUS it allows you to systematically address things that testing cannot, like maintainability/evolvability (code smell issues), as well as things that are nearly impossible to find with testing, including certain kinds of security and concurrency issues. Why, then, do I see Agile teams going through their first few sprints without any form of peer review?
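To make the concurrency point concrete, here is a minimal sketch (mine, not from the original post) of the kind of defect a reviewer spots in seconds but a small test suite may never trip: an unsynchronized read-modify-write on shared state. The class and function names are illustrative, not from any particular codebase.

```python
import threading

class Counter:
    """A shared counter with a racy and a reviewed (locked) increment."""

    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment_unsafe(self):
        # Read-modify-write is not atomic: two threads can read the same
        # value and one update is silently lost. Short test runs often
        # pass anyway, which is why testing rarely catches this.
        self.value += 1

    def increment_safe(self):
        # The fix a reviewer would ask for: guard the read-modify-write.
        with self._lock:
            self.value += 1

def hammer(method, n_threads=8, n_iters=10_000):
    """Call `method` n_threads * n_iters times from concurrent threads."""
    threads = [
        threading.Thread(target=lambda: [method() for _ in range(n_iters)])
        for _ in range(n_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

With `increment_safe` the final count is deterministically `n_threads * n_iters`; with `increment_unsafe` it may or may not come up short on any given run, which is exactly what makes the bug invisible to testing and visible to review.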

Similarly, there is a tendency to swing too far on the issue of design. Sure, I’m a firm believer in YAGNI and in the idea that the best way to improve a design is to evolve working code. I’ve suffered the paralysis of analysis. I’ve even been the guilty party. However, refactoring is not a substitute for design. It’s very difficult to achieve certain non-functional requirements (scalability, security, etc.) without some architecture work up front. Similarly, depending upon your situation, appropriate requirements elicitation and documentation practices can save much more than they cost.

The good news is that Agile supports doing these things through its “definition of done”. Furthermore, if you fail to do them, it provides the feedback loops that will highlight the need for them later. However, do yourself a favor: when you are trying to settle on your FIRST “definition of done”, include at least some form of peer review for your code.


This entry was posted in Software craftsmanship.
