Q: I would be interested in anything you might have about rolling out Agile on a team that depends on components created by non-Agile teams; in particular, how that is affected by different approaches to quality between teams in the same company (and no, we don’t have a common standard, except on paper). I was reading about the GM/Toyota joint venture (the first TPS plant in the US) and how GM had trouble rolling the process out to its other plants. One of the biggest issues was that, unlike in Japan, GM didn’t have the power to push its process down to its suppliers. They quickly found out that they could not build a quality car without quality components, and I am afraid we will find the same here.
A: The reason quality is generally higher with the output of Agile processes comes down to the feedback loops built into Agile: we get feedback on the product and design much more rapidly. Practices like pair programming or lightweight peer review, automated testing, short iterations, automated build/continuous integration, and close collaboration with the customer (or a customer proxy) all tend to give us more feedback on the product/design, which tends to lead to higher quality. My recommendation would be to drive as many of those feedback loops upstream as possible. You don’t control the other teams’ process, but you may be able to influence it at the boundary between them and you.
Close collaboration. The lowest-hanging fruit is probably close collaboration with the customer. In this instance, the Agile team is the customer and the non-Agile teams are the vendors. Consider setting up regular demo/review meetings (probably on the cadence of the Agile team’s short iterations). You may also be able to visit (virtually or physically) on a near-daily basis.
Automated testing. You might also try setting up automated tests at the interface level for the components delivered by the upstream teams. You’ll have to avoid the trap of letting this slide into “contract negotiation over collaboration,” but that is in how you handle it. The key is that you want them to think of the tests as a tool that helps them do their job rather than as a way to enforce something on them. That means they will need the ability to run the tests before delivering to you. It would be better still if they owned the tests and you reviewed them. No matter who owns them, the tests become the specification for the API, which is a good Agile smell.
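As a minimal sketch of what such an interface-level test might look like (the function name `get_stock_level` and its behavior are made up for illustration, not anything from your actual system):

```python
# Hypothetical contract tests for an upstream component's API.
# The function below stands in for the real upstream implementation;
# in practice you would import it from the delivered component.

def get_stock_level(sku):
    """Stand-in for the upstream component's real function."""
    return {"WIDGET-1": 42}.get(sku, 0)

# Each test captures one behavior both teams agree the API must keep.
def test_known_sku_returns_positive_count():
    assert get_stock_level("WIDGET-1") > 0

def test_unknown_sku_returns_zero_rather_than_raising():
    assert get_stock_level("NO-SUCH-SKU") == 0
```

Because the tests state agreed behavior rather than implementation details, the upstream team can run them before every delivery, and the test file itself doubles as the API specification.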
Peer review. At this point, you are already collaborating on and reviewing the test code. That might lead to a situation where you are able to peer review their production code as well. I’d prefer a peer review approach that helps them improve their code (and learn how to write better code in the future) over one that just allows you to fix their code after the fact.
Automated build. If you were to give them access to your build process, they would also be able to verify the compile-time agreement between their code and yours. This comes with two immediate benefits: (1) it serves as an additional automated test of the interface, and (2) combined with the other automated tests, it gives them more confidence to refactor their code and make improvements. The assumption here is that most teams know their code has warts but are afraid to modify it because they fear breaking code that depends on it. Running your build script lowers that fear.
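One way to picture the shared build is as a script that runs each step in order and stops at the first failure. A rough sketch, where the step names and commands are placeholders rather than your actual build:

```python
import subprocess
import sys

def run_build(steps):
    """Run (name, command) build steps in order.

    Returns the name of the first failing step, or None if all pass.
    """
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            return name
    return None

# Placeholder steps; a real build would compile both teams' code and
# then run the interface tests against the combined result.
steps = [
    ("compile check", [sys.executable, "-c", "pass"]),
    ("interface tests", [sys.executable, "-c", "pass"]),
]
```

If the upstream team can run the same script, a breaking change in the interface fails on their desk instead of yours.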
There is a third (and potentially more powerful) benefit to a shared build process: it provides a place to plug in other quality-improving tests and analysis. The automated tests I proposed above run against their upstream code. With an automated build, you could also include tests that run against your downstream (but higher-level) code, which means they could see whether their changes break your higher-level functionality. You’d have to build against a stable version of your source so they could be sure any failure was theirs, but a distributed source control tool or careful branch management can overcome that obstacle. The build is also a natural place to run automated bug finders like FindBugs, or even custom analysis such as a tool that highlights any change in a calling signature.
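To give a flavor of that last kind of custom analysis, here is a small sketch of a signature-change checker built on Python’s `inspect` module. The idea of snapshotting signatures from one build to the next is my assumption about how such a tool might work, not a description of any existing tool:

```python
import inspect

def snapshot_signatures(namespace):
    """Record the signature of every public function in a namespace."""
    return {
        name: str(inspect.signature(obj))
        for name, obj in vars(namespace).items()
        if inspect.isfunction(obj) and not name.startswith("_")
    }

def signature_changes(old, new):
    """Map each changed or removed function name to
    (old signature, new signature or None)."""
    return {
        name: (sig, new.get(name))
        for name, sig in old.items()
        if new.get(name) != sig
    }
```

Run at build time against the previous build’s snapshot, this flags any change in a calling signature before downstream code discovers it the hard way.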
Please let me know if any of this helped. Maybe I can refactor and improve my answer (upstream product) based upon your feedback (from downstream). 😉