Top 10 questions when using Agile on hardware projects


Recently, I have had the chance to work closely with a number of projects that were not pure software. They all had some software or firmware component, but they also included an electronics or even a mechanical design aspect. Below are the top ten questions I recorded while working with these teams, along with how the teams effectively answered them.

1.    Are Agile practices and processes effective when conducting non-software projects (firmware, electronic, mechanical, etc.)?

Absolutely. Some of the XP engineering practices are not directly applicable or need to be modified based upon industry or your particular situation. However, surprisingly minor adjustments are all that is necessary for the Scrum process framework to be highly effective… even compared to processes that evolved specifically for hardware.

2.    What adjustments need to be made for the Scrum process framework to work well for these projects?

Surprisingly few. The primary adjustments center around expectations in two general areas: (1) minimal marketable feature/emergent design/thin vertical slices, and (2) user stories.

3.    What adjustments need to be made to our expectations around minimal marketable feature, emergent design, and thin vertical slices?

Focus on feedback. True, the break-even point between thinking versus building encompasses more thinking for non-software projects. Even so, push to build something sooner, and even when you are working on early design and infrastructure, get feedback on some “product” each iteration… and get that feedback from a source as close to the user as possible. In software, it’s almost always less expensive and more effective to build something rapidly, get feedback on it from an actual user, and change it than it is to think longer about how to build it expecting to avoid later rework. This is the primary reason why Agile practices encourage you to break up the work into increments of minimal marketable features (MMFs). Agile software projects try to build as little infrastructure as necessary to implement the current MMF and let the design emerge from those increments rather than nail down all the requirements and design up front. You are encouraged to build the system in thin vertical slices where all levels of the product experience change with each increment.

When confronted with this idea, even software-only teams push back. Software teams want an architecturally sound base upon which to build their features. However, I have found that if the team thinks on it for a bit, they can often find a way to build usable features and start to get feedback much sooner than the team originally imagined. Those projects are able to deliver a marketable feature even in the first few iterations, and they move rapidly to a mode where very little of each iteration is spent on infrastructure. The primary difference for hardware projects is that you have a different cost structure for manufacturing, so the break-even point for thinking versus building encompasses a bit more thinking. This means that it may take longer to get into the mode of delivering several MMFs each iteration.

Remember, the primary benefit of this approach is to get the most valuable possible feedback as often as possible. So even when it is hard to use thin vertical slices to accomplish that, you should still seek out opportunities to enhance the richness and frequency of the feedback you receive by producing something to get feedback upon during EVERY iteration. The next step down from demonstrating an MMF is producing a prototype, but even that can be hard to do every iteration of a hardware project. So when the thing you produce is only a document, a design, or an experiment, make an effort to maximize the value by choosing to get your feedback from a source as close to the customer as possible.

4.    What adjustments to our expectations need to be made around user stories?

Understand that the “user” in user stories hints at only one of four good reasons to manage requirements with user stories… and the most important reason, conversations, is not among them. Maybe they should be called “conversational stories.” The big hang-up hardware teams have with managing requirements via user stories centers on the word “user”. That’s unfortunate, because I don’t think that is the most important benefit that Agile teams (even software teams) get from the practices surrounding user stories.

The benefits come from four aspects: (1) WHO, (2) WHAT, (3) WHY, and (4) conversations. This last one, conversations, is the most valuable, so I’ll talk about it first.

Even when drafts of requirements documents are shared with the development team for early feedback, the development team doesn’t internalize them until they start working with them. By having the team size, and often write, the user stories, you force them to start this internalization much sooner. The early conversation between the development team and stakeholders around the requirements allows implementation costs to be factored into requirements tradeoff decisions. The ongoing conversation throughout the project lifecycle provides a high-fidelity communication channel and continuous vision alignment.

Now, let’s address the other three beneficial aspects one at a time. The traditional user story format is, “As a WHO, I want to WHAT, so that I can WHY.” All three of these, WHO, WHAT, and WHY, provide benefits. The tension of trying to always make the WHO be an end user is not unique to hardware. Even in the most Agile of software-only teams, there are times when the end user is only indirectly the beneficiary of a particular backlog item. For instance, the most direct beneficiary of a research activity, a mockup, or a prototype (collectively referred to as a “spike” in the agile world) is the development team and not the end user. In those cases, specifying the WHO does not encourage you to think about the product from the end user’s perspective. If every user story were this way, then we wouldn’t call them “user stories”. The “user” is in the phrase to remind us to get the user perspective involved as often and as soon as possible, but just like MMFs, this practice is harder to apply as often, especially early in a hardware project.

I will not dwell on the WHAT because this element is present in all approaches to requirements management, except to mention that it is important for the WHAT not to drift over into the HOW, so the development team has flexibility in how they meet the identified need. Note: The Rally tool’s entity for “user stories” is really more of a generic backlog item. There is nothing in the tool that enforces, or even makes awkward, the use of this entity in a traditional work breakdown mode.

On the other hand, the WHY is somewhat unique to the practice of user stories and can be very valuable. Understanding why someone wants something empowers the development team to be creative about satisfying a need… sometimes even by explicitly not satisfying the WHAT of the user story. If a team is told that it must implement a data layer with sub-millisecond response, they may blindly go about accomplishing that… at great cost. My first response to a user story written like that is that it crossed the border from WHAT into the realm of HOW. Nevertheless, if you give a team a user story like that but also tell them that the reason for this “requirement” is the responsiveness of the user interface, they may take steps to provide low latency to user input even when the data does not make it all the way to the data layer for a second or more… saving cost AND improving the product.

5.    What about prioritizing user stories strictly by value to the end user?

Prioritize by overall business value, not end-user value. Even in software-only projects, user stories should be prioritized by overall value to the business, not the end user. Often that is the same thing, and certainly the end user’s needs are the biggest factor in prioritizing any user story with an end user as the WHO. However, a feature that is desirable to the end user but not saleable might not be valuable to your business. Similarly, valuable features that are too costly (either to produce or as a tradeoff against other desirable features) might not be good decisions.

Apple has been criticized for excluding multi-tasking from the iPhone. They realized that multi-tasking negatively impacted battery life and user interface responsiveness and explicitly left it out of the product. They made a business decision that they could still sell the iPhone even without this high-profile feature.

However, before they made this decision, they needed some information. How much do background tasks hurt battery life and responsiveness? How amenable would potential customers be to purchasing a product without it? Apple could easily justify investments in research to determine the extent of this impact on both the usability and marketability of the product. This information is of no direct benefit to the end user, but the work necessary to gather it was of immense benefit to the business.

Development projects of all kinds benefit from good design and marketing decisions. Backlog items focused solely on these outcomes are of value to the business and should get appropriate prioritization. Similar to the above discussion on MMFs and the WHO in user stories, it just may be that non-software projects experience more of these tradeoff analysis backlog items early in the project and they keep seeing them longer into the development cycle.

6.    Should user stories be our only tool for requirements management?

Not usually for hardware/mixed projects. There are many reasons why you might want some other mechanism to complement your user story practice. For instance, the concept of abuse cases is often part of a larger security review. Safety reviews often have a parallel mechanism. Protocols and other interfaces are best defined by other means. Hardware typically has requirements associated with the operating environment. And so on.

7.    But user stories are not even an official requirement of Scrum so why shouldn’t we just use our traditional requirements practices?

Consider alternatives but remember all four valuable aspects of user stories. It is true that the official definition of Scrum simply calls for there to exist a backlog of work. It only mentions user stories in a sidebar and even then, the sidebar also mentions other approaches like use cases. The essence of Agile is (1) self-organize, (2) do something, and (3) inspect and adapt. The definition of Scrum is just one step more detailed than this essential definition of Agile and is intentionally minimalistic so any iterative agile approach would fit.

User stories have emerged as a common and valuable practice for the reasons mentioned above, but they are not strictly required. Your team should feel empowered to consider alternatives.

However, if your team chooses another approach to requirements management, you should not deviate from the agile practice of having the development team do the estimating. Also, I encourage you to think about the reasons the practices surrounding user stories are valuable (other than the emphasis on the user), as described above, and preserve as many of those benefits as possible, starting with the conversation aspect.

8.    What about when we need to send a board (or prototype part) out for manufacturing and it will not be done within an iteration?

Push for rapid prototyping but adapt to your capability.
This is a very specific question that comes up often when folks are told that they need to produce something upon which to get feedback during each iteration. What if the time it takes to get prototype parts back from manufacturing is longer than an iteration?

My first response is to ask yourself, “Is there ANYTHING that we can do so that we CAN produce a prototype in an iteration?” The world of prototyping has attempted to keep up with the ever-increasing pace of change. There now exist component suppliers that allow you to upload a part design in the morning so that they can produce and ship it overnight. Those services are expensive, but so is the time of your team. Failing some solution like that, ask, “Is there a different way to produce something to get the answers and feedback we need for decisions within a single iteration?”

If you still cannot think of a way to produce it within one sprint, you can handle it by breaking the backlog item down. The first portion includes whatever work is necessary to place the order for the part. The later portion includes any evaluation activities. Collectively, they have value to the business.

9.    What about dependencies and critical path analysis?

Supplement when needed, but ask if it is really needed. Dependencies are considered by the product owner and the development team when choosing stories for a particular iteration. However, the consideration of dependencies is informal and not explicit like in a Gantt chart format (think Microsoft Project). I have worked with teams where explicitly and continuously conducting this sort of critical path analysis is… well… critical to their success; but I have worked with many more teams where the use of a Gantt chart is merely the default and what they are used to. For those projects, the most important thing is for each team member to know what they should be working on right now and have a sense of urgency about getting it done-done! The mechanisms in the Scrum framework are highly effective at accomplishing this. If you do need to conduct critical path analysis at some point, I suggest that you do it only as needed.

Note: The Rally tool includes functionality for you to record dependencies so that they are readily available when you are making decisions about what to work on next.

10.    Maybe we don’t need continuous critical path analysis, but we still have specialists that are not permanently dedicated to the team. How do we deal with that?

Favor cross-training and generalist team members, but fall back to explicit allocation and coordination when necessary. The Agile approach is to fully dedicate as many of these specialists to the team as possible. Even when you know it’s not a full-time job for a particular specialty, it still might be better to supplement those specialists’ workloads with team tasks that are outside of their specialty. We find that becoming Agile tends to encourage more generalists (or at least multi-specialists) to emerge. This cross-training is generally positive on its own merit, but doubly so when you factor in the cost of task switching and the productivity benefits you get once a team learns how best to work together (think Forming-Storming-Norming-Performing).

Even so, there may still be some centralized functions that your teams will need to consult. It is often possible to handle these situations by leveraging the team’s approach to dealing with outside suppliers.

When you move the solid line from a functional manager to a team lead and make the functional manager the dotted line, it will bring up many issues like personnel reviews and career counseling. The coaches at Rally have experience with companies making these transitions and can help you with those tough issues but you will have to work through them. “Agile is easy. Implementing Agile is a bit more difficult.”

 

A multiple file loader for Flex/Flash/ActionScript 3 (AS3)


The URLMultiLoader class in this library will load multiple files and optionally “process” them before calling the method specified for Event.COMPLETE. Since file loading in the Flash/Flex/AS3 world is completely asynchronous, when you need to load more than one file the hackish solution is to make the COMPLETE handler for the first one initiate the load for the second, and so on until all the files are loaded. URLMultiLoader instead allows you to set up one COMPLETE handler, which will not be called until all the files you specify are loaded (and optionally “processed”).
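The coordination pattern behind this is simple: queue up the loads, start them all, and fire a single completion callback only after the last one reports back. Here is a minimal sketch of that pattern in TypeScript (used here only because it reads like AS3); all of the names are illustrative, not the actual URLMultiLoader API.

```typescript
// A "starter" kicks off one async load and calls `done` with the result.
type Starter<T> = (done: (value: T) => void) => void;

class MultiLoader<T> {
  private starters: Array<[string, Starter<T>]> = [];

  // Queue a load under a key; nothing starts until load() is called.
  add(key: string, start: Starter<T>): void {
    this.starters.push([key, start]);
  }

  // Start every queued load; invoke onComplete exactly once, when the
  // outstanding count drops to zero.
  load(onComplete: (results: Map<string, T>) => void): void {
    const results = new Map<string, T>();
    let pending = this.starters.length;
    if (pending === 0) {
      onComplete(results);
      return;
    }
    for (const [key, start] of this.starters) {
      start(value => {
        results.set(key, value);
        if (--pending === 0) onComplete(results); // last load fires "COMPLETE"
      });
    }
  }
}
```

With this shape, the caller registers one completion handler up front instead of chaining each file’s COMPLETE handler into the next load.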

When I first had need for this, I said to myself that someone must have done this before. It seems like a fairly common need. However, when I went looking, I couldn’t find anything that fit the bill, so I decided to write my own. It was actually a very good way to get familiar with the event system. Also, while I was at it, I figured I’d allow the injection of a processor for each file and make sure each file got processed before proceeding. If anyone knows of another tool like this, please post a link to it in the comments. Actually, it wouldn’t surprise me if this functionality is built into the Flex framework somewhere and I just missed it.

Update: The functionality mustn’t be in Flex, because I have now found several other similar controls.

Mine is relatively simple compared to some of these. BulkLoader seems particularly featureful. It has bandwidth stats and progress indicators. For my loading needs, the sizes were small enough that I wasn’t worried about progress or bandwidth, but I may update mine to include these features in the future.

One feature that mine has that many do not is the optional ability to inject an IDataProcessor that will pre-process your data before returning it to you.

DataProcessorXMLStringToArray is provided as an example IDataProcessor that can optionally be passed in when adding a new URLRequest to the queue. If provided, an IDataProcessor will convert the raw file string (or binary, or url variables) into some other form before returning. Complete documentation for DataProcessorXMLStringToArray is provided in the ASDoc header for the class but it is offered here primarily as an example. You can easily create your own and inject them when setting up the URLMultiLoader. You just need to follow the IDataProcessor interface which has one method with the following signature:

function processData(data:*):*

Remember, the processor is totally optional. If omitted, URLMultiLoader will simply copy the file contents into its output data field. The type of the data in that case will depend upon the URLLoaderDataFormat: String for TEXT (default), ByteArray for BINARY, and URLVariables for VARIABLES.
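The hook itself amounts to one decision point per file: if a processor was injected, pass the raw data through it before storing; otherwise store the raw data as-is. A TypeScript sketch of that idea (names like `storeResult` and `parseJson` are mine for illustration, not the AS3 API):

```typescript
// Mirrors IDataProcessor's processData(data:*):* — take anything, return anything.
type DataProcessor = (data: unknown) => unknown;

function storeResult(raw: unknown, processor?: DataProcessor): unknown {
  return processor ? processor(raw) : raw; // pre-process only when one was injected
}

// e.g. a processor that parses a raw JSON string into an object, analogous to
// DataProcessorXMLStringToArray turning a raw XML string into an Array.
const parseJson: DataProcessor = raw => JSON.parse(raw as string);
```

So `storeResult('{"id": 101}', parseJson)` hands back a structured object, while `storeResult("plain text")` passes the string through untouched.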

Let’s see it in action.

package
{
	import com.maccherone.urlmultiloader.*;
	import com.maccherone.json.JSON;  // Only used for pretty output
 
	import flash.display.Bitmap;
	import flash.display.Loader;
	import flash.display.Sprite;
	import flash.events.Event;
	import flash.events.IOErrorEvent;
	import flash.net.URLLoaderDataFormat;
	import flash.net.URLRequest;
 
	public class URLMultiLoaderTest extends Sprite
	{
		private var urlMultiLoader:URLMultiLoader = new URLMultiLoader()
		private var baseURL:String = "data/"
		private var urlRequest1:URLRequest = new URLRequest(baseURL + "file.xml")
		private var urlRequest2:URLRequest = new URLRequest(baseURL + "file.xml")  // Same file but we'll get it in a different format
		private var urlRequest3:URLRequest = new URLRequest(baseURL + "smile.gif")
 
		public function URLMultiLoaderTest() {
 
			var dataProcessor:IDataProcessor = new DataProcessorXMLStringToArray()  // Example provided with URLMultiLoader. You can create your own.
 
			urlMultiLoader.addURLRequest("Request1", urlRequest1, dataProcessor)
			urlMultiLoader.addURLRequest("Request2", urlRequest2)  // With no IDataProcessor, the file's contents are returned as String, ByteArray, or
			                                                       // URLVariables depending upon the URLLoaderDataFormat: TEXT, BINARY, or VARIABLES
			urlMultiLoader.addURLRequest("Request3", urlRequest3, null, URLLoaderDataFormat.BINARY)  // Loads smile.gif as a ByteArray
 
			urlMultiLoader.addEventListener(Event.COMPLETE, filesLoaded)
			urlMultiLoader.addEventListener(IOErrorEvent.IO_ERROR, onError)
			urlMultiLoader.load()
		}
 
		private function filesLoaded(event:Event):void {
			var data:Object = (event.target as URLMultiLoader).data
			trace("Array of Objects:\n" + JSON.encode(data["Request1"], true) + "\n") // Uses JSON.encode for pretty output
			trace("String of file contents:\n" + data["Request2"] + "\n")
			var loader:Loader = new Loader();
			loader.loadBytes(data["Request3"]);
			this.addChild(loader)  // Displays smile.gif in Flash player
		}
 
		private function onError(event:Event):void {
			trace(event)
		}
	}
}

Assuming you put file.xml and smile.gif in a data/ folder below the bin-debug directory and you have the correct security settings, the above code will result in the following output:

Array of Objects:
[
    {"id": 101, "name": "/db/node/visitor"},
    {"id": 102, "name": "/db/node/observer"},
    {"id": 103, "name": "/ui/button"}
]
 
String of file contents:
<?xml version="1.0" encoding="UTF-8"?>
<root>
  <file>
    <id>101</id>
    <name>/db/node/visitor</name>
  </file>
  <file>
    <id>102</id>
    <name>/db/node/observer</name>
  </file>
  <file>
    <id>103</id>
    <name>/ui/button</name>
  </file>
</root>

Plus it will display smile.gif in the Flash player like this:

[Screenshot: smile.gif displayed in the Flash player]

You can download it from here.

Update: I altered URLMultiLoader to use a string as the key for retrieving the data after loading is complete. A previous version used the URLRequest as the key for a Dictionary object. This version does not depend upon Dictionary.

 

ActionScript 3 (AS3) JSON encoder with “pretty” output by adding linefeeds and spaces


I’m sure many ActionScript 3 or Flex developers have used as3corelib for one reason or another. It’s a wonderful little library with lots of useful functionality. I’ve frequently used its JSON encoding and decoding functionality. It works great; however, it doesn’t add spaces or linefeeds to make the resulting JSON string more readable. That’s fine if you are just serializing something to send over the wire, but not if you want to render something that is easily read by a human. In my case, I want a user to actually be able to edit the resulting JSON. To make this workable, I needed a JSON encoder that would add appropriate linefeeds and spaces. Rather than write my own, I simply adapted the one in as3corelib.

One side benefit of having done this is that you can now get JSON serialization without getting the entire as3corelib library.

The default interface is identical to the one in as3corelib. If you just call JSON.encode(my_object), it will behave almost exactly like the one in as3corelib. I say “almost” because my version adds a space after each “:” and “,” even in default mode. Update: I’ve changed it so it behaves exactly like the serializer in as3corelib, so no extra spaces are added unless you use the optional parameters described below.

If you want linefeeds and truly “pretty” output, you can add an optional second parameter, like so: JSON.encode(my_object, true). This will cause any array [ ] or object { } that would be longer than 60 characters to wrap to new lines, which works out about right for my purposes.

You can also adjust the maximum line length with an optional third parameter, like this: JSON.encode(my_object, true, 10). This will cause any line longer than 10 characters to wrap. If you want every array [ ] and object { } to wrap, just use any number 2 or lower for this third parameter. If you want it to wrap everything but empty objects or arrays, use 3 for this parameter.
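The wrapping rule can be summarized as: serialize a value on a single line first, and only re-emit it across indented lines when the one-line form reaches the maximum length. Here is a rough sketch of that rule in TypeScript; this is my reading of the behavior described above, not the actual encoder source, and the exact boundary comparison is an assumption.

```typescript
// Serialize a value as a single line with the ": " and ", " spacing shown above.
function flat(value: any): string {
  if (Array.isArray(value)) return "[" + value.map(flat).join(", ") + "]";
  if (value !== null && typeof value === "object") {
    return "{" + Object.keys(value)
      .map(k => JSON.stringify(k) + ": " + flat(value[k])).join(", ") + "}";
  }
  return JSON.stringify(value); // strings, numbers, booleans, null
}

function prettyEncode(value: any, maxLineLength = 60, indent = ""): string {
  const oneLine = flat(value);
  if (oneLine.length < maxLineLength || value === null || typeof value !== "object") {
    return oneLine; // short enough (or a scalar): keep it on one line
  }
  // Too long: wrap each element onto its own line, indented four spaces deeper.
  const child = indent + "    ";
  const open = Array.isArray(value) ? "[" : "{";
  const close = Array.isArray(value) ? "]" : "}";
  const parts = Array.isArray(value)
    ? value.map(v => child + prettyEncode(v, maxLineLength, child))
    : Object.keys(value).map(k =>
        child + JSON.stringify(k) + ": " + prettyEncode(value[k], maxLineLength, child));
  return open + "\n" + parts.join(",\n") + "\n" + indent + close;
}
```

With the default of 60, a small object like {"count": [1, 2, 3]} stays on one line, while anything longer wraps the way the traces below show.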

Let’s see it in action.

package
{
    import com.maccherone.json.JSON;
    import flash.display.Sprite;    
 
    public class Tester extends Sprite
    {
 
        public function Tester()
        {
            var obj1:Object = {
                commit: {file: "commit.xml"},
                commit_detail: {file: "commit_detail.xml"},
                file: {file: "file.xml"},
                person: {file: "person.xml"},
                count: [1, 2, 3]
            }
            trace("Just like as3corelib (no line feeds or spaces):\n" + JSON.encode(obj1) + "\n");
            trace('"Smart" linefeeds:\n' + JSON.encode(obj1, true) + "\n");
            trace("Only allow short lines:\n" + JSON.encode(obj1, true, 10) + "\n");
 
            var obj2:Object = {
                "glossary": {
                    "title": "example glossary",
                    "GlossDiv": {
                        "title": "S",
                        "GlossList": {
                            "GlossEntry": {
                                "ID": "SGML",
                                "SortAs": "SGML",
                                "GlossTerm": "Standard Generalized Markup Language",
                                "Acronym": "SGML",
                                "Abbrev": "ISO 8879:1986",
                                "GlossDef": {
                                    "para": "A meta-markup language, used to create markup languages such as DocBook.",
                                    "GlossSeeAlso": ["GML", "XML"]
                                },
                                "GlossSee": "markup"
                            }
                        }
                    }
                }
            }
            trace("A bigger example from JSON.org:\n" + JSON.encode(obj2, true));
        }
    }
}

The above code would result in the following output:

Just like as3corelib (no line feeds or spaces):
{"file":{"file":"file.xml"},"commit":{"file":"commit.xml"},"commit_detail":{"file":"commit_detail.xml"},"person":{"file":"person.xml"},"count":[1,2,3]}
 
"Smart" linefeeds:
{
    "file": {"file": "file.xml"},
    "commit": {"file": "commit.xml"},
    "commit_detail": {"file": "commit_detail.xml"},
    "person": {"file": "person.xml"},
    "count": [1, 2, 3]
}
 
Only allow short lines:
{
    "file": {
        "file": "file.xml"
    },
    "commit": {
        "file": "commit.xml"
    },
    "commit_detail": {
        "file": "commit_detail.xml"
    },
    "person": {
        "file": "person.xml"
    },
    "count": [1, 2, 3]
}
 
A bigger example from JSON.org:
{
    "glossary": {
        "GlossDiv": {
            "GlossList": {
                "GlossEntry": {
                    "GlossSee": "markup",
                    "GlossTerm": "Standard Generalized Markup Language",
                    "ID": "SGML",
                    "GlossDef": {
                        "para": "A meta-markup language, used to create markup languages such as DocBook.",
                        "GlossSeeAlso": ["GML", "XML"]
                    },
                    "Abbrev": "ISO 8879:1986",
                    "Acronym": "SGML",
                    "SortAs": "SGML"
                }
            },
            "title": "S"
        },
        "title": "example glossary"
    }
}

Note that (unlike XML) the order of the elements in a JSON object { } is indeterminate. Of course, the order of an array [ ] is preserved.

You can download it from here.

 

Measuring Craftsmanship


I’m on board with the Agile approach to software development, and I have a strong history with process approaches to improvement (ISO-9000, CMM, CMMI, TSP, etc.). That said, I have always believed that the quality of the people doing the work is the biggest factor in success. The software estimation technique COCOMO reveals this: “personnel attributes” dominate just about all COCOMO estimation models.

I think David Starr over on Elegant Code hits the right note in pointing out how this manifests itself in his post on Measuring Craftsmanship. His post starts with a distraction by arguing against the idea of measuring Agile maturity. I say distraction because I think it’s possible to agree with his message about the importance of craftsmanship no matter how you feel about the idea of creating an Agile Maturity Model.

The emphasis needs to always be on the people doing the work. In particular, I like his idea of “picking a guild” as a source for your skills criteria. Because of the way the software industry works, each organization might be its own guild, so I think it will be hard to agree upon the list of guilds and find a set of criteria most appropriate for each of them, but I think it is possible to create a map between certain practices and required skills.

For instance, if you are going to rely upon refactoring and emergent design, you had better have strong design-patterns skills. The same goes for tool usage. The practice of continuous integration requires build tool skills. Your team’s approach to software assurance also dictates the skills you need. If you rely heavily upon automated testing, then you need skills with automated testing tools and patterns that enable design for testability. If you rely more upon inspection, mastering the skills in Spinellis’s Code Quality and Code Reading should be expected.

 

Ports, Components, and Connectors: the next great abstraction?


My friend George Fairbanks is trying to make the case that the abstractions provided by ports, connectors, and components are the next escalation in the war against complexity and scale.

George,

First, the good news…

Unfortunately (or maybe fortunately for you), many start Scrum without good engineering practices. I’m planning a talk with Noopur Davis at Agile2009 about this. Excerpt: “Simon, Struggling Agilist. Simon and his team have been using SCRUM and some agile practices for several iterations. They started out with great enthusiasm, but are now struggling as estimates are not improving, quality problems persist, and technical debt accumulates. Simon and his team want specific guidance on what they need to change.” We have data on a team that started with Scrum and added engineering practices of design, design review and code review. Their velocity went up and their defects/KLOC numbers improved significantly.

There is a tidal wave of folks who started Scrum without good engineering practices and now realize that they need them. You might try to ride that wave. Your best shot is with the Scrum folks who are now struggling. Lots of other folks are advocating more requirements and design work (iteration zero). You have another take on that.

Now the bad news…

Agilists will argue that feedback from actual running code is the best way to battle complexity and scale. Tighten the feedback loop with actual code. You can make more rapid progress by building something, even the wrong thing, and re-building it than you can by wasting time modeling it and thinking about it, because the assumptions you make when thinking about it always miss something important. You’ll call that something an insignificant detail. They’ll say it’s the details that end up getting you.

I think you will lose the argument if someone comes up with a story about how Google’s (substitute Amazon Web Services, Apache, or some other well-known “big” system) massively scalable (although arguably not complex) system was created iteratively without any serious architecture work. Then your approach to architecture is no longer design; it’s archeology, a concise way to document a system after a real engineer built it. I suspect that Eclipse was built with a more structured approach. Do you know if they followed an architecture approach?

Another way to lose the argument is via frameworks. I know architects whose job is mostly done once they decide which framework to use. Can you make the case that your approach could be used by them to make this decision? Be careful: most folks make the choice based upon things like how little boilerplate a framework makes you write and the syntax of its templating language (Exhibit A). I see tons of discussion and thought go into the difference between SQLAlchemy’s and Hibernate’s approach to ORM versus RoR’s or Django’s. Can you address these things with your approach?

Alternatively, can you make the case that your approach could help in designing these frameworks? If that’s your goal, you limit your audience terribly, and I’m still not convinced. For most design issues, I think most developers will prefer to use object-oriented terms, supplemented with OO design pattern terms when necessary. For instance, I recently worked on a team where we had to expand our system to support a remote storage service (Amazon S3), whereas the system previously supported only a local embedded store. The conversation we had went something like this:

  • Developer A: Let’s just put a remote proxy in front of the embedded storage interface with the same methods as the current embedded storage API. We can block on calls to S3.
  • Developer B: Yeah, and if that’s too slow, we can add pre-fetch to the remote proxy.
  • Developer C: I disagree. By doing that, any code calling this might not anticipate the delay. The more general model is the async one. Let’s bite the bullet now and implement a pub-sub “observer” eventing system and create our own internal async interface for storage. Let’s look at Dojo’s storage API for ideas on generality. Then when using the local storage, we’ll just immediately trigger the callback event.

We all agreed with Developer C, and that’s what we did. How would you address this conversation with ports, connectors, and components? Is that approach any more concise or revealing? If your response is that you are really targeting higher-level issues, then you’ve greatly limited your audience, because (other than deciding what stack/framework to use) this scenario is an example of the highest-level issues most developers deal with.
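Developer C’s proposal could be sketched roughly as follows. This is a minimal sketch, not the team’s actual code; the names (AsyncStorage, LocalStorage, on_done) are my own illustration:

```python
# Sketch of an async storage interface where callers pass a callback
# instead of blocking. A remote (e.g. S3-backed) backend would satisfy
# the same interface, invoking on_done when the network round-trip
# completes. Names are illustrative, not the team's actual API.

class AsyncStorage:
    """Async storage interface: every operation reports back via a callback."""

    def get(self, key, on_done):
        raise NotImplementedError

    def put(self, key, value, on_done):
        raise NotImplementedError


class LocalStorage(AsyncStorage):
    """Embedded store: operations finish at once, so the callback
    fires immediately, just as Developer C described."""

    def __init__(self):
        self._data = {}

    def get(self, key, on_done):
        on_done(self._data.get(key))

    def put(self, key, value, on_done):
        self._data[key] = value
        on_done(True)
```

The value of the design is that calling code is written against the callback interface either way, so nothing can quietly come to depend on storage calls returning instantly; the local backend is just the degenerate case where the callback fires before the call returns.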

What about massively parallel designs? Can you help there?

I’m not sure what the right answer is for you. I’ve never been much of a believer myself. I hate to say that without reading the other draft chapters. I’ll probably do that some time but I don’t have the time right now.

Maybe if you started with the patterns/styles and thought of the connectors, ports, and components as merely ways to express the styles, that would be more palatable to the average developer.

I don’t want to throw your own words back at you but do you remember writing this?

First, it had been assumed by many, including the architecture group,
that software architecture would be a readily teachable skill. This project
uncovered six competent, experienced developers who showed little aptitude
for creating architectural models.

Second, if this finding generalizes then it limits the way companies can use
software architects. The traditional approach to adopting a new technique is
simply to train people to use it, but this pilot shows that a software
architecture training program may be ineffective.

If you still believe what you wrote, then maybe you should be targeting architects, not general developers. More bad news for you: I think the use of the title “architect” is diminishing. Those folks still exist, but they have new titles now. Or do you think you now have a better way to bring them along and make more “architects”?

Sorry to be so glum. I kept going because I was hoping to come up with a good angle for you. Instead, I just ended up with a long depressing discussion.

Your friend,

Larry

 
Posted in Software craftsmanship

Visibility -> Retrospection -> Adaptation


This is too funny.

On the one hand, we have George Fairbanks (who comes from the architecture world) arguing that data models are too low-level to be considered architecture. On the other, we have John Owens (a leader in business process modeling) arguing for them in the context of his integrated modeling method.

My point is that it doesn’t matter whether you consider them part of the architecture or not. It doesn’t matter whether they are in YOUR definition of a good process. The rise of Agile is a response to the insistence upon adopting someone else’s approach, which resulted in misapplication and made developers cynical.

If you don’t want to be dismissed by the Agile world, you have to start selling in smaller chunks and let the developers decide which chunks to use. For me (a process, quality, and security guy), the easy sell is some form of peer review and the effective use of passively gathered data. For John, it might very well be data modeling. It’s a bridge between the business and development domains. I’ve read the excerpt of his ebook on this subject, and it is one of the clearest explanations of the topic I’ve ever read. I’m not sure what chunk is the easiest sell for George but his ability to extract abstractions (even from domains other than software) has impressed me greatly over the years. I just don’t know how he can best share that.

What I do know is that we should stop pushing integrated anything. We should stop defending our definitions. Rather, we need to target our efforts at the primary form of the Agile feedback loop (Visibility -> Retrospection -> Adaptation) and make our case in the retrospection phase, after they’ve experienced trouble from not doing enough design, requirements, or quality work. Don’t worry about the second form of the Agile feedback loop (Plan -> Build -> Inspect -> Refactor). It is more easily expanded, as we’ve seen with the introduction of Iteration Zero proposals.

 
Posted in Software craftsmanship

We have to get PEOPLE to build this stuff


I started to write a long comment in response to a post on my friend George Fairbanks’ blog, but when it got above three paragraphs, I decided to move it over here. The question was whether or not data modeling should be considered part of software architecture.

The short answer is, “Yes” but my reasons why may surprise you.

I’ve sat in many a team meeting where early design decisions are being made. Nothing helps engage the team better than settling on a data model. At levels above that, where you just say, “here’s an employee, here’s a customer,” the team will just nod and agree to whatever the architect/design lead/etc. is saying. It’s only when you start to ask, “How are we going to represent these objects in the system?” (and really only after you discuss which objects go in which tables) that folks actually start to think about, and commit to, building it. I’m not sure you have to go down to the data modeling level all the time, but in many situations you have to do it to get the team engaged.

Developers won’t require that you decide in the meeting whether it’s a varchar or a string(20), but they may not engage unless the conversation includes things like, “Are we going to use the same table to store customers and employees?” That’s when you start to ask questions like, “Is an employee ever going to also be a customer? If so, is it OK for them to have two separate entries in the system? How much data is in common between the two? How much is different?” This does get a little close to George’s fear about discussing N-th normal form, but there it is.
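To make that conversation concrete, here is a hedged sketch of the two schemas those questions are probing: a single shared table versus separate customer and employee tables. The table and column names are my own illustration, not from any actual meeting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Option 1: one shared table. A person who is both an employee and a
# customer has a single row; role-specific columns are nullable.
conn.execute("""
    CREATE TABLE party (
        id          INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        is_employee INTEGER NOT NULL DEFAULT 0,
        is_customer INTEGER NOT NULL DEFAULT 0,
        hire_date   TEXT,   -- employee-only field
        account_no  TEXT    -- customer-only field
    )
""")

# Option 2: separate tables. Simpler rows, but someone who is both
# gets two entries, and shared data (the name) is duplicated.
conn.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, hire_date TEXT)")
conn.execute(
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, account_no TEXT)")

# With option 1, "is this employee also a customer?" is a one-row check.
conn.execute(
    "INSERT INTO party (name, is_employee, is_customer) VALUES (?, 1, 1)",
    ("Pat",),
)
row = conn.execute(
    "SELECT name FROM party WHERE is_employee = 1 AND is_customer = 1"
).fetchone()
```

Neither option is “right” in general; the point is that the answers to questions like “can an employee also be a customer?” directly decide which schema the team commits to building, and that is exactly the level of detail that gets them engaged.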

But that wasn’t the original question. Nobody is saying that data modeling isn’t useful. We’re just asking whether or not data modeling should be considered part of software architecture.

Never forget: we have to get PEOPLE to build this stuff. The single most important driver for the success of a software project is the commitment of the people doing the work. If the coders think that the plan was pulled out of the air or that the design is wasteful, pipe dreaming, or otherwise deficient, they will passive-aggressively kill your project. If you’ve ever been part of an effort where the folks on keyboards think, “the PowerPoint architects who designed the system are idiots”, you understand what I mean. So the number one priority is to get the developers to say, “our design”. If they ever think, “his design”, you are much more likely to fail.

The architect’s job is to get the team to commit to a good design. It’s not to deliver a good design to the team.

And the only way to be certain that the team has committed to it is to get them to find it themselves. You can tell them until you are blue in the face that you have analyzed this nine ways to Sunday and you are convinced… blah, blah, blah. I repeat: the only way to convince them the design is good is for them to come up with it on their own.

As a great architect, you should have already walked through the issues yourself. Certainly, you’ll use the higher-level abstractions that you are writing about, George, to think things through. But you can’t stop there. If you throw your design over the wall at that point, who knows how it will turn out. And don’t tell me that you sent it out for comment. Nobody reads that stuff. You have to host a meeting or a series of meetings, preferably with a good facilitator/coach (like me 🙂 or Noopur Davis, who does this better than anyone I know). You go into the meeting with an architecture in mind, and probably even well documented, but don’t hand those documents out. Start with a blank whiteboard. Add a simple block diagram and proceed from there with the Socratic method. Someone will suggest something you’ve already rejected, and you’ll say, “Sounds reasonable, but what about foo?” They will then either 1) reject it like you did; 2) suggest a creative way around foo; or 3) convince you that your assumption or priority is wrong. Every time I’ve seen this done, the architecture is different coming out of the meeting than going in. Sometimes the improvements are minor and might have happened anyway during implementation. More often, the changes are major.

If the architecture is for a large system of systems, then you may only have sub-team leads/architects in the meeting, and they will probably not want to talk at the data model level. But when architecting for a 2-pizza team, you had better be ready to go down to the data model level.

You should, of course, follow up the meeting by delivering back to the team the architecture that they came up with in the meeting. If it’s close to what you had already documented before the meeting, so much the better. To do this effectively, you may need several meetings at different levels. Don’t worry about the cost/time. If you delivered a finished architecture, you’d have to spend time training them on it anyway. This is just a fun (and much more effective) way of accomplishing the goal of them understanding what they are going to build. Your biggest problem will be resisting the temptation to share your wonderful architecture with the world and get credit for it. Your best approach is to make sure the team gets all the credit.

If the book you are writing is for software architecture researchers, then you can ignore everything I’ve written. If on the other hand, you are writing something that can be used by practicing architects, then never forget, we have to get PEOPLE to build this stuff.

 
Posted in Software craftsmanship

Adopting Agile doesn’t mean forgetting what you’ve learned


Agile is particularly attractive to two very different groups: 1) those whose organizations don’t already have evolved practices, and 2) those whose processes have grown to become burdensome. In both cases, there is a tendency to make your first Agile sprints have a bare minimum of process. After all, that’s what Agile says to do, right?

Not right! It’s a common misconception, but minimal weight is not the controlling idea of Agile adoption.

The controlling idea of Agile is learning (visibility and inspection/retrospection) and applying that learning (adaptation). Agile practices are predicated on the idea that trying to apply someone else’s process template to your situation is rarely ideal and often counterproductive. Rather, the Agile approach is principle-based, allowing you to adapt as you learn and as your situation changes.

So what does this mean? Well, if you currently have over-evolved processes, don’t throw the baby out with the bathwater. If your organization has experienced problems in the past and the situation hasn’t changed in a way that invalidates that “learning”, don’t necessarily throw out all the process elements that resulted from those experiences. Similarly, if you are just getting started, don’t be afraid to learn from others who have gone before. In fact, using what you’ve learned is the foundation of the Agile approach.

Let me give you a concrete example. It’s no coincidence that every strong team I’ve ever seen has mandated some means to get another set of eyeballs on the code. In some organizations, this means “formal inspections”. You may believe that formal inspections cost more than they’re worth for your situation, and I won’t disagree with you, but even XP advocates pair programming. Open source development has the “many eyeballs” effect built into its licensing and commit practices. For closed source practitioners, what I’m starting to see more of is some form of asynchronous peer review using tools like Google Code Reviews.

Why do all these strong practitioners utilize some form of peer review? Because we’ve shown time and again, in various and sundry quantitative research as well as in qualitative studies, that it’s the single most efficient thing we can do to remove defects from code. In general, it’s more efficient than testing at removing all kinds of defects, PLUS it allows you to systematically address things that testing cannot: maintainability/evolvability (code smell issues), as well as things that are nearly impossible to find with testing, including certain kinds of security and concurrency issues. Why, then, do I see Agile teams going through their first few sprints without any form of peer review?

Similarly, there is a tendency to swing too far on the issue of design. Sure, I’m a firm believer in YAGNI and that the best way to improve the design is to evolve working code. I’ve suffered the paralysis of analysis. I’ve even been the guilty party. However, refactoring is not a substitute for design. It’s very difficult to achieve certain non-functional requirements (scalability, security, etc.) without some architecture work up front. Similarly, depending upon your situation, appropriate requirements elicitation and documentation practices can save much more than they cost.

The good news is that Agile supports doing these things in its “definition of done”. Furthermore, if you fail to do them, it provides the feedback loops that will highlight the need for them later. However, do yourself a favor: when you are trying to settle on your FIRST “definition of done”, include at least some form of peer review for your code.

 
Posted in Software craftsmanship

First post


Now that it’s the new year, I’m going to try this blogging thing again. Topics will include:

  • Software engineering:
    • Process
    • Agile methods
    • Measurement
    • Static and dynamic analysis
    • Test-first design and automated unit testing
    • Tools
  • The software development industry
  • Flex and Actionscript
  • Anything else that seems interesting
 
Posted in Administrivia