REST for a read-only API. I finally understand the benefit of HATEOAS.


I’m neck deep in RESTful APIs. I’m in the late stages of designing the Analytics 2.0 APIs in my job as Product Owner for Analytics at Rally Software. I’m in the early stages of designing an API for the Lean Software and Systems Consortium’s (LSSC) Kanban Benchmarking and Research Program… and just today, I met with Steve Speicher and others from the Open Services for Lifecycle Collaboration (OSLC) to talk about joining one of the working committees designing RESTful services for data interchange.

I have been struggling with one aspect of “pure” REST APIs as defined by the four interface constraints in Roy Fielding’s dissertation. In particular, it seems that almost none of the popular “RESTful” APIs on the internet implement the Hypermedia as the Engine of Application State (HATEOAS) constraint. If the constraint is so critical, as Dr. Fielding insists, then why is it so often ignored? The idea is that the operations you can perform on a resource should be made visible to clients not via documentation but via links in the responses to a few published endpoints. A good-citizen consumer of a truly RESTful API will not know the link to transfer bank funds, or approve a leave request, or (any action) on (any resource) ahead of time. Rather, it will “discover” the links for these actions in the initial response to a query against the bank account, or leave request, or (any resource).
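For example (a purely hypothetical response; the link names and shape are my illustration, not any particular banking API), a GET on an account resource would return the account’s state together with links for whatever operations the client is currently allowed to perform:

{
    "account": "12345",
    "balance": 100.00,
    "links": {
        "self":     { "href": "/accounts/12345" },
        "transfer": { "href": "/accounts/12345/transfers", "method": "POST" },
        "close":    { "href": "/accounts/12345", "method": "DELETE" }
    }
}

The client never constructs the transfer URL itself; it follows the link labeled “transfer” if and when the server offers it.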

My main problem with valuing this constraint as highly as Dr. Fielding seems to is that it doesn’t appear to greatly reduce coupling as intended. The client still needs to KNOW that it wants to transfer, approve, or (some action). Knowing the link to do so beforehand seems like only a fractional increase in coupling. That said, if it doesn’t cost too much, it would be nice to follow this recommendation, because it would fractionally reduce coupling.

That brings me to my second conundrum. The Analytics 2.0 APIs are all read-only. What “application state” (as opposed to resource state) is there in such a situation? Then today, it finally came to me… the only application state (that I can think of) in a read-only API is around paging. The first page of a multi-page response should include the link to the second page, and so on. This is particularly useful for Rally’s Analytics 2.0 Lookback API because we were already recommending that requests for subsequent pages include an additional clause to ensure that the data returned is for the same moment in time as the first page. We had little confidence that this recommendation would be followed. Now, I’m specifying that we add a NextPage link to each response. We may also remove the “start” parameter from the API, since it enables a tighter coupling than the use of the NextPage link.
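To sketch what this might look like (hypothetical field names and URL; NextPage is the only part I am actually committing to above), each page’s response carries the link that pins the paging state, including the same-moment-in-time clause:

{
    "PageSize": 2,
    "TotalResultCount": 5,
    "Results": [ {"ObjectID": 101}, {"ObjectID": 102} ],
    "NextPage": "/lookback/artifacts?pagesize=2&start=2&asOf=2011-06-20T00:00:00Z"
}

A well-behaved client then just follows the link until it disappears:

// Hypothetical client loop; get() and process() stand in for whatever HTTP and handling code you use.
var page = get("/lookback/artifacts?pagesize=2");
process(page.Results);
while (page.NextPage) {
    page = get(page.NextPage);  // the server-supplied link carries the start position and the as-of moment
    process(page.Results);
}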

 

GitHub is a social media platform


My first few days on GitHub reminded me of my first few days on Facebook…

I’ve been using git (not well) for a while at Rally, but in the last several weeks I started using GitHub, and I can’t say enough about how pleasant the experience has been.

I just wanted to tweak the CoffeeScript.mode for Panic’s Coda editor, which was provided not by Panic but hosted on GitHub by some other code-head like me. I downloaded it, edited it to add the functionality that I wanted, and was on my merry way to using it when I thought, “maybe I should share my improvements back with the community that was kind enough to provide me with the mode in the first place.”

So, I sent a message on GitHub and included my code changes in the message. The repository owner sent me back a message encouraging me and suggesting that I submit a “pull request.” What the heck is a “pull request”? I thought, so I started investigating. I learned that to do a pull request, I couldn’t just download the project’s contents; I had to actually fork it. I’d seen the “Fork me on GitHub” banners on every GitHub page but had never thought much about them before. Just a few clicks and I’m signed up with an account, and a few minutes later, I’ve forked the project, committed my edits, and submitted my pull request.

The repository owner wanted me to upgrade my code to support more use cases so another round of edits ensued and then he accepted the request. Since then, I’ve created a couple of other repositories of my own and started managing my code on GitHub. I’ve downloaded the Mac GitHub client and I’m fully sold.

Then today, I was describing the entire experience to someone and it occurred to me that the appeal of GitHub is not technical; it’s social. It has the same sort of pull as social media. My first few days on GitHub reminded me of my first few days on Facebook… I created a Facebook account and started using it because someone tagged me in their “20 interesting things about me” post. Facebook’s value is directly proportional to how many of my friends and family use it. Similarly, GitHub’s value to me is directly proportional to how many of the projects that I use are on it… and how much of the potential audience for Lumenize (my as yet unreleased PhD tools that I hope to commercialize… or at least popularize) is on there. I’m now trying to think of ways to redirect Lumenize to be more appealing to the GitHub audience. If folks can fork Lumenize and help it grow, all the better.

 

My first Panic Coda plugin


I’ve been doing more coding in CoffeeScript, and as any good craftsman knows, you must sharpen the saw from time to time. This time, I added some convenience commands for line manipulation by writing my very own Panic Coda plugin. It allows you to delete, duplicate, and move lines with a keystroke. There is also a convenience shortcut for wrapping a variable in console.log.

It was actually pretty easy using Panic’s Coda Plug-in Creator. I wrote my text manipulation code in JavaScript as shell scripts that run under node.js and simply dragged them into the Coda Plug-in Creator.
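For the curious, here is a minimal sketch of the kind of script involved. This is my illustration rather than the actual plugin source, and it assumes the Plug-in Creator hands the selected text to the script on stdin and takes the replacement from stdout:

#!/usr/bin/env node
// Duplicate each selected line: read the selection from stdin and
// write every line back twice on stdout.
var chunks = [];
process.stdin.resume();
process.stdin.setEncoding("utf8");
process.stdin.on("data", function (chunk) { chunks.push(chunk); });
process.stdin.on("end", function () {
    var text = chunks.join("");
    var endsWithNewline = text.slice(-1) === "\n";
    var lines = (endsWithNewline ? text.slice(0, -1) : text).split("\n");
    var doubled = [];
    lines.forEach(function (line) {
        doubled.push(line, line);  // emit each line twice
    });
    process.stdout.write(doubled.join("\n") + (endsWithNewline ? "\n" : ""));
});

The delete, move, and console.log-wrapping commands are just variations on the same read-transform-write pattern.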

You can find it on GitHub here.

 

Coda vs WebStorm vs TextMate for CoffeeScript


I recently started rewriting all of my PhD tools in CoffeeScript from ActionScript. I’m loving the python-like syntax and I’m very happy with the language as well as the ecosystem around CoffeeScript and node.js. Tools are an important part of the experience, so I’ve played around with several editors/IDEs for writing CoffeeScript. Here is a quick comparison.

Coda

This is probably my favorite environment right now. I’m using a custom CoffeeScript.mode that I forked from Sean Durham to add navigator support.

Pros:

  • Nice clean interface written for the Mac
  • Syntax highlighting that works in both .coffee files and the Cakefile
  • Instant startup/shutdown and low resource usage. I can code on battery for 4+ hours on my MacBook Pro.
  • Preview mode for HTML.
  • Hackable. I already upgraded the 3rd party mode file. I have thoughts on plugins for running unit tests and reporting code coverage.

Cons:

  • Only Subversion integration. Personally, I never use the source control integration built into editors and IDEs, so this is not a problem for me. But EVERYONE uses git for CoffeeScript. I just started using GitHub for Mac. I’ll see how that goes, but I can always use the command line.
  • No code folding. Again, not a feature that I miss too much because I can put each class into a different file. The navigator is also a decent substitute.

WebStorm

Pros:

  • Refactoring.
  • NodeUnit support built in. Nice!!!
  • Keystrokes that make sense to me. Ctrl-Y means yank this line. It’s the only one of the three that does that.

Cons:

  • Still beta quality
    • The syntax highlighter doesn’t like multi-line comments (“###”…“###”) to contain any “#” inside. Until they fix this, I’m not going to use it again.
    • My .idea folder got corrupted several times and each time it lost my configuration foo for running nodeunit tests.
  • Heavyweight. It’s essentially the IntelliJ IDE with Java editing (and a bunch of other functionality) removed.

TextMate

I started writing CoffeeScript in TextMate, and I still pull up .coffee files there for a quick look, so this is my backup.

Pros:

  • Syntax highlighting that works. I think TextMate’s is the basis for the Coda one that I use.
  • Folder as a project. I like this concept. You can sorta do this in Coda, but it’s nice to open one project/folder with tabs for all of the files that are part of that project while still being able to open separate windows for other files that I randomly need to edit. In Coda, if you right-click…Open in Coda, it’ll create another tab in the “workspace” you are working in even if the file has nothing to do with that project.

Cons:

  • None really, other than missing some of the pros of Coda.
 

The defect 4-step


What to do when you find a defect

The defect 4-step is not a new dance craze. It’s a way to accomplish organizational learning from the opportunity provided by a defect. So, here is what you should do when you find a defect:

  1. Fix it.
  2. Find and fix others “like it.” “Like it” could be along several dimensions and at multiple levels of abstraction (see below). A code query tool can really help here, but sometimes a manual analysis can be effective.
  3. Prevent future occurrences “like it”…
    • from leaving the developer’s desk (preferable). This is often satisfied with some form of static analysis that runs during the build process or at the developer’s desk (think upgrading the compiler). Frequently, the best you can do is share the bad news or conduct training so developers know what patterns lead to defects “like this.” Sometimes, you can change your technology or programming paradigm to make the defect impossible; switching from C to Java to avoid certain memory problems, for example.
    • from getting into production (fallback). Tests that run overnight or manual tests that become part of the test plan are common approaches. Adding the meta-pattern(s) to a code review checklist can also help, especially if it increases awareness and prevents defects “like it” from ever being written. Long-running overnight static analysis is sometimes in this category, but it is preferable to run static analysis in the build process or on the desktop.

Why is this called a 4-step?

When I originally created this post, I had the two prevention “steps” broken out. In reality, you generally only do one or the other of the “prevent” steps. So, I guess I could rename this the defect 3-step.

What does “like it” mean?

The definition of “like it” for any given defect might be along several different dimensions and at multiple levels of abstraction. An example is the best way to illustrate this. We recently had a “stop-the-release” P1 defect found on the Friday before a Saturday release. An entire part of the application didn’t work in certain web browsers. There was an extra trailing comma in the JavaScript that most browsers ignore but some reject as a parsing error. Normal testing includes those browsers but didn’t catch it because testing was done on a version before the defect was injected. The definition of “like it” in this circumstance runs along two different dimensions. One dimension of “like it” might address the need to make sure testing occurs on the latest version. A second dimension of “like it” might target the trailing comma pattern. The simple answer in this case was to implement JSLint, a static analysis tool for JavaScript. This is a great solution because it operates at a higher level of abstraction than merely fixing a trailing comma defect: JSLint can be used to guard against ANY JavaScript parse error. It’s often more work to address the higher level of abstraction, but it’s also usually more valuable. This trade-off decision should be made on a case-by-case basis, but higher levels of abstraction should generally be favored. Testing is rarely able to climb the abstraction ladder, so static analysis is favored.
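To make the trailing-comma pattern concrete, here is a reconstruction of the kind of code involved (hypothetical property names, not the actual defect). Most modern engines accept the extra comma, older JScript engines reject it as a parse error, and JSLint flags it either way:

// Hypothetical example of the trailing-comma defect pattern.
var browserSupport = {
    chrome: true,
    firefox: true,
    ie6: false,   // <-- the comma after this last property is the defect
};

console.log(browserSupport.ie6);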

 

How can an Agile team influence the quality of upstream components


Q: I would be interested in anything you might have about rolling out Agile on a team that depends on components created by non-Agile teams. In particular, how that is affected by different approaches to Quality between teams in the same company (and no, we don’t have a common standard, only on paper). I was reading about the GM/Toyota experiment (the first TPS plant in the US) and how GM had issues trying to roll it out to other plants, one of the biggest being the fact that, unlike in Japan, they didn’t have the power to push their process down to their producers. They quickly found out that they could not build a Quality car without Quality components, and I am afraid we will find the same here.

A: The reason quality is generally higher with the output of Agile processes is the nature of the feedback loops built into Agile. We get feedback on the product/design much more rapidly. Practices like pair programming or lightweight peer review, automated testing, short iterations, automated build/continuous integration, and close collaboration with the customer/proxy all tend to give us more feedback on the product/design… which tends to lead to higher quality. My recommendation would be to drive as many of those feedback loops upstream as possible. You don’t control their process, but you may be able to influence it at the boundary between them and you.

Close collaboration. The lowest hanging fruit is probably close collaboration with the customer. In this instance, the Agile team is the customer and the non-Agile teams are the vendors. I’m thinking of setting up regular demo/review meetings (probably on the cadence of the Agile team’s short iterations). You may also be able to visit (virtually or physically) on a near-daily basis.

Automated testing. You might also try setting up automated testing at the interface level for the components delivered by the upstream teams. You’ll have to avoid the trap of using this as “contract negotiation over collaboration,” but that is all in how you handle it. The key here is that you want them to think of the tests as a tool to help them do their job rather than a way to enforce something. This means that they will need the ability to run the tests before delivering to you. It would be better still if they owned the tests and you reviewed them. No matter who owns them, the tests become the specification for the API, which is a good Agile smell.
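As a sketch of what I mean (a hypothetical upstream component and made-up function names, written in JavaScript only because that is handy here), an interface-level test pins down the agreement without saying anything about how the upstream team implements it:

// Hypothetical interface-level test for an upstream "pricing" component.
// The module path and the quoteFor() signature are illustrations, not a real library.
var assert = require("assert");
var pricing = require("./pricing");  // delivered by the upstream team

// The agreed interface: quoteFor(sku, quantity) returns { total: Number, currency: String }.
var quote = pricing.quoteFor("WIDGET-42", 10);
assert.equal(typeof quote.total, "number");
assert.equal(quote.currency, "USD");
assert.ok(quote.total > 0, "a positive quantity should produce a positive total");

console.log("pricing interface tests passed");

If the upstream team can run this before every delivery, the test reads as a specification of the API rather than as a gate you impose on them.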

Peer review. At this point, you are collaborating on and reviewing the test code. This might then lead to a situation where you are able to do peer review of their production code. I’d prefer a peer review approach that helps them improve their code (and learn how to write better code in the future) over one that just allows you to fix their code after the fact.

Automated build. If you were to give them access to your build process, they would also be able to test the compile-time agreement between their code and yours. This comes with two immediate benefits: (1) it serves as an additional automated test of the interface, and (2) it (combined with the other automated tests) gives them more confidence to refactor their code and make improvements. The assumption here is that most teams know their code has warts, but they are afraid to modify and improve it because they fear breaking code that depends upon it. Running your build script lowers that fear.

There is a third (and potentially more powerful) benefit to a shared build process. It provides you with a place to plug in other quality-improving tests and analysis. The automated tests that I proposed above run against their upstream code. With a shared automated build, you could include tests that run against your downstream (but higher level) code. This means that they could see whether changes they make break your higher level functionality. You’d have to use a stable version of your source so they could be sure the problem was theirs, but a distributed source control tool or careful branch management could overcome that obstacle. The build is also a common place to run automated bug finders like FindBugs or even custom analysis like a tool to highlight any changes in calling signatures.

Please let me know if any of this helped. Maybe I can refactor and improve my answer (upstream product) based upon your feedback (from downstream). 😉

 

Top 10 questions when using Agile on hardware projects


Recently, I have had the chance to work closely with a number of projects that were not pure software. They all had some software or firmware component, but they also included an electronics or even mechanical design aspect. Below are the top ten questions I recorded when working with these teams and how the teams effectively answered them.

1.    Are Agile practices and processes effective when conducting non-software projects (firmware, electronic, mechanical, etc.)?

Absolutely. Some of the XP engineering practices are not directly applicable or need to be modified based upon industry or your particular situation. However, surprisingly minor adjustments are all that is necessary for the Scrum process framework to be highly effective… even compared to processes that evolved specifically for hardware.

2.    What adjustments need to be made to make the Scrum process framework work well for these projects?

Surprisingly few. The primary adjustments center around expectations in two general areas: (1) minimal marketable feature/emergent design/thin vertical slices, and (2) user stories.

3.    What adjustments need to be made to our expectations around minimal marketable feature, emergent design, and thin vertical slices?

Focus on feedback. True, the break-even point between thinking versus building encompasses more thinking for non-software projects. Even so, push to build something sooner, and even when you are working on early design and infrastructure, get feedback on some “product” each iteration… and get that feedback from a source as close to the user as possible. In software, it’s almost always less expensive and more effective to build something rapidly, get feedback on it from an actual user, and change it, than it is to think longer about how to build it expecting to avoid later rework. This is the primary reason why Agile practices encourage you to break up the work into increments of minimal marketable features (MMFs). Agile software projects try to build as little infrastructure as necessary to implement the current MMF and let the design emerge from those increments rather than nail down all the requirements and design up front. You are encouraged to build the system in thin vertical slices where all levels of the product experience change with each increment.

When confronted with this idea, even software-only teams push back. Software teams want an architecturally sound base upon which to build their features. However, I have found that if the team thinks on it for a bit, they can often find a way to build usable features and start to get feedback much sooner than they originally imagined. Those projects are able to deliver a marketable feature even in the first few iterations, and they move rapidly to a mode where very little of each iteration is spent on infrastructure. The primary difference for hardware projects is that you have a different cost structure for manufacturing, so the break-even point for thinking versus building encompasses a bit more thinking. This means that it may take longer to get into the mode of designing several MMFs each iteration.

Remember, the primary benefit of this approach is to get the most valuable possible feedback as often as possible. So even when it is hard to use thin vertical slices to accomplish that, you should still seek out opportunities to enhance the richness and frequency of the feedback you receive by producing something to get feedback upon during EVERY iteration. The next step down from demonstrating an MMF is producing a prototype but even that can be hard to do every iteration of a hardware project. So when the thing you produce is only a document, a design, or an experiment, make an effort to maximize the value by choosing to get your feedback from a source as close to the customer as possible.

4.    What adjustments to our expectations need to be made around user stories?

Understand that the “user” in user stories only hints at one of four good reasons to manage requirements with user stories… and it’s not the most important reason, which is “conversations.” Maybe they should be called “conversational stories.” The big hang-up hardware teams have with managing requirements as user stories centers on the word “user.” That’s unfortunate because I don’t think that is the most important benefit that Agile teams (even software teams) get from the practices surrounding user stories.

The benefits come from four aspects: (1) WHO, (2) WHAT, (3) WHY, and (4) conversations. This last one, conversations, is the most valuable, so I’ll talk about it first.

Even when drafts of requirements documents are shared with the development team for early feedback, the development team doesn’t internalize them until they start working with them. By having the team size, and often write, the user stories, you force them to start this internalization much sooner. The early conversation between the development team and stakeholders around the requirements allows implementation costs to be factored into requirements tradeoff decisions. The ongoing conversation throughout the project lifecycle provides a high-fidelity communication channel and continuous vision alignment.

Now, let’s address the other three beneficial aspects one at a time. The traditional user story format is, “As a WHO, I want to WHAT, so that I can WHY.” All three of these, WHO, WHAT, and WHY, provide benefits. The tension of trying to always make the WHO be an end user is not unique to hardware. Even in the most Agile of software-only teams, there are times when the end user is only indirectly the beneficiary of a particular backlog item. For instance, the most direct beneficiary of a research activity, a mockup, or a prototype (collectively referred to as a “spike” in the Agile world) is the development team and not the end user. In those cases, specifying the WHO does not encourage you to think about the product from the end user’s perspective. If every user story were this way, then we wouldn’t call them “user stories.” The “user” is in the phrase to remind us to get the user perspective involved as often and as soon as possible, but just like MMFs, this practice is harder to do as often, especially early in a hardware project.

I will not dwell on the WHAT because this element is present in all approaches to requirements management, except to mention that it is important for the WHAT not to drift over into the HOW, so the development team has flexibility in how they meet the identified need. Note: The Rally tool’s entity for “user stories” is really more of a generic backlog item. There is nothing in the tool that enforces, or even makes awkward, the use of this entity in a traditional work breakdown mode.

On the other hand, the WHY is somewhat unique to the practice of user stories and can be very valuable. Understanding why someone wants something empowers the development team to be creative about satisfying a need… sometimes even by explicitly not satisfying the WHAT of the user story. If a team is told that it must implement a data layer with sub-millisecond response, they may blindly go about accomplishing that… at great cost. My first response to a user story written like that is that it crossed the border from WHAT into the realm of HOW. Nevertheless, if you give a team a user story like that but also tell them that the reason for this “requirement” is the responsiveness of the user interface, they may take steps to provide low-latency response to user input even when the data does not make it all the way to the data layer for a second or more… saving cost AND improving the product.

5.    What about prioritizing user stories strictly by value to the end user?

Prioritize by overall business value, not end user value. Even in software-only projects, user stories should be prioritized by overall value to the business, not just the end user. Often that is the same thing, and certainly the end user’s needs are the biggest factor in prioritizing any user story with an end user as the WHO. However, a feature that is desirable to the end user but not saleable might not be valuable to your business. Similarly, valuable features that are too costly (either to produce or as a tradeoff against other desirable features) might not be good decisions.

Apple has been criticized for excluding multi-tasking from the iPhone. They realized that multi-tasking negatively impacted battery life and user interface responsiveness and explicitly left it out of the product. They made a business decision that they could still sell the iPhone even without this high profile feature.

However, before they made this decision, they needed some information. How much did background tasks hurt battery life and responsiveness? How amenable would potential customers be to purchasing a product without them? Apple could easily justify investments in research to determine the extent of this impact on both the usability and marketability of the product. This information is of no direct benefit to the end user, but the work necessary to gather it was of immense benefit to the business.

Development projects of all kinds benefit from good design and marketing decisions. Backlog items focused solely on these outcomes are of value to the business and should get appropriate prioritization. Similar to the above discussion on MMFs and the WHO in user stories, it may just be that non-software projects experience more of these tradeoff-analysis backlog items early in the project and keep seeing them longer into the development cycle.

6.    Should user stories be our only tool for requirements management?

Not usually for hardware/mixed projects. There are many reasons why you might want some other mechanism to complement your user story practice. For instance, the concept of abuse cases is often part of a larger security review. Safety reviews often have a parallel mechanism. Protocols and other interfaces are best defined by other means. Hardware typically has requirements associated with the operating environment. And so on.

7.    But user stories are not even an official requirement of Scrum so why shouldn’t we just use our traditional requirements practices?

Consider alternatives but remember all four valuable aspects of user stories. It is true that the official definition of Scrum simply calls for there to exist a backlog of work. It only mentions user stories in a sidebar and even then, the sidebar also mentions other approaches like use cases. The essence of Agile is (1) self-organize, (2) do something, and (3) inspect and adapt. The definition of Scrum is just one step more detailed than this essential definition of Agile and is intentionally minimalistic so any iterative agile approach would fit.

User stories have emerged as a common and valuable practice because of the reasons mentioned above but it is not strictly required. Your team should feel empowered to consider alternatives.

However, if your team chooses another approach to requirements management, you should not deviate from the Agile practice of allowing the development team to do the estimating. Also, I encourage you to think about why the practices surrounding user stories are valuable (other than the emphasis on the user), as described above, and enable as many of those reasons as possible, starting with the conversation aspect.

8.    What about when we need to send a board (or prototype part) out for manufacturing and it will not be done within an iteration?

Push for rapid prototyping but adapt to your capability. This is a very specific question that comes up often when folks are told that they need to produce something upon which to get feedback during each iteration. What if the time it takes to get prototype parts back from manufacturing is longer than an iteration?

My first response is to ask yourself, “Is there ANYTHING that we can do so that we CAN produce a prototype in an iteration?” The world of prototyping has attempted to keep up with the ever-increasing pace of change. There now exist component suppliers that allow you to upload a part design in the morning so that they can produce and ship it overnight. Those services are expensive, but so is the time of your team. Failing some solution like that, ask, “Is there a different way to produce something to get the answers and feedback we need for decisions within a single iteration?”

If you still cannot think of a way to produce it within one sprint, you can handle it by breaking the backlog item down. The first portion includes whatever work is necessary to place the order for the part. The later portion includes any evaluation activities. Collectively, they have value to the business.

9.    What about dependencies and critical path analysis?

Supplement when needed but ask if it is really needed. Dependencies are considered by the product owner and the development team when choosing stories for a particular iteration. However, the consideration of dependencies is informal and not explicit as in a Gantt chart (think Microsoft Project). I have worked with teams where explicitly and continuously conducting this sort of critical path analysis is… well… critical to their success; but I have worked with many more teams where the use of a Gantt chart is merely the default and what they are used to. For those projects, the most important thing is for each team member to know what they should be working on right now and have a sense of urgency about getting it done-done! The mechanisms in the Scrum framework are highly effective at accomplishing this. If you do need to conduct critical path analysis at some point, I suggest that you do it only as needed.

Note: The Rally tool includes functionality for you to record dependencies so that they are readily available when you are making decisions about what to work on next.

10.    Maybe we don’t need continuous critical path analysis, but we still have specialists that are not permanently dedicated to the team. How do we deal with that?

Favor cross-training and using generalist team members, but fall back to explicit allocation and coordination when necessary. The Agile approach is to fully dedicate as many of these specialists to the team as possible. Even when you know it’s not a full-time job for a particular specialty, it still might be better to supplement those specialists’ workloads with team tasks that are outside of their specialty. We find that becoming Agile tends to encourage more generalists (or at least multi-specialists) to emerge. This cross-training is generally positive on its own merit, but doubly so when you factor in the cost of task switching and the productivity benefits you get once a team learns how best to work together (think Forming-Storming-Norming-Performing).

Even so, there may still be some centralized functions that your teams will need to consult. It is often possible to handle these situations by leveraging the team’s approach to dealing with outside suppliers.

When you move the solid line from a functional manager to a team lead and make the functional manager the dotted line, it will bring up many issues like personnel reviews and career counseling. The coaches at Rally have experience with companies making these transitions and can help you with those tough issues but you will have to work through them. “Agile is easy. Implementing Agile is a bit more difficult.”

 

A multiple file loader for Flex/Flash/ActionScript 3 (AS3)


The URLMultiLoader class in this library will load multiple files and optionally “process” them before calling the method specified for Event.COMPLETE. Since file loading in the Flash/Flex/AS3 world is completely asynchronous, when you need to load more than one file, the hackish solution is to make the COMPLETE handler for the first one initiate the load for the second, etc., until all the files are loaded. URLMultiLoader allows you to set up one COMPLETE handler which will not be called until all the files you specify are loaded (and optionally “processed”).

When I first had need for this, I said to myself that someone must have done this before. It seems like a fairly common need. However, when I went looking, I couldn’t find something that fit the bill, so I decided to write my own. It was actually a very good way to get familiar with the event system. Also, while I was at it, I figured I’d allow the injection of a processor for each file and make sure that got processed before proceeding. If anyone knows of another tool like this please post a link to it in the comments. Actually, it wouldn’t surprise me if this functionality is built into the Flex framework somewhere and I just missed it.

Update: The functionality mustn’t be in Flex because I have now found several other similar controls:

Mine is relatively simple compared to some of these. BulkLoader seems particularly featureful. It has bandwidth stats and progress indicators. For my loading needs, the sizes were small enough that I wasn’t worried about progress or bandwidth, but I may update mine to include these features in the future.

One feature that mine has that many do not is the optional ability to inject an IDataProcessor that will pre-process your data before returning it to you.

DataProcessorXMLStringToArray is provided as an example IDataProcessor that can optionally be passed in when adding a new URLRequest to the queue. If provided, an IDataProcessor will convert the raw file string (or binary, or URL variables) into some other form before returning. Complete documentation for DataProcessorXMLStringToArray is provided in the ASDoc header for the class, but it is offered here primarily as an example. You can easily create your own and inject it when setting up the URLMultiLoader. You just need to follow the IDataProcessor interface, which has one method with the following signature:

function processData(data:*):*

Remember, the processor is totally optional. If omitted, URLMultiLoader will simply copy the file contents into its output data field. The type of the data in that case will depend upon the URLLoaderDataFormat: String for TEXT (default), ByteArray for BINARY, and URLVariables for VARIABLES.

Let’s see it in action.

package
{
	import com.maccherone.urlmultiloader.*;
	import com.maccherone.json.JSON;  // Only used for pretty output
 
	import flash.display.Bitmap;
	import flash.display.Loader;
	import flash.display.Sprite;
	import flash.events.Event;
	import flash.events.IOErrorEvent;
	import flash.net.URLLoaderDataFormat;
	import flash.net.URLRequest;
 
	public class URLMultiLoaderTest extends Sprite
	{
		private var urlMultiLoader:URLMultiLoader = new URLMultiLoader()
		private var baseURL:String = "data/"
		private var urlRequest1:URLRequest = new URLRequest(baseURL + "file.xml")
		private var urlRequest2:URLRequest = new URLRequest(baseURL + "file.xml")  // Same file but we'll get it in a different format
		private var urlRequest3:URLRequest = new URLRequest(baseURL + "smile.gif")
 
		public function URLMultiLoaderTest() {
 
			var dataProcessor:IDataProcessor = new DataProcessorXMLStringToArray()  // Example provided with URLMultiLoader. You can create your own.
 
			urlMultiLoader.addURLRequest("Request1", urlRequest1, dataProcessor)
			urlMultiLoader.addURLRequest("Request2", urlRequest2)  // If no IDataProcessor is provided, then file's contents is returned as String, ByteArray, or
			                                           // URLVariables depending upon the URLLoaderDataFormat TEXT, BINARY, or VARIABLES respectively
			urlMultiLoader.addURLRequest("Request3", urlRequest3, null, URLLoaderDataFormat.BINARY)  // Loads smile.gif as a ByteArray
 
			urlMultiLoader.addEventListener(Event.COMPLETE, filesLoaded)
			urlMultiLoader.addEventListener(IOErrorEvent.IO_ERROR, onError)
			urlMultiLoader.load()
		}
 
		private function filesLoaded(event:Event):void {
			var data:Object = (event.target as URLMultiLoader).data
			trace("Array of Objects:\n" + JSON.encode(data["Request1"], true) + "\n") // Uses JSON.encode for pretty output
			trace("String of file contents:\n" + data["Request2"] + "\n")
			var loader:Loader = new Loader();
			loader.loadBytes(data["Request3"]);
			this.addChild(loader)  // Displays smile.gif in Flash player
		}
 
		private function onError(event:Event):void {
			trace(event)
		}
	}
}

Assuming you put file.xml and smile.gif in a data/ folder below the bin-debug directory and you have the correct security settings, the above code would result in the following output:

Array of Objects:
[
    {"id": 101, "name": "/db/node/visitor"},
    {"id": 102, "name": "/db/node/observer"},
    {"id": 103, "name": "/ui/button"}
]
 
String of file contents:
<?xml version="1.0" encoding="UTF-8"?>
<root>
  <file>
    <id>101</id>
    <name>/db/node/visitor</name>
  </file>
  <file>
    <id>102</id>
    <name>/db/node/observer</name>
  </file>
  <file>
    <id>103</id>
    <name>/ui/button</name>
  </file>
</root>

Plus it will display smile.gif in the Flash player.

You can download it from here.

Update: I altered the URLMultiLoader to use a string as the key for retrieving the data after the loading is complete. A previous version used the URLRequest as the key for a Dictionary object. This version does not depend upon Dictionary.

 

ActionScript 3 (AS3) JSON encoder with “pretty” output by adding linefeeds and spaces


I’m sure many ActionScript 3 or Flex developers have used the as3corelib for one reason or another. It’s a wonderful little library with lots of useful functionality. I’ve frequently used its JSON encoding and decoding functionality. It works great; however, it doesn’t add spaces or linefeeds to make the resulting JSON string more readable. That’s fine if you are just serializing something to send over the wire, but not if you want to render something that is easily read by a human. In my case, I want a user to actually be able to edit the resulting JSON. To make this workable, I needed a JSON encoder that would add appropriate linefeeds and spaces. Rather than write my own, I simply adapted the one in as3corelib.

One side benefit of having done this is that you can now get JSON serialization without getting the entire as3corelib library.

The default interface is identical to the one in as3corelib. If you just call JSON.encode(my_object), it will behave almost exactly like the one in as3corelib. I say “almost” because my version adds a space after each “:” and “,” even in default mode. Update: I’ve changed it so it behaves exactly like the serializer in as3corelib, so no extra spaces are added unless you use the optional parameters described below.

If you want linefeeds and truly “pretty” output, you can add an optional second parameter, like so: JSON.encode(my_object, true). This will cause any array [ ] or object { } that would be longer than 60 characters to wrap onto new lines, which works out about right for my purposes.

You can also adjust the maximum line length with an optional third parameter, like this: JSON.encode(my_object, true, 10). This will cause any line longer than 10 characters to wrap. If you want every array [ ] and object { } to wrap, just use any number 2 or lower for this third parameter. If you want it to wrap everything but empty objects or arrays, use 3 for this parameter.

Let’s see it in action.

package
{
    import com.maccherone.json.JSON;
    import flash.display.Sprite;    
 
    public class Tester extends Sprite
    {
 
        public function Tester()
        {
            var obj1:Object = {
                commit: {file: "commit.xml"},
                commit_detail: {file: "commit_detail.xml"},
                file: {file: "file.xml"},
                person: {file: "person.xml"},
                count: [1, 2, 3]
            }
            trace("Just like as3corelib (no line feeds or spaces):\n" + JSON.encode(obj1) + "\n");
            trace('"Smart" linefeeds:\n' + JSON.encode(obj1, true) + "\n");
            trace("Only allow short lines:\n" + JSON.encode(obj1, true, 10) + "\n");
 
            var obj2:Object = {
                "glossary": {
                    "title": "example glossary",
                    "GlossDiv": {
                        "title": "S",
                        "GlossList": {
                            "GlossEntry": {
                                "ID": "SGML",
                                "SortAs": "SGML",
                                "GlossTerm": "Standard Generalized Markup Language",
                                "Acronym": "SGML",
                                "Abbrev": "ISO 8879:1986",
                                "GlossDef": {
                                    "para": "A meta-markup language, used to create markup languages such as DocBook.",
                                    "GlossSeeAlso": ["GML", "XML"]
                                },
                                "GlossSee": "markup"
                            }
                        }
                    }
                }
            }
            trace("A bigger example from JSON.org:\n" + JSON.encode(obj2, true));
        }
    }
}

The above code would result in the following output:

Just like as3corelib (no line feeds or spaces):
{"file":{"file":"file.xml"},"commit":{"file":"commit.xml"},"commit_detail":{"file":"commit_detail.xml"},"person":{"file":"person.xml"},"count":[1,2,3]}
 
"Smart" linefeeds:
{
    "file": {"file": "file.xml"},
    "commit": {"file": "commit.xml"},
    "commit_detail": {"file": "commit_detail.xml"},
    "person": {"file": "person.xml"},
    "count": [1, 2, 3]
}
 
Only allow short lines:
{
    "file": {
        "file": "file.xml"
    },
    "commit": {
        "file": "commit.xml"
    },
    "commit_detail": {
        "file": "commit_detail.xml"
    },
    "person": {
        "file": "person.xml"
    },
    "count": [1, 2, 3]
}
 
A bigger example from JSON.org:
{
    "glossary": {
        "GlossDiv": {
            "GlossList": {
                "GlossEntry": {
                    "GlossSee": "markup",
                    "GlossTerm": "Standard Generalized Markup Language",
                    "ID": "SGML",
                    "GlossDef": {
                        "para": "A meta-markup language, used to create markup languages such as DocBook.",
                        "GlossSeeAlso": ["GML", "XML"]
                    },
                    "Abbrev": "ISO 8879:1986",
                    "Acronym": "SGML",
                    "SortAs": "SGML"
                }
            },
            "title": "S"
        },
        "title": "example glossary"
    }
}

Note that (unlike XML) the order of the elements in a JSON object { } is indeterminate. Of course, the order of an array [ ] is preserved.

You can download it from here.

 

Measuring Craftsmanship


I’m on board with the Agile approach to software development, and I have a strong history with process approaches to improvement (ISO-9000, CMM, CMMI, TSP, etc.). That said, I have always believed that the quality of the people doing the work is the biggest factor in success. The software estimation technique COCOMO reveals this: “personnel attributes” dominate just about all COCOMO estimation models.

I think David Starr over on Elegant Code hits the right note in pointing out how this manifests itself in his post on Measuring Craftsmanship. His post starts with a bit of a distraction, arguing against the idea of measuring Agile maturity. I say distraction because I think it’s possible to agree with his message about the importance of craftsmanship no matter how you feel about the idea of creating an Agile Maturity Model.

The emphasis needs to always be on the people doing the work. In particular, I like his idea of “picking a guild” as a source for your skills criteria. Because of the way the software industry works, each organization might be its own guild, so I think it will be hard to agree upon the list of guilds and find a set of criteria most appropriate for each of them, but I think it is possible to create a map between certain practices and required skills.

For instance, if you are going to rely upon refactoring and emergent design, you had better have strong design patterns skills. The same goes for tool usage. The practice of continuous integration requires build tool skills. Your team’s approach to software assurance also dictates the skills you need. If you rely heavily upon automated testing, then you need skills with automated testing tools and the patterns that enable design for testability. If you rely more upon inspection, mastering the skills in Spinellis’s Code Quality and Code Reading should be expected.

 