Announcing node-localstorage

A drop-in replacement for localStorage that runs on node.js.

I’m working on a project to do local caching using localStorage. I want it to work both in node.js and in the browser. I have a similar situation with my RallyAnalytics project where I use driverdan’s node-XMLHttpRequest as a drop-in replacement for the browser’s native XMLHttpRequest object. So, I decided that the best approach would be to build my caching mechanism on top of the localStorage interface. Since I couldn’t find one in the wild, I created node-localstorage. It might be useful for testing, or you may have a situation like mine where you want to run the same code on the server/desktop as you run in the browser.
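The Web Storage interface a drop-in replacement has to cover is pleasantly small. Here’s a minimal in-memory sketch of that interface in JavaScript — just an illustration of the API surface, not the actual node-localstorage implementation (which persists to disk):

```javascript
// Minimal in-memory sketch of the localStorage interface.
// The real node-localstorage persists each key to the file system;
// this toy version only shows what a drop-in replacement must provide.
class MemoryStorage {
  constructor() { this._data = {}; }
  get length() { return Object.keys(this._data).length; }
  setItem(key, value) { this._data[String(key)] = String(value); } // values coerce to strings
  getItem(key) {
    const k = String(key);
    return Object.prototype.hasOwnProperty.call(this._data, k) ? this._data[k] : null;
  }
  removeItem(key) { delete this._data[String(key)]; }
  key(n) {
    const k = Object.keys(this._data)[n];
    return k === undefined ? null : k;
  }
  clear() { this._data = {}; }
}

const storage = new MemoryStorage();
storage.setItem('answer', 42);          // coerced to "42", just like in the browser
console.log(storage.getItem('answer')); // "42"
console.log(storage.length);            // 1
```

Because the caching code only ever talks to this interface, the same module runs unchanged against the browser’s native localStorage or a node.js substitute.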

github repository (including usage and installation documentation)

Posted in Code, CoffeeScript, node.js

The ODIM and Performance Index Frameworks

Slides from my talk in Silicon Valley on my ODIM framework and Rally’s Performance Index framework for agile measurement.

Posted in Presentation

The CoffeeScript emperor has no clothes

Subtitle: when CoffeeScript rock stars run around naked in public

Just because you can do something doesn’t mean you should. Leaving the parentheses out of your CoffeeScript might make your code look a bit more “natural”, but it almost always makes the code less readable and in many cases, it makes it impossible for the CoffeeScript compiler to understand your intent. It’s more “natural” to go around naked but you shouldn’t. Just like you need to cover your private parts when you go outside, I feel strongly that you should properly clothe your CoffeeScript function/method parameters in parentheses.

In the last few months alone, on StackOverflow, there have been at least six questions that I have flagged as caused by this issue.

In all of the cases above, there is an idiom that you can’t precisely/clearly express without parentheses around function/method parameters. Often, the coder realizes that it’s ambiguous but “hopes” that the compiler can figure out his intent. He posts a question on StackOverflow once he is able to show that it cannot. In all of these cases, the coder has learned CoffeeScript by looking at a bunch of examples that leave out the parentheses and he doesn’t realize that you can put them in… or he wants to know how the cool-kids express it without them. In each case, the answer is invariably, “Simply add parentheses and it will work as expected.”

If this is confusing the compiler and the new users who want to be CoffeeScript cool-kids, we CoffeeScript bloggers should all make an effort to post all examples with the parentheses left in. I’m going so far as to leave them in all of my code and my collaborators seem to appreciate it. If nobody ever needs to read your code, then feel free to walk around naked in a drug-addled state within the privacy of your own home. But as soon as you step out the door, or want anyone else to read your code…

It’s perfectly fine to leave them out of if-statements, while-loops, etc. We like our female rock stars to show a little cleavage and some folks get turned on by Mick Jagger’s tight leather pants.

Chaining (needed for compiler to understand intent)

someObject.slideLeft(4).invert().slideUp(10)

Calling a parameter-less method (also necessary)

someIterator.next()

Function/method calling (even when not chaining, for human clarity)

someFunction(parameterA, parameterB)

…I even favor including them when the last parameter is a callback (although I do cringe every time I see that lonesome closing paren a few lines down so I’m OK with you leaving them out in this case).

someMethodNeedingACallback(parameterA, (err, data) -> console.log(data) )

Functions with no parameters (could live without this, but IMHO, it’s more readable with it)

f = () -> return "hello" # notice how I also like to say "return". That's another clarity choice.

One last argument to try to convince you… Python is the most readable programming language out there. The Python BDFL has come out time and again in favor of readability even at the expense of conciseness. Python used to have a parentheses-less print statement, but Guido feels so strongly that this was a mistake, that he went through the very painful backward-incompatible change to require parentheses for print function calls in Python 3.0. I would go so far as to argue that we should remove this “feature” from the CoffeeScript spec but even if we don’t, we should stop using it.

I know… you want to show that you are a CoffeeScript cool-kid. You can write a one-line list comprehension that is like a brilliant guitar solo. But I think it’s cooler for my code to be readable and for the compiler to precisely understand what I intend. The list comprehension guitar solo may be showing off but we expect our cool rock stars to show off. However, it’s uncool (even for rock stars) to be drug-addled running around naked in public. Be the rock star without the drugs… or running around without your function/method calls properly clothed in parentheses.

Posted in CoffeeScript

Introducing coffeedoctest


The examples you add to document your project are like a map to the buried treasure that is your library/API/tool/etc. But if the examples are wrong, it’s like labeling the map with “promised land” right over the spot where it should say, “there be dragons”.

It’s less about testing your code with your documentation than the other way around. Make sure that the examples in your documentation stay current with your code. coffeedoctest is a way to test your documentation with your code… to make sure that the map matches the terrain.

If you’ve spent any time working in Python, then you are probably familiar with doctest. Coffeedoctest is built along the same lines.
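The core idea is simple enough to sketch in a few lines of JavaScript: scan example lines for expected-output annotations, evaluate the expression, and compare. This is a toy illustration of the doctest concept only — the annotation format and function name here are made up, and coffeedoctest’s real implementation works on CoffeeScript comment blocks:

```javascript
// Toy doctest-style checker. Lines of the (hypothetical) form
//   <expression>  # => <expected>
// are evaluated and compared against the expected output.
// Illustrates the doctest concept only -- not coffeedoctest itself.
function runDocTests(exampleLines) {
  const failures = [];
  for (const line of exampleLines) {
    const match = line.match(/^(.*?)\s*#\s*=>\s*(.*)$/);
    if (!match) continue; // not an annotated example line
    const [, expression, expected] = match;
    const actual = String(eval(expression)); // docs get run, just like code
    if (actual !== expected) {
      failures.push(`${expression} produced ${actual}, expected ${expected}`);
    }
  }
  return failures;
}

const examples = [
  '1 + 1        # => 2',
  '"ab" + "cd"  # => abcd',
];
console.log(runDocTests(examples)); // [] when the map matches the terrain
```

When an example in the docs drifts out of sync with the code, the checker reports it just like a failing unit test.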

For the entire story, visit the coffeedoctest project page on github.

Posted in Code, CoffeeScript, node.js

REST for a read-only API. I finally understand the benefit of HATEOAS.


I’m neck deep in RESTful APIs. I’m in the late stages of designing the Analytics 2.0 APIs in my job as Product Owner for Analytics at Rally Software. I’m in the early stages of designing an API for the Lean Software and Systems Consortium’s (LSSC) Kanban Benchmarking and Research Program… and just today, I met with Steve Speicher and others from the Open Services for Lifecycle Collaboration (OSLC) to talk about joining one of the working committees designing RESTful services for data interchange.

I have been struggling with one aspect of “pure” REST APIs as defined by the four constraints in Roy Fielding’s dissertation. In particular, it seems that almost none of the popular “RESTful” APIs on the internet implement the Hypermedia as the Engine of Application State (HATEOAS) constraint. If the constraint is so critical, as Dr Fielding insists, then why is it so often ignored? The idea is that operations that you perform on a resource should be made visible to clients not via documentation but rather via links in the responses to a few published endpoints. A good citizen consumer of a truly RESTful API will not know the link to transfer bank funds, or approve a leave request, or (any action) on (any resource). Rather, it will “discover” the links for these actions in the initial response to a query against the bank account, or leave request, or (any resource).

My main problem with valuing this constraint as highly as Dr Fielding seems to is that it doesn’t seem to greatly reduce coupling as intended. The client still needs to KNOW that you want to transfer, approve, or (some action). Knowing the link to do so beforehand seems like a fractional increase in coupling. That said, if it doesn’t cost too much, it would be nice to follow this recommendation because it would fractionally reduce coupling.

That brings me to my second conundrum. The Analytics 2.0 APIs are all read-only. What “application state” (as opposed to resource state) is there in such a situation? Then today, it finally came to me… the only application state (that I can think of) in a read-only API is around paging. The first page response to a multi-page request should include the link to the second page, and so on. This is particularly useful for Rally’s Analytics 2.0 Lookback API because we were already recommending that the requests for the subsequent pages include an additional clause to ensure that the data returned for the subsequent pages is for the same moment in time as the first page. We had little confidence that this recommendation would be followed. Now, I’m specifying that we add a NextPage link to each response. We may also remove the “start” parameter from the API as it enables a tighter coupling than the use of the NextPage link.
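Here’s a sketch of what that looks like on the server side. The endpoint path, parameter names, and response fields below are illustrative assumptions, not the actual Lookback API contract — the point is that the NextPage link carries the same snapshot timestamp forward, so the client never has to construct (or get wrong) the “same moment in time” clause itself:

```javascript
// Sketch of HATEOAS-style paging for a read-only API.
// Endpoint and field names are hypothetical, not the real Lookback API.
function pageResponse(results, pageIndex, pageSize, asOf) {
  const page = results.slice(pageIndex * pageSize, (pageIndex + 1) * pageSize);
  const hasMore = (pageIndex + 1) * pageSize < results.length;
  return {
    Results: page,
    TotalResultCount: results.length,
    // The NextPage link pins the same asOf timestamp, so every page
    // reflects the same moment in time. That link IS the application
    // state the hypermedia constraint is about.
    NextPage: hasMore
      ? `/snapshots?page=${pageIndex + 1}&pagesize=${pageSize}&asOf=${asOf}`
      : null,
  };
}

const data = ['a', 'b', 'c', 'd', 'e'];
const first = pageResponse(data, 0, 2, '2011-06-01T00:00:00Z');
console.log(first.NextPage);
// "/snapshots?page=1&pagesize=2&asOf=2011-06-01T00:00:00Z"
```

A well-behaved client just follows NextPage until it is null, which is exactly why exposing a separate “start” parameter invites tighter coupling than the link.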

Posted in Ongoing work, Software craftsmanship

GitHub is a social media platform



My first few days on GitHub reminded me of my first few days on Facebook…

I’ve been using git (not well) for a while at Rally, but in the last several weeks I started using GitHub and I can’t say enough how pleasant the experience has been.

I just wanted to tweak the CoffeeScript.mode of Panic!’s Coda editor, which was provided not by Panic! but hosted on GitHub by some other code-head like me. I downloaded it, edited it to add the functionality that I wanted, and was on my merry way to using it, when I thought, “maybe I should share my improvements back to the community that was kind enough to provide me with the mode in the first place.”

So, I sent a message on GitHub and included my code changes in the message. The repository owner sent me back a message encouraging me and suggesting that I submit a “pull-request.” What the heck is a “pull-request”? I thought, so I started investigating. I learned that to do a pull request, I couldn’t just download the project’s contents; I had to actually fork it. I’d seen the “Fork me on GitHub” banners on every GitHub page but I never thought much about it before. Just a few clicks and I’m signed up with an account, and a few minutes later, I’ve forked the project, committed my edits, and submitted my pull-request.

The repository owner wanted me to upgrade my code to support more use cases so another round of edits ensued and then he accepted the request. Since then, I’ve created a couple of other repositories of my own and started managing my code on GitHub. I’ve downloaded the Mac GitHub client and I’m fully sold.

Then today, I was describing the entire experience to someone and it occurred to me that the appeal of GitHub is not technical; it’s social. It has the same sort of pull as social media. My first few days on GitHub reminded me of my first few days on Facebook… I had to create a Facebook account and started using it because someone tagged me in their 20 interesting things about me post. Facebook’s value is directly proportional to how many of my friends and family use it. Similarly, GitHub’s value to me is directly proportional to how many projects that I use are on it… and how much of the potential audience for Lumenize (my as yet unreleased PhD tools that I hope to commercialize… or at least popularize) are on there. I’m now trying to think of ways to redirect Lumenize to be more appealing to the GitHub audience. If folks can fork Lumenize and help it to grow, all the better.

Posted in Software craftsmanship

My first Panic! Coda plugin


I’ve been doing more coding in CoffeeScript and as any good craftsman knows, you must sharpen the saw from time to time. This time, I added some convenience commands for doing line manipulation by writing my very own Panic! Coda plugin. It will allow you to delete, duplicate, and move lines with a keystroke. There is also a convenience shortcut for wrapping a variable in console.log.

It was actually pretty easy using Panic!’s Coda Plug-in Creator. I wrote my text manipulation code in javascript as shell scripts that would run under node.js and simply dragged them into the Coda Plug-in Creator.

You can find it on GitHub here.

Posted in CoffeeScript

Coda vs WebStorm vs TextMate for CoffeeScript


I recently started rewriting all of my PhD tools from ActionScript to CoffeeScript. I’m loving the python-like syntax and I’m very happy with the language as well as the ecosystem around CoffeeScript and node.js. Tools are an important part of the experience, so I’ve played around with several editors/IDEs for writing CoffeeScript. Here is a quick comparison.

Coda

This is probably my favorite environment right now. I’m using a custom CoffeeScript.mode that I forked from Sean Durham to add navigator support.

Pros:

  • Nice clean interface written for the Mac
  • Syntax highlighting that works in both .coffee and CakeFile
  • Instant startup/shutdown and low resource usage. I can code on battery for 4+ hours on my MacBook Pro.
  • Preview mode for HTML.
  • Hackable. I already upgraded the 3rd party mode file. I have thoughts on plugins for running unit tests and reporting code coverage.

Cons:

  • Only Subversion integration. Personally, I never use the source code integration built into editors and IDEs, so this is not a problem for me. But EVERYONE uses git for CoffeeScript. I just started using GitHub for Mac. I’ll see how that goes but I can always use the command line.
  • No code folding. Again, not a feature that I miss too much because I can put each class into a different file. The navigator is also a decent substitute.

WebStorm

Pros:

  • Refactoring.
  • NodeUnit support built in. Nice!!!
  • Keystrokes that make sense to me. Ctrl-Y means yank this line. It’s the only one of the three that does that.

Cons:

  • Still beta quality
    • The syntax highlighter doesn’t like multi-line comments “###”…”###” to have any “#” inside. Until they fix this, I’m not going to use it again.
    • My .idea folder got corrupted several times and each time it lost my configuration foo for running nodeunit tests.
  • Heavyweight. It’s essentially the IntelliJ IDE with java editing (and a bunch of other functionality) removed.

TextMate

I started writing CoffeeScript in TextMate and I still pull up .coffee files for a quick look in TextMate so this is my backup.

Pros:

  • Syntax highlighting that works. I think TextMate’s is the basis for the Coda one that I use.
  • Folder as a project. I like this concept and you can sorta do this in Coda but it’s nice to open one project/folder with tabs for all of the files that are part of that project but still be able to open separate windows for other files that I randomly need to edit. If you right-click…open in Coda, it’ll create another tab in the “workspace” that you are working in even if the file has nothing to do with that project.

Cons:

  • None really, other than missing some of the Pros of Coda.
Posted in CoffeeScript, Software craftsmanship

The defect 4-step



What to do when you find a defect

The defect 4-step is not a new dance craze. It’s a way to accomplish organizational learning from the opportunity provided by a defect. So, here is what you should do when you find a defect:

  1. Fix it.
  2. Find and fix others “like it”. “Like it” could be along several dimensions and at multiple levels of abstraction (see below). Code query can really help here but sometimes a manual analysis can be effective.
  3. Prevent future occurrences “like it”…
    • from leaving the developer’s desk (preferable). This is often satisfied with some form of static analysis that runs during the build process or at the developer’s desk (think, upgrading the compiler). Frequently, the best you can do is share the bad news or conduct training so developers know what patterns lead to defects “like this”. Sometimes, you can change your technology or the programming paradigm to make the defect impossible. Switching from C to Java to avoid certain memory problems, for example.
    • from getting into production (fallback). Tests that run overnight or manual tests that become part of the test plan are common approaches. Adding the meta-pattern(s) to a code review checklist can also help especially if it increases awareness and prevents defects “like it” from ever being written. Long-running overnight static analysis is sometimes in this category but it is preferable to run static analysis in the build process or on the desktop.

Why is this called a 4-step?

When I originally created this post, I had the two prevention “steps” broken out. In reality, you generally only do one or the other “prevent” steps. So, I guess I could rename this the defect 3-step.

What does “like it” mean?

The definition of “like it” for any given defect might be along several different dimensions and at multiple levels of abstraction. An example is the best way to illustrate this. We recently had a “stop-the-release” P1 defect found on the Friday before a Saturday release. An entire part of the application didn’t work in certain web browsers. There was an extra trailing comma added to the javascript that most browsers ignore but some complain about as a parsing error. Normal testing includes those browsers but didn’t catch it because testing was done on a version before the defect was injected. The definition of “like it” in this circumstance is along two different dimensions. One dimension of “like it” might try to address the need to make sure testing occurs on the latest version. A second dimension of “like it” might target the trailing comma pattern. The simple answer in this case was to implement JSLint, a static analysis tool for javascript. This is a great solution because it is at a higher level of abstraction from merely fixing a trailing comma defect. JSLint can be used to mitigate against ANY javascript parse error. It’s often more work to address the higher level of abstraction but it’s also usually more valuable. This trade-off decision should be made on a case-by-case basis but higher levels of abstraction should generally be favored. Testing is rarely able to climb the abstraction ladder so static analysis is favored.
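To make the abstraction-ladder point concrete: the lowest rung is a check that only knows about the one pattern that bit you. Here is a deliberately naive JavaScript sketch of that kind of check — real tools like JSLint parse the source properly, while a regex like this is only a rough illustration (it would also flag commas inside string literals):

```javascript
// Naive sketch of a pattern-level static analysis check.
// Real tools like JSLint parse the source; this regex illustration
// only catches the one trailing-comma pattern and has false positives
// (e.g. a ",]" inside a string literal).
function findTrailingCommas(source) {
  const issues = [];
  source.split('\n').forEach((line, i) => {
    if (/,\s*[\]}]/.test(line)) {
      issues.push(`line ${i + 1}: trailing comma before ] or }`);
    }
  });
  return issues;
}

const good = 'var browsers = ["ie", "firefox"];';
const bad  = 'var browsers = ["ie", "firefox",];'; // some browsers choke on this
console.log(findTrailingCommas(good)); // []
console.log(findTrailingCommas(bad));  // ["line 1: trailing comma before ] or }"]
```

Wiring a full parser-based tool like JSLint into the build is the higher rung: it catches ANY javascript parse error, not just this one pattern, which is exactly the trade-off described above.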

Posted in Software craftsmanship

How can an Agile team influence the quality of upstream components


Q: I would be interested in anything you might have about rolling out Agile on a team that depends on components created by non-Agile teams. In particular, how that is affected by different approaches to Quality between teams in the same company (and no, we don’t have a common standard, only on paper). I was reading about the GM/Toyota experiment (the first TPS plant in the US) and how GM had issues trying to roll it out to other plants, one of the biggest being the fact that unlike in Japan, they didn’t have the power to push their process down to their producers. They quickly found out that they could not build a Quality car without Quality components; and I am afraid we will find the same here.

A: The reason quality is generally higher with the output of Agile processes is related to the nature of the feedback loops built into Agile. We get feedback on the product/design much more rapidly. Practices like pair programming or lightweight peer review, automated testing, short iterations, automated build/continuous integration, and close collaboration with the customer/proxy all tend to give us more feedback on the product/design… which tends to lead to higher quality. My recommendation would be to try to drive as many of those feedback loops upstream as possible. You don’t control their process but you may be able to influence it at the boundary between them and you.

Close collaboration. The lowest hanging fruit is probably close collaboration with the customer. In this instance, the Agile team is the customer. The non-Agile teams are the vendors. I’m thinking of setting up regular demo/review meetings (probably on the cadence of the Agile team – short iterations). You may also be able to visit (virtually or physically) on a near-daily basis.

Automated testing. You might also try setting up automated testing at the interface level for the components delivered by the upstream teams. You’ll have to avoid the trap of using this as “contract negotiation over collaboration” but that is in how you handle it. The key here is that you want them to think of the tests as a tool to help them do their job as opposed to a way to enforce something. This means that they will need the ability to run the tests before delivering to you. It would be better still if they owned the tests and you reviewed them. No matter who owns them, the tests become the specification for the API, which is a good Agile smell.

Peer review. At this point, you are collaborating/reviewing the test code. This might then lead to a situation where you might be able to do peer review of their production code. I’d prefer a peer review approach that helped them improve their code (and learn how to write better code in the future) over one that just allowed you to fix their code after the fact.

Automated build. If you were to give them access to your build process, they would also be able to test the compile-time agreement between their code and yours. This comes with two immediate benefits: (1) it serves as an additional automated test of the interface, and (2) this (combined with the other automated tests) gives them more confidence to refactor their code and make improvements. The assumption here is that most teams know their code has warts but they are afraid to modify it to improve it because they are afraid of breaking code that depends upon it. Running your build script lowers the fear.

There is a third (and potentially more powerful) benefit to a shared build process. It provides you with a place to plug in other quality improving tests and analysis. The automated testing that I proposed above are tests that run against their upstream code. With an automated build, you could include tests that run against your downstream (but higher level) code. This means that they could see if changes that they make break your higher level functionality. You’d have to use a stable version of your source so they could be sure the problem was theirs but a distributed source control tool or careful branch management could overcome that obstacle. The build is also a common place to run automated bug finders like FindBugs or even custom analysis like a tool to highlight any changes in the calling signature.

Please let me know if any of this helped. Maybe I can refactor and improve my answer (upstream product) based upon your feedback (from downstream). ;-)

Posted in Software craftsmanship