Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Dojo: Twice As Fast When It Matters Most

Some folks have noticed a new landing page for dojotoolkit.org, one that includes hard numbers about the performance of Dojo vs. jQuery. Every library makes tradeoffs for speed in order to provide better APIs, but JavaScript toolkit performance shootouts obscure that reality more often than not. After all, there would hardly be a need for toolkits if the built-in APIs were livable. Our new site isn't arguing that Dojo gives you the fastest possible way to do each of the tasks in the benchmark; all we argue is that we provide the fastest implementation that you'll love using.

[Chart: TaskSpeed timings, Dojo vs. jQuery. Smaller is better.]

I gathered the numbers and stand behind them, so let me quickly outline where they come from, why they're fair, and why they matter to your app.

I took the average of three separate runs of the TaskSpeed benchmark, comparing the latest versions of Dojo and jQuery. The numbers were collected on isolated VMs on a system doing little else. You may not be able to reproduce the exact numbers, but across a similar set of runs, the relative timings should be representative.
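The math behind the headline number is nothing exotic. Here's a minimal sketch of how per-library averages and the resulting ratio might be computed; the timings in it are invented for illustration, not the published results:

    // Hypothetical per-library totals (in ms) from three TaskSpeed runs.
    // These numbers are made up for illustration only.
    var runs = [
        { dojo: 1120, jquery: 2250 },
        { dojo: 1098, jquery: 2310 },
        { dojo: 1145, jquery: 2275 }
    ];

    function average(samples, lib){
        var total = 0;
        for(var i = 0; i < samples.length; i++){
            total += samples[i][lib];
        }
        return total / samples.length;
    }

    // The ratio is what should survive from machine to machine, even
    // when the absolute timings don't.
    var ratio = average(runs, "jquery") / average(runs, "dojo");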

So why is TaskSpeed a fair measuring stick? First, it performs representative tasks, and the runtime harness is calibrated to ensure statistically significant results. Second, the test implementations for each library are written by that library's own authors: the Dojo team contributed the Dojo versions of the baseline tasks, and the jQuery team contributed theirs. If any library's authors want to take issue with the tests or the results, they need only send Pete a patch. Last, the tests run in relative isolation in iframes. That isn't bulletproof -- GC interactions can do strange things, and I've argued for longer runs -- but it's pretty good as these things go. Taking averages of multiple runs was, in part, a hedge against these problems.
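For a sense of what a contributed test looks like, here's a sketch in the spirit of a TaskSpeed task, written with Dojo 1.x APIs. The task name and the returned-count convention are illustrative of the harness's general shape, not copied from the actual suite:

    // Each task does its DOM work and returns a count that the harness
    // can use to verify the work actually happened.
    window.tests = {
        "create-list": function(){
            for(var i = 0; i < 250; i++){
                var ul = dojo.create("ul", { "class": "fromcode" }, dojo.body());
                dojo.create("li", { innerHTML: "item " + i }, ul);
            }
            return dojo.query("ul.fromcode").length;
        }
    };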

The comparison to jQuery is fair on the basis of both syntax and market share. If you compare the syntax used in Dojo's tests with the jQuery versions, you'll see that they're similarly terse and provide analogous conveniences for DOM manipulation, though the Dojo versions lose the brevity race in a few places. That's the price of speed, and TaskSpeed makes those design decisions clear. As for market share, I'll let John do the talking. It would be foolish of me to suggest that we should be comparing Dojo to some other library without also suggesting that his market-share numbers are wrong, and I doubt they are.
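To make the syntax comparison concrete, here's the same small task written both ways. This is a hedged sketch using era-appropriate APIs from each library, not code lifted from the TaskSpeed suites:

    // jQuery: append a new item to every matching list.
    $("ul.demo").append("<li>new item</li>");

    // Dojo: the same task. A touch more verbose, but it avoids parsing
    // an HTML string, one of the places where libraries can trade
    // convenience for speed.
    dojo.query("ul.demo").forEach(function(ul){
        dojo.create("li", { innerHTML: "new item" }, ul);
    });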

Given all of that, do the TaskSpeed numbers actually matter for application performance? I argue that they do, for two reasons. First, TaskSpeed is explicitly designed to capture common-case web development tasks. You might argue that the weightings should be different (a discussion I'd like to see happen more openly), but it's much harder to argue that the tests do things real applications don't. Second, because the toolkit teams contributed the test implementations, the tests show how developers should approach each task using a particular library. It's also reasonable to suspect that they demonstrate the fastest way to accomplish each task in each library; it's a benchmark, after all. That dynamic makes plain the tradeoffs between speed and convenience in API design, leaving you to make informed decisions about what each convenience costs. The APIs, after all, are the mast your application will be lashed to.
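On the weighting question: nothing stops you from re-weighting the per-test timings to match your own application's mix. A hypothetical sketch, with invented test names and numbers:

    // Re-weight per-test timings (ms) by how often your app does each
    // kind of task. Both objects here are hypothetical.
    var timings = { "create": 120, "query": 80, "destroy": 40 };
    var weights = { "create": 0.5, "query": 0.3, "destroy": 0.2 };

    var score = 0;
    for(var test in timings){
        score += timings[test] * (weights[test] || 0);
    }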

I encourage you to go run the numbers for yourself, investigate each library's contributed tests to get a sense for the syntax that each encourages, and get involved in making the benchmarks and the libraries better. That's the approach that the Dojo team has taken, and one that continues to pay off for Dojo's users in the form of deliberately designed APIs and tremendous performance.