Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Planet Chromium

All the Chromium news that I care about is now being aggregated at Planet Chromium, joining the similarly awesome Planet Webkit.

View-Source Follow-Up

One of the points I made during last Saturday's panel was that the further down the path we go with JavaScript, the more pressure there is to use it in ways that defeat view-source. Brendan called me out for seemingly being unaware of beautifiers, which can help correct the imbalance, but I think that response (while useful) is orthogonal to the thrust of my argument.

Indeed, I hadn't personally used the excellent jsbeautifier.org, instead relying on my own hacked-up copy of Rhino to beautify when I need to, but neither tool sufficiently reverses the sorts of optimizations being employed by GWT and the even more aggressive Closure Compiler. Far from the mere obfuscation of ShrinkSafe and its brethren, these new compilers apply multi-pass AST-based transforms to an entire body of code, performing per-browser optimizations, type inference, annotation-based optimizations, loop-invariant hoisting, function inlining, dead-code removal, and global code-motion optimizations that produce code different not only in style but in flow of control.

The results are nothing short of stunning. The Closure Compiler can deliver code that's much, much smaller than I can wring out by hand and that performs better to boot. It's also totally unrecognizable. De-obfuscators have no hope in this strange land -- brand new tools akin to symbol servers and WinDbg-style debuggers are needed to work with the output in a meaningful way. I argued in the panel and in the comments of my last post on the topic that when we get to this place with JavaScript, the product is functionally indistinguishable from a bytecode format like SWF or Java, and the effects on the learning-lab nature of the open web are the same: less ability to easily share techniques, a smaller group of more professional users, and a heavier reliance on tooling for generating content.
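To make the contrast concrete, here's a small hypothetical sketch of the kind of whole-program transformation I'm describing. The input and output below are invented for illustration (this is not actual GWT or Closure Compiler output), but the transforms shown (function inlining, dead-code removal, constant folding, and symbol renaming) are the ones that make the result unreadable:

    // Hypothetical "before": readable, view-source-friendly code.
    var DEBUG = false;

    function area(radius) {
      return Math.PI * radius * radius;
    }

    function logDebug(message) {
      if (DEBUG) {
        console.log(message);
      }
    }

    function totalArea(circles) {
      var total = 0;
      for (var i = 0; i < circles.length; i++) {
        logDebug("processing circle " + i);
        total += area(circles[i].radius);
      }
      return total;
    }

    // Roughly the shape of what an aggressive whole-program compiler might emit:
    // area() inlined, logDebug() eliminated as dead code (DEBUG is always false),
    // Math.PI folded to a literal, and every symbol renamed.
    function a(b){for(var c=0,d=0;d<b.length;d++)c+=3.141592653589793*b[d].radius*b[d].radius;return c}

No beautifier can recover the original names, the removed debugging path, or the intent from that last line; that's the point.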

If we assume that the furthest down the code-centric path we'll go are the Dojo- and jQuery-style augmentations of existing content, then a simple de-obfuscator is sufficient. But I'm afraid the current generation of high-end sites and apps points in a different direction, one that is less hopeful, and one that implies to a greater extent that the browsers must lead the way out of the wilderness by creating new tags and CSS properties to help re-democratize the process of creating applications. We've already seen the alternatives, and while they may be elegant, they lack leverage and the second-order beneficial effects that have made the web as successful as it is today.

If HTML is just another bytecode container and rendering runtime, we'll have lost part of what made the web special, and I'm afraid HTML will lose to other formats by willingly giving up its differentiators and playing on their turf. Who knows, it might turn out well, but it's not a vision of the web that holds my interest.

SxSWi '10 Reflections

I first attended SxSWi amidst the rubble of the dot-com crash, a time when the interactive festival filled only one hallway of the third floor of the Austin Convention Center. It's changed a lot since then, mostly in scale.

The lack of technical content is something I've bemoaned in years past but have finally come to accept. I was grateful to be on an excellent panel this year that touched on topics I both think about and care a lot about. Our panel was also blessed with amazing audience engagement from people I respect. I chalk most of that up to Michael Lucaccini and Chris Bernard's excellent prep and panelist selection. Any panel with Chris Wilson and Aza Raskin on it is going to be good.

The explosion of SxSWi has not been a good thing, and I went in the hope that contraction had started as the economic disaster crimps budgets. Guess not. SxSWi was bigger than either the music or movie portions of the conference for the first time this year. Others have commented insightfully on the problems of scale, so I'll spare you the rundown of what makes an enormous conference uninviting, but suffice it to say that SxSWi seems to have gone past some crucial limit and will continue its inexorable expansion until something gives in a dramatic way. Gravity cannot be reasoned with.

Unlike some of those who found themselves post-hoc disappointed, I really didn't think there was much chance of having a good time. Luckily I was wrong -- not so much because it suddenly got better than in '08, the last time I went, but because I had learned how to cope better with the scale. My brother lives near Austin, and getting to hang out with him made the entire experience better. I also employed a series of strategies that helped me have an experience I'd gladly repeat.

I think all of this implies that folks who haven't been to SxSWi before aren't going to be able to have the same sort of open, trusting experience that I had when I first started attending, and that's a real loss; but at least I now feel like I can go and have a good and productive time. I'm grateful to have gone this year and I'm looking forward to next year already.

dojo.connect: Online Dojo Conference, Feb 10-12

It's been a rough year (or two) in the tech industry, and conference budgets aren't what they once were. Dustin Machi's doing his bit to keep the Dojo community connected by starting a fully virtual set of conferences, the first of which is 3 days full of Dojo goodness -- dojo.connect.

I'll be there virtually and I hope you can join us. The lineup is spectacular, and I can't think of a more concentrated way to get in touch with the community short of becoming a committer.

Dojo: Twice As Fast When It Matters Most

Some folks have noticed a new landing page for dojotoolkit.org, one that includes hard numbers about the performance of Dojo vs. jQuery. Every library trades some speed for better APIs, but JavaScript toolkit performance shootouts obscure that reality more often than not. After all, there would hardly be a need for toolkits if the built-in APIs were livable. Our new site isn't arguing that Dojo gives you the fastest possible way to do each of the tasks in the benchmark; all we argue is that we provide the fastest implementation that you'll love using.

Smaller is better.

I gathered the numbers and stand behind them, so let me quickly outline where they come from, why they're fair, and why they matter to your app.

I took the average of three separate runs of the TaskSpeed benchmark, comparing the latest versions of Dojo and jQuery. The numbers were collected on isolated VMs on a system doing little else. You may not be able to reproduce the exact numbers, but across a similar set of runs, the relative timings should be representative.

So why is TaskSpeed a fair measuring stick? First, it does representative tasks and the runtime harness is calibrated to ensure statistically significant results. Secondly, the versions of the code for each library are written by the library authors themselves. The Dojo team contributed the Dojo versions of the baseline tasks and the jQuery team contributed theirs. If any library wants to take issue with the tests or the results, they only need to send Pete a patch. Lastly, the tests run in relative isolation in iframes. This isn't bulletproof -- GC interactions can do strange things and I've argued for longer runs -- but it's pretty good as these things go. I took averages of multiple runs in part to hedge against these problems.
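For those who haven't looked under the hood, the two ideas at work (per-library iframes for isolation, plus averaging several timed runs) are easy to sketch. The code below is a simplified illustration of those ideas, not the actual TaskSpeed harness, and the tests object name is just a placeholder:

    // Simplified illustration of the harness ideas, not the TaskSpeed source.
    // Each library's page loads only that library plus its contributed tests.
    function runIsolated(libraryPageUrl, taskName, runs, callback) {
      var iframe = document.createElement("iframe");
      iframe.style.display = "none";
      iframe.src = libraryPageUrl;
      iframe.onload = function () {
        var win = iframe.contentWindow;
        var total = 0;
        for (var i = 0; i < runs; i++) {
          var start = new Date().getTime();
          win.tests[taskName](); // implementation contributed by the library's own team
          total += new Date().getTime() - start;
        }
        document.body.removeChild(iframe);
        callback(taskName, total / runs); // report the average across runs
      };
      document.body.appendChild(iframe);
    }

Running each library in its own frame keeps one toolkit's globals and DOM churn from polluting another's timings, which is most of what "relative isolation" buys you.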

The comparison to jQuery is fair on the basis of syntax and market share. If you compare the syntax used for Dojo's tests with the jQuery versions, you'll see that they're similarly terse and provide analogous conveniences for DOM manipulation, though the Dojo versions lose the brevity race in a few places. That's the price of speed, and TaskSpeed makes those design decisions clear. As for market share, I'll let John do the talking. It would be foolish of me to suggest that we should be comparing Dojo to some other library without simultaneously suggesting that his market share numbers are wrong, and I doubt they are.
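To give a flavor of what "analogous conveniences" means, here's a contrived side-by-side (not taken from the TaskSpeed suite) of the same small DOM task written with each library's everyday idioms:

    // Contrived example, not a TaskSpeed test: append 100 list items to
    // <ul id="list">, then restyle the even ones.

    // jQuery
    for (var i = 0; i < 100; i++) {
      jQuery("<li class='item'>item " + i + "</li>").appendTo("#list");
    }
    jQuery("#list li:nth-child(even)").addClass("highlight");

    // Dojo 1.x
    for (var j = 0; j < 100; j++) {
      dojo.place("<li class='item'>item " + j + "</li>", "list");
    }
    dojo.query("#list li:nth-child(even)").addClass("highlight");

Neither version is painful to read or write; the differences that matter show up in how fast each library can do this sort of thing at scale, which is exactly what TaskSpeed measures.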

Given all of that, do the TaskSpeed numbers actually matter for application performance? I argue that they do for two reasons. First, TaskSpeed is explicitly designed to capture common-case web development tasks. You might argue that the weightings should be different (a discussion I'd like to see happen more openly), but it's much harder to argue that the tests do things that real applications don't. Because the toolkit teams contributed the test implementations, they provide a view to how developers should approach a task using a particular library. It's also reasonable to suspect that they demonstrate the fastest way in each library to accomplish each task. It's a benchmark, after all. This dynamic makes plain the tradeoffs between speed and convenience in API design, leaving you to make informed decisions based on the costs and benefits of convenience. The APIs, after all, are the mast your application will be lashed to.
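As a concrete illustration of why weightings matter, consider two made-up libraries and three made-up per-task timings (none of these numbers are TaskSpeed results; they exist only to show the arithmetic):

    // Invented numbers, purely to show how weightings change a composite score.
    var timings = { // milliseconds per task
      libraryA: { create: 120, query: 40, attach: 30 },
      libraryB: { create: 80,  query: 90, attach: 35 }
    };

    function weightedScore(lib, weights) {
      var score = 0;
      for (var task in weights) {
        score += timings[lib][task] * weights[task];
      }
      return score;
    }

    var createHeavy = { create: 0.7, query: 0.2, attach: 0.1 };
    var queryHeavy  = { create: 0.2, query: 0.7, attach: 0.1 };

    weightedScore("libraryA", createHeavy); // 95.0  -> libraryB wins this profile
    weightedScore("libraryB", createHeavy); // 77.5
    weightedScore("libraryA", queryHeavy);  // 55.0  -> libraryA wins this profile
    weightedScore("libraryB", queryHeavy);  // 82.5

The raw timings never change, but which library "wins" depends entirely on which tasks your application leans on. That's why I'd like the weighting discussion to happen in the open rather than being baked silently into a single headline number.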

I encourage you to go run the numbers for yourself, investigate each library's contributed tests to get a sense for the syntax that each encourages, and get involved in making the benchmarks and the libraries better. That's the approach that the Dojo team has taken, and one that continues to pay off for Dojo's users in the form of deliberately designed APIs and tremendous performance.
