Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Mark Thoma: "Tax Cuts Won't Build Schools"

Pro-cyclical arguments (the same ones that got us into this mess) hold that we shouldn't begin public works projects because they might take a while. Mark Thoma shreds them to little tiny bits.

I particularly enjoy how he notes that we've run a test of the "tax cuts solve everything" theory and how it's pretty clear that it failed. Miserably. While ballooning the deficit with nothing to show for it.

WebKit == Mobile

With the Pre/Mojo announcement, it's becoming clear that WebKit has mobile all sewn up. It bears listing who's betting on WebKit and where:

  - Apple: iPhone, Safari
  - Google: Android, Chrome
  - Nokia: Series 60 browser
  - Palm: WebOS
  - Adobe: AIR web runtime

From deep integration with the platform to being the platform, WebKit in various forms is how nearly every credible smartphone now "does" the web. The major outliers here are the WinCE devices, Blackberry, and whatever Sony's doing this week, but the writing is on the wall for them too. Mobile IE is a joke and the Blackberry succeeds in spite of its web experience. iPhone, Android, and Pre have raised the bar. The sucky-web center will not hold.

For a sizable chunk of the mobile browsers that a web developer would like to target today, that means that WebKit == Mobile. As predicted, the mobile world has beaten the desktop to the web of the future. It doesn't hurt that WebKit is making deep inroads into the desktop, either... but then you could reasonably assume that they pay me to say that.

So what does a WebKit-only world look like? And how portable are our desktop-web skills and tools? I did a quick set of experiments this past weekend to see how much of Dojo you'd still need and what we could leave behind. The results are encouraging. The headline numbers for dojo.js are roughly:

                        ShrinkSafe    +gzip
  Standard Build        79K           27K
  webkitMobile=true     56K           20K
  Savings               23K (29%)     7K (26%)

You can grab a copy of this pre-1.3.0 version here (webkitMobile.dojo.js).

The big size wins (in decreasing order) were:

  1. Moving to a QSA-only version of dojo.query()
  2. Being able to use intrinsics for dojo.forEach, etc.
  3. Dropping IE and FF-specific rendering, XHR, and style hacks
  4. Using a common closure wrapper for the entire core
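The second item above is a good illustration of where the bytes go. A minimal sketch (not Dojo's actual source) of the kind of branching a cross-browser build must ship and a WebKit-only build can drop, since every WebKit engine provides the JS 1.6 Array extras natively:

```javascript
// Hypothetical sketch: a cross-browser forEach must carry a fallback
// loop for engines without native Array extras. A WebKit-only build can
// assume the intrinsic and delete the else-branch entirely.
function forEach(arr, callback, thisObj) {
  if (Array.prototype.forEach) {
    // All WebKit builds take this path; delegate to the engine.
    Array.prototype.forEach.call(arr, callback, thisObj);
  } else {
    // Legacy branch: only needed for old IE and very old Gecko.
    for (var i = 0, n = arr.length; i < n; i++) {
      callback.call(thisObj, arr[i], i, arr);
    }
  }
}
```

Multiply that pattern across query, events, XHR, and style code and the 23K savings starts to make sense.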

And there's even more fruit on the vine. Without too much more work, I think I'll be able to drop the current animation system in favor of pure CSS animations, and I can significantly simplify the XHR code, which currently avoids the straightforward approach in order to dodge terrible memory leaks on IE and very old versions of FF.
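To give a flavor of the CSS-animation direction: instead of a script timer ticking style values every frame, the engine interpolates for you. The helper below is a hypothetical sketch (not Dojo API), using the WebKit-prefixed property names of the day:

```javascript
// Hypothetical sketch: declare a fade as a (WebKit-prefixed) CSS
// transition plus an end state, and let the engine do the per-frame
// work that a JS animation loop would otherwise do.
function fadeStyles(durationMs) {
  return {
    "-webkit-transition-property": "opacity",
    "-webkit-transition-duration": durationMs + "ms",
    "-webkit-transition-timing-function": "ease-in-out",
    "opacity": "0" // end state; the engine animates toward it
  };
}
// In a page, these pairs would be copied onto node.style; no script
// runs per-frame, which is a big win on mobile CPUs.
```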

All of this points to where we could be if the browsers just got collectively awesome all of a sudden. That's the good news. The downside is that still leaves a lot of janktastic spec mistakes to be worked around at nearly every level of the platform. This experiment suggests that there is still a price to be paid just to get the platform to a usable state. If we look at the APIs of Dojo, Prototype, or jQuery as a set of suggestions for the APIs that the web should expose, then it becomes pretty clear that we've still got a long long way to go. But we can do at least 30% better in the short run, and I'm very glad of that.

My friends over at SitePen beat me to the punch on my own patches, but that's how it goes sometimes. Props to them for that.

Whoa.

Via Dion, Palm's new Mojo framework for the Pre is based on Dojo!

As far as I know, it was a total surprise to the Dojo community (myself included). I can't wait to get started writing apps for this thing and see what device APIs Palm has surfaced.

OSCON 2009 Call For Papers Is Open!

I'm a bit tardy on this, but the OSCON 2009 Call For Papers is now open.

In the past couple of years the shift from desktop-centric to a more web-centric OSCON has continued to make the conference useful and engaging, and great work on topics like JavaScript/Ajax performance, Dojo, Comet, and many of the emerging back-end bits of infrastructure that make it all go have made my yearly trip to Portland worthwhile.

Great talks have gotten lost from the JavaScript/web track in years past because they've missed the submission deadline, so if you're hacking on something fascinating, now's the time to get that proposal in. Make sure you flag it with the right track when you submit (javascript, ajax, web, or whatever they're calling it this year), and don't hesitate to ping me if you're unsure about whether or not your talk got slotted correctly for the review process.

Census 2: More Than Just A Pretty Graph

Benchmarks are hard, particularly for complex systems. As a result, the most hotly contested benchmarks tend not to be representative of what makes systems faster for real users. Does another 10% on TPC really matter to most web developers? And should we really pay any attention to how any JS VM does on synthetic language benchmarks?

Maybe.

These things matter only in regards to how well they represent end-user workloads and how trustworthy their findings are. The first is much harder than the second, and end-to-end benchmarking is pretty much the only way to get there. As a result, sites like Tom's Hardware focus on application-level benchmarks while still publishing "low level" numbers. Venerable test suites like SPECint have even moved toward running "full stack" style benchmarks which may emphasize a particular workload but are broad enough to capture the wider system effects which matter in the real world.

Marketing departments also like small, easily digestible, whole numbers. Saying something like "200% Faster!" sure sounds a lot better than "on a particular test which is part of a larger suite of tests, our system ran in X time vs. Y time for the competitor". Both may be true, but the second statement gives you some context. Preferably even that statement would occur above an actual table of numbers or graphs. Numbers without context are lies waiting to be repeated.

With all of this said, James Ward's Census benchmark makes a valiant stab at a full-stack test of data loading and rendering performance for RIA technologies. Last month Jared dug further into the numbers and found the methodology wanting, but given some IP issues couldn't patch the sources himself. Since I wasn't encumbered in the same way I thought I might as well try my hand at it, but after hours of attempting to get the sources to build, I finally gave up and decided to re-write the tests. The result is Census 2.

This re-write has several goals.

The results so far have been instructive. On smaller data sets HTML wins hands-down for time-to-render, despite its disadvantage in over-the-wire size. For massive data sets, pagination saves even the most feature-packed of RIA Grids, allowing the Dojo Grid to best even XSLT and a more compact JSON syntax. Of similar interest is the delta between page cycle times on modern browsers vs. their predecessors. Flex can have a relatively even performance curve across host browsers, but the difference between browsers today is simply stunning.

Given the lack of an out-of-the-box paginating data store for Flex, RIAs built on that stack seem beholden to Adobe's LCDS licensing or are left to build ad-hoc pagination into apps by hand to get reasonable performance for data-rich business applications. James Ward has already exchanged some mail with me on this topic, and it's my hope that we can show how to do pagination in Flex without needing LCDS in the near future.

The tests aren't complete. There's still work to do to get some of the SOAP and AMF tests working again. If you have ideas about how to get this done w/o introducing a gigantic hairball of a Java toolchain, I'm all ears. Also on the TODO list is an AppEngine app for recording and analyzing test runs so that we can say something interesting about performance on various browsers.

Census 2 is very much an open source project and so if you'd like to get your library or technology tested, please don't hesitate to send me mail or, better yet, attach patches to the Bug Tracker.

Update: I failed to mention earlier that one of the largest changes in C2 vs. Census is that we report full page cycle times. Instead of reporting just the "internal" timings of an RIA which has been fully bootstrapped, the full page times report the time from page loading to when the output is responsive to user action. This keeps JavaScript frameworks (or even Flex) from omitting from the reports the price that users pay to download their (often sizable) infrastructure. There's more work to do in reporting overall sizes and times ("bandwidth" numbers don't report gzipped sizes, e.g.), but if you want the skinny on real performance, scroll down to the red bars. That's where the action is.
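The mechanics of full-page-cycle timing are simple in principle. A hypothetical sketch (not the actual Census 2 harness): stamp the clock in the very first script on the page, then measure again once the rendered output responds to input. Everything in between, including download, parse, framework bootstrap, and render, is counted.

```javascript
// Hypothetical sketch of full-page-cycle measurement. The first line
// would run inline as the very first script in <head>, before any
// framework code is even downloaded.
var pageStart = new Date().getTime();

function pageCycleTime(responsiveAt) {
  // Full cycle = download + parse + bootstrap + render; this is the
  // cost that "internal" timings let a framework hide.
  return responsiveAt - pageStart;
}
```

A harness would call `pageCycleTime` with a timestamp captured when the grid (or table, or Flex movie) first handles a user action.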
