Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

The Browser Wars: A Style Guide

Dear Tech Journalist and/or Editor:

Thank you for covering the browser market. Many users don't understand that they have a choice of browser, and by discussing the alternatives you help promote a healthy ecosystem and honest competition. In covering this important topic it's easy to be loose with terms, but some shortcuts go a bridge too far. A few are listed here, along with a rubric to help you understand why they make you (and your esteemed publication) seem less interested in hard facts than I'm sure you are.

"JavaScript rendering"
As I'm sure you know, JavaScript (aka "ECMAScript", aka "JScript") is a programming language, not a UI toolkit or rendering technology. Yes, JavaScript drives the UI of many modern web apps like GMail and Google Maps, but it does so through a technology called the DOM. The DOM is not a part of JavaScript; it is instead bolted onto JavaScript by the browsers. "JavaScript rendering" would be a nonsensical thing to say even if you were describing the time it takes to build up a user interface. But I rarely (if ever) see such a story. Instead, this rhetorical abomination most often shows up in discussions of JavaScript benchmarks. These benchmarks work very hard to ensure that they aren't affected by any DOM or UI operations. They test everything but rendering. In your defense, there is a strong correlation between faster JavaScript execution and faster rendering, but they are not the same thing. Best to just stay out of this particular gutter.
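To make the distinction concrete, here is a minimal sketch of the kind of work these benchmark suites actually measure. The function names and the workload are illustrative, not taken from any real suite: pure JavaScript execution (loops, math, function calls), with the DOM never touched, so the timing says nothing about rendering.

```javascript
// A toy micro-benchmark in the spirit of JS benchmark suites: it exercises
// pure JavaScript execution and never touches the DOM, so it measures
// "JavaScript performance", not rendering.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

function benchmark(fn, arg) {
  const start = Date.now();
  const result = fn(arg);
  const elapsed = Date.now() - start;
  return { result, elapsed };
}

const run = benchmark(fib, 20);
// run.elapsed captures JavaScript execution time only; building and laying
// out a UI from this data would be a separate (DOM rendering) cost.
```

A "DOM rendering" measurement, by contrast, would time operations like building and inserting thousands of nodes into a live document.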

Acceptable alternatives: "JavaScript execution", "JavaScript performance", "DOM rendering" (but only when discussing things that measure DOM performance).

"Plugin"
Strictly speaking, a browser plugin is a bit of native code (written in C or C++) that speaks a particular set of ActiveX and NPAPI interfaces and registers itself with browsers in a particular way. This definition might as well be written as "plugins are magic". The best known items of this class are Flash and Silverlight.

What you need to know is that there is an emerging class of things that users can install into their browsers which are similarly magical but which are not plugins. These things go by different names: "extensions", "add-ons", and (confusingly) "toolbars". I'm sure there will be others. You can think of these things as being interchangeable with each other but not with "plugins". So how do you tell which is which? A good rule of thumb is that if a web page works fine without you installing it, it's an extension. Otherwise, if you need to install something for the page to work, it's a plugin.

Acceptable alternatives: "extensions" (preferred), "add-ons", "toolbars" (overly specific, may confuse).

"HTML 5 support"
This is one for the nag file since you'll need to revisit this topic in the future. The important thing for now is to be cognizant that there isn't yet a real "HTML 5". Yes, there are various drafts, and yes, some browsers are doing a great job of implementing these new features ahead of formal standardization. But it's not done yet. Saying today that something is an "HTML 5 application" or that a browser has "HTML 5 support" will cause you problems. Nobody wants to explain how what was touted as being "standard" one day became "proprietary" the next. The safest course of action here is to simply talk about "the upcoming HTML 5 standard" or "advanced web applications". HTML 5 is a powerful brand and there's going to be an enormous amount of haggling over its meaning for years to come. Best that discussion not include references to your stories.

Regards,

Alex Russell

SPDY: The Web, Only Faster

Of all the exciting stuff that's happening at Google, one of the things I've been most excited about is SPDY, Mike Belshe and Roberto Peon's new protocol that upgrades HTTP to deal with many of the new use-cases that have strained browsers and web servers in the last couple of years.

There are some obvious advantages to SPDY; header compression means that things like cookies get gzipped, not just content, and multiplexing over a single connection with priority information will allow clients and servers to cooperate to accelerate page layout based on what's important, not only what got requested first.

But the really interesting stuff from my perspective is the way SPDY enables server push both for anticipated content and for handling Comet-style workloads. The first bit is likely to have the largest impact for the largest set of apps. Instead of trying to do things like embed images in data: URLs -- which punishes content by making it uncacheable -- SPDY allows the server to anticipate that the client will need some resource and preemptively begin sending it without changing the HTTP-level semantics of the request/response. The result is that even for non-cached requests, many fewer full round trips are required when servers are savvy about what the client will be asking for. Another way to think about it is that it allows the server to help pre-fill an empty cache. Application servers like RoR and Django can know enough about what resources a page is likely to require to begin sending them preemptively in a SPDY-enabled environment.

The results in terms of how we optimize apps are nothing short of stunning. Today we work hard to tell browsers as early as possible that they'll need some chunk of CSS (per Steve's findings) and try to structure our JavaScript so that it starts up late in the game because the penalty for waiting on JS is so severe (blocked rendering, etc.). At the very edge of the envelope, this often means inlining CSS and accepting the penalty of not being able to cache things that should likely be reusable across pages. On most sites, the next page looks a lot like the previous one, after all. When implemented well, SPDY will buy us a way out of this conundrum.
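For a feel of the inlining workaround SPDY makes unnecessary, here is a sketch (Node-style, using Buffer for base64; the bytes and the /logo.png name are illustrative) of embedding a resource as a data: URL. The bytes ride along with every page that includes them, so the browser can never cache the resource separately:

```javascript
// Encode raw bytes as a data: URL so they can be embedded directly in markup.
// This saves a round trip but defeats caching: every page re-ships the bytes.
function toDataUrl(mimeType, bytes) {
  const base64 = Buffer.from(bytes).toString('base64');
  return `data:${mimeType};base64,${base64}`;
}

// A stand-in for image bytes (the first four bytes of the PNG signature;
// not a complete image -- illustration only).
const fakePng = Buffer.from([0x89, 0x50, 0x4e, 0x47]);
const inlined = `<img src="${toDataUrl('image/png', fakePng)}">`;
// A SPDY-savvy server could instead push the real resource once, alongside
// the page, and let the cache hold it for every subsequent page.
```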

And then there are the implications for Comet workloads. First, SPDY multiplexes. One socket, many requests. Statefully. By default. Awwwww yeah. That means that a client that wants to hang on to an HTTP connection (long polling, "hanging GET", <term of the week here>) isn't penalized at the server since SPDY servers are expected to be handling stateful, long-lived connections. At an architectural level, SPDY forces the issue. No one will be fielding a SPDY server that doesn't handle Comet workloads out of the box because it'll often be harder to do so than not. SPDY finally brings servers into architectural alignment with how many clients want to use them.
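The "hanging GET" pattern mentioned above can be sketched in a few lines. The request function is injected so the sketch stays self-contained; in a browser it would wrap XMLHttpRequest or a similar transport against a real endpoint (the /events URL and the message shape are assumptions for illustration):

```javascript
// Minimal long-polling loop: issue a request, let the server hold it open
// until it has news, deliver the news, and immediately re-issue the request.
async function longPoll(request, onMessage, maxRounds) {
  for (let i = 0; i < maxRounds; i++) {
    // The server parks this request until an event is ready.
    const message = await request('/events');
    onMessage(message);
    // Looping back re-opens the connection so the server can always reach
    // us -- this is the per-client connection cost that SPDY's stateful,
    // multiplexed sockets are designed to absorb.
  }
}
```

With a plain HTTP server, each parked request ties up server-side resources; a SPDY server expects long-lived, stateful connections by default, which is exactly the architectural alignment described above.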

Beyond that, SPDY allows clients to set priority information, meaning that real-time information that's likely to be small in size can take precedence on the wire over a large image request. Similarly, because it multiplexes, SPDY could be used as an encapsulation format for WebSockets, allowing one TCP socket to service multiple WebSockets. The efficiency gains here are pretty obvious: less TCP overhead and lowered potential for unintentional DoS (think portals with tons of widgets all making socket requests). There's going to need to be some further discussion about how to make new ideas like WebSockets work over SPDY, but the direction is both clear and promising. SPDY should enable a faster web both now and in the future.

A Bit of Closure

So from time to time I'd wondered what all the brilliant DHTML hackers that Google had hired were up to. Obviously, building products. Sure. But I knew these guys. They do infrastructure, not just kludges and one-offs. You don't build a product like Gmail and have no significant UI infrastructure to show for it.

Today they flung the doors open on Closure and its supporting compiler. These tools evolved together, and it shows. Closure code eschews many of the space-saving shortcuts that Dojo code employs because the compiler is so sophisticated that it can shorten nearly all variables, eliminate dead code, and even do type inference (based on JSDoc comments and static analysis).
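The JSDoc annotation style the compiler reads looks something like the following sketch (the function and types here are invented for illustration). The annotations are plain comments, so the code runs unchanged in any JavaScript engine; only the compiler gives them meaning, using them to type-check calls and to rename and eliminate symbols aggressively:

```javascript
/**
 * Builds a greeting repeated a fixed number of times.
 * @param {string} name Who to greet.
 * @param {number} count How many repetitions.
 * @return {string} The combined greeting.
 */
function repeatGreeting(name, count) {
  /** @type {Array.<string>} */
  var parts = [];
  for (var i = 0; i < count; i++) {
    parts.push('Hello, ' + name + '!');
  }
  return parts.join(' ');
}
```

Given annotations like these, a sufficiently aggressive compilation pass can flag a call like `repeatGreeting(5, 'oops')` at compile time and shorten every local name in the output, which is why Closure code can afford to be verbose in source form.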

There's a ton of great code in Closure, so go give the docs a look and, if you're into that kind of thing, read the official blog post for a sense of what makes Closure so awesome.

It's interesting to me how much it feels like a more advanced version of Dojo in many ways. There's a familiar package system, the widgets are significantly more mature, and Julie and Ojan's Editor component rocks. The APIs will feel familiar (if verbose) to Dojo users, the class hierarchies seem natural, and Closure even uses Acme, the Dojo CSS selector engine. It's impressive work and congrats are in order for Arv, Dan, Emil, Attila, Nick, Julie, Ojan, and everyone else who worked so hard to build such an impressive system and fight to get it Open Source'd.

WebKit, Mobile, and Progress

PPK posted some great new compat tables for various flavors of WebKit-based browsers the other day, editorializing that:

...Acid 3 scores range from a complete fail to 100 out of 100.

This is not consistency; it’s thinly veiled chaos.

But I'm not convinced that the situation is nearly that bad.

The data doesn't reflect how fast the mobile market changes. The traditional difference between mobile and desktop, after all, has been that mobile is moving at all. If you figure a conservative 24 month average replacement cycle for smartphones, then the entire market for browsers turns over every two years. And that's the historical view. An increasing percentage of smartphone owners now receive regular software updates that provide new browsers even faster. What matters then is how old the WebKit version in a particular firmware is and how prevalent that firmware is in the real world. As usual, distribution and market share are what matters in determining real-world compatibility, and if that's a constantly changing scenario, the data should at least reflect how things are changing.

So what if we add a column to represent the vintage of the tested WebKit versions? Here's a slightly re-formatted version of PPK's summary data, separated by desktop/mobile and including rough WebKit vintages (corrections and new data much appreciated if you happen to know!):

Desktop

Browser                         Score (max 216)   Vintage
Safari 4.0                      204               2009
Chrome 3                        192               2009
Chrome 2                        188               Early 2009
Safari 3.1                      159               2008
Chrome 1                        153               Early 2008
Safari 3.0                      108               2007
Konqueror 3.5.7                 103               2007
Konqueror (newer, untested)     0                 ??

Mobile

Browser                         Score (max 216)   Vintage
Ozone (version?)                185 (?)           Late 2009
iPhone 3.1                      172               2009
Iris (version?)                 163 (??)          2008
JIL Emulator (version?)         162               ??
Bolt (version?)                 155               ??
iPhone 2.2                      152               2008
Android G2 (version? 1.6?)      144 (??)          Late 2008
Palm Pre (version?)             134               ??
Android G1 (1.5?)               108 (??)          2008
Series 60 v5                    93 (??)           2008
Series 60 v3 (feature pack?)    45                2005

PPK's data is missing some other columns too, namely a rough estimate of the percent of mobile handsets running a particular version, rates of change in that landscape over the past 18 months, and whether or not these browsers are on the whole better than the deployed fleet of desktop browsers. Considering that web devs today still can't target everything in Acid2, knowing how the mobile world compares to desktops will provide some much-needed context for these valuable tables. Perhaps those are things that we as a community can chip in to help provide.

Even without all of that, just adding the rough vintages adds an arc to the story; one that's not nearly so glum and dreary. What we can see is that newer versions of WebKit are much more capable and compatible, even at the edges. None of PPK's data yet tests where the baseline is, so remember that the numbers presented mostly describe new-ish features on the platform. We also see clearly that the constraints of the mobile environment force some compromises vs. desktop browsers of the same lineage. This is all in line with what I'd expect from a world where:

The important takeaway for web developers in all of this is that WebKit is winning and that that is a good thing. The dynamics of the marketplace have thus far ensured that we don't get "stuck" the way we did on the desktop. That is real progress.

Where do we go from here? Given that the mobile marketplace is changing at a rate that's nearly unheard of on the desktop, I think that when new charts and comparisons are made, we'll need to couch them in terms of "how does this affect the difference in capabilities across the deployed base", rather than simply looking at instantaneous features. Mobile users are at once more likely tied to their OS's choice of browser and more likely to get a better browser sooner. That combination defies how we think about desktop browsers, so we'll need to add more context to get a reasonable view of the mobile world.

More Orthodox Heresy