Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

What's Missing in Deer Park

Mozilla has released Alpha 2 of the next version of Firefox, and there's a lot of good stuff in it. On the whole, it feels faster, and tons of exciting stuff has already landed in this release.

Aside from E4X, it's hard to see how Deer Park is going to make my life much better as a DHTML developer, though. The story on rich-text editing is still grim. <canvas> is a disaster whose impact is only lessened by the inclusion of a minimal SVG subset. The platform is improving overall, but XUL apps still require enduring an RDF mind-fuck in order to get off the ground. Overall, you get the feeling that Deer Park is two steps sideways, one step forward.

But it wouldn't take much to change that from the perspective of the DHTML, née "Ajax", developer. Here are a few simple-ish things that would make our lives immeasurably better:

  1. Local string caching from script

    This is a strange name for a very, very powerful capability: caching. Webdevs today need a coherent, simple, and workable local cache which can be accessed from script like any other object/hash and which stores only strings. Such a cache needs only a couple of properties to be wildly successful. Firstly, it should be subject to the exact same-domain policy that cookies currently are. Secondly, tell me (from script) how big the cache is and how much space is still free. Thirdly, let us specify expiry and flushing policies for it from script (not via HTTP headers). But please, whatever you do, do NOT over-think this problem.
  2. Event-transparency in the z-order and positioned event generation

    Allow developers to specify that a particular DOM node is "transparent" to DOM events. Which is to say, it passes them "down" the z-index stack without triggering a dispatch or bubble sequence in any way. This should probably be handled by a "-moz-" prefixed CSS attribute. Related to this, let us generate DOM events at a set of coordinates without knowing ahead of time what the node we want to route the event to happens to be. Both of these capabilities are amazingly important for a whole set of graphical apps that Deer Park is about to make available.
  3. Improve Venkman's profiling output

    Venkman is the most powerful tool in the professional DHTML developer's toolkit today, but it's crippled when it comes to outputting useful data from its profiling tool. Either try harder to assign names to anonymous functions or at least make the output something that can be loaded by gprof. The current situation is amazingly painful, but since it's the only game in town, we live with it.
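To make the first item concrete, here's a minimal sketch of the API shape such a cache might expose. Every name here (makeStringCache, set/get/remove, size, free) is invented for illustration — nothing like this exists in any browser today, which is the whole point:

```javascript
// Hypothetical string cache with a quota, script-visible free space,
// and script-controlled expiry — the three properties the post asks for.
function makeStringCache(maxBytes) {
  var store = {};      // key -> { value, expiresAt }
  var usedBytes = 0;
  return {
    set: function (key, value, ttlMs) {
      if (typeof value !== "string") throw new Error("strings only");
      if (store[key]) { usedBytes -= store[key].value.length; delete store[key]; }
      if (usedBytes + value.length > maxBytes) throw new Error("cache full");
      store[key] = {
        value: value,
        expiresAt: ttlMs ? Date.now() + ttlMs : Infinity  // expiry set from script
      };
      usedBytes += value.length;
    },
    get: function (key) {
      var entry = store[key];
      if (!entry) return null;
      if (Date.now() > entry.expiresAt) { this.remove(key); return null; }
      return entry.value;
    },
    remove: function (key) {
      if (store[key]) { usedBytes -= store[key].value.length; delete store[key]; }
    },
    flush: function () { store = {}; usedBytes = 0; },  // flushing policy, from script
    size: function () { return usedBytes; },            // "how big the cache is"
    free: function () { return maxBytes - usedBytes; }  // "how much space is still free"
  };
}
```

The same-domain restriction isn't modeled here; in a real browser it would be enforced by the host, not by script.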
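The event-transparency idea in item 2 boils down to a hit-testing rule: walk the z-order from top to bottom and skip any node flagged transparent. A sketch of that rule, using an invented plain-object node shape in place of real DOM nodes (a real implementation would live inside the browser's event dispatcher):

```javascript
// Given nodes sorted from highest z-index to lowest, return the first
// node under (px, py) that is not event-transparent, or null if none.
function routeEventAt(nodes, px, py) {
  for (var i = 0; i < nodes.length; i++) {
    var n = nodes[i];
    var hit = px >= n.x && px < n.x + n.w &&
              py >= n.y && py < n.y + n.h;
    // A transparent node passes the event "down" the stack untouched.
    if (hit && !n.eventTransparent) return n;
  }
  return null; // nothing under the point accepts events
}
```

The second half of the request — generating a DOM event at coordinates without naming a target node up front — is just this lookup followed by a normal dispatch/bubble sequence on whatever node comes back.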
I know they don't sound like big features, but they're tremendously important to the kinds of rich applications that Deer Park is supposed to be enabling. The big push now is to keep Microsoft from breaking the web with XAML, and I think we all know it. Small, obviously beneficial changes like these go a long way toward maintaining and improving the viability of the open web as a platform.

Toward server-sent data w/o iframes

So this past week at OSCON, I had some discussions with people about what comes after XMLHTTP for sending data to and from web browsers with low latency. Traditional polling schemes really, really suck, and the alternatives are all giant hacks. Having re-written the mod_pubsub JavaScript client last year, I'm pretty familiar with what you can and can't do to and from a browser.

Or at least I thought I was.

At OSCON, I fortuitously met Johnny Stenback, who informed me that you could indeed get multiple replies from a single request under Mozilla, after I'd stated otherwise in a session. Today I started to dig a bit based on his direction, and the results are interesting.

But they're not really interesting if you don't have the back story. Previously, I had thought that the only way to keep a connection open and do multiple things with it in a cross-browser way was to rely on an incremental rendering hack (which is how the mod_pubsub client works). In this scenario, the client opens up an iframe and points it at a special page that doesn't close the connection. When the server has data, it synthesizes a <script> block which gets sent to the client (sometimes with some padding), which then evaluates it thanks to a "partial rendering" behavior that seems to be part of every browser since time began. The script loaded this way then calls back up to the parent page (on a special function), which then disseminates the event data to everything that had previously registered its interest. This works pretty well, but falls into the well-populated "giant, gratuitous hack" category of useful techniques for doing things in browsers.
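The parent-page plumbing described above can be sketched in a few lines. The names here (subscribe, deliverEvent) are invented for illustration and don't match mod_pubsub's actual client API:

```javascript
// Registry of callbacks keyed by topic; this lives in the parent page.
var listeners = {};

function subscribe(topic, fn) {
  (listeners[topic] = listeners[topic] || []).push(fn);
}

// Each server-synthesized <script> block in the hidden iframe
// effectively runs something like:
//   parent.deliverEvent("/chat", { text: "hi" });
// The parent then fans the payload out to everyone who registered interest.
function deliverEvent(topic, payload) {
  var fns = listeners[topic] || [];
  for (var i = 0; i < fns.length; i++) fns[i](payload);
  return fns.length; // how many listeners saw the event
}
```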

Something better would be much appreciated. For a long time, DHTML hackers have eyed Microsoft's XMLHTTP docs with some envy, as there is a stream type and an interactive mode which aren't made available to scripting, and there hasn't even been that much promised under Mozilla et al. So back to iframe hacks we fell. But things are looking up.

It seems that it's now possible to send chunked data to the client in Mozilla on a long-lived connection, and it might very well be possible to do the same thing for IE.
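A sketch of what the client side of such a stream might look like, assuming a newline-delimited wire format (which, as noted below, is far from settled). Each time more of the long-lived response arrives, the parser re-scans the buffer and emits only newly completed messages:

```javascript
// Incremental framing for a growing response buffer. Returns a feed
// function to be called with the full responseText each time new data
// arrives (e.g. on each readyState 3 notification).
function makeStreamParser(onMessage) {
  var seen = 0; // how far into responseText we've already parsed
  return function (responseText) {
    var chunk = responseText.substring(seen);
    var lastNewline = chunk.lastIndexOf("\n");
    if (lastNewline === -1) return; // no complete message yet
    var complete = chunk.substring(0, lastNewline);
    seen += lastNewline + 1;       // never re-deliver what we've parsed
    var messages = complete.split("\n");
    for (var i = 0; i < messages.length; i++) {
      if (messages[i].length) onMessage(messages[i]);
    }
  };
}
```

The point of tracking `seen` is that the browser hands you the whole buffer each time, not the delta, so the parser has to remember its own high-water mark.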

Unfortunately, unlike the iframe hack, it looks like the format on the wire will be different between the browsers (and things like Safari might still require iframes, which is yet another code branch). Whether or not any of this is a better trade-off than doing something like LivePage or the current mod_pubsub implementation is still up for debate, and only trial-and-error will really tell.

Of course, the biggest hurdle is still the scalability of servers holding long-lived "zombie" connections, under which things like Java servlet containers completely melt, but Twisted Python (which mod_pubsub is now based on) indicates the way forward. I think it won't be too long before real-time monitoring and chat features become commonplace in the Web2.0 experience.

It'll just take another set of sexy apps to point the way forward. And a lot of elbow grease.

Misc., etc.

I'm back in San Francisco after 7 days in Portland, and it's good to be home. Dylan and I blogged some of our observations from the conference over at the brand-spanking-new Dojo blog.

We'll be using the Dojo blog as a place to keep people informed about what's going on with the project, including major announcements like incompatible API changes and the like.

OSCON was great. The folks at ORA really know how to run a conference, and Portland is a great city. I got to meet lots of really wonderful people who are doing exciting stuff. It seems to me now that the biggest problem for me is going to be trying to follow up on all the work and connections that came out of the conference.


So I've been in Portland for 2 days now, attending a FLOSS "summit" thinger for many of the various Open Source foundations that are working to help hackers keep their sanity by doing the dirty work that committers don't want to do. The conversations have been interesting, informative, and eye-opening. I feel amazingly lucky to have been invited.

The main OSCON stuff starts tomorrow, and I'm still hacking away at the examples to go with my tutorial on Tuesday. Don't even ask about Friday's talk. Ugh. But I'm still terribly excited. A lot of people are going to be showing off some amazingly neat apps built on Dojo this week, and while I won't steal any of their thunder, I can vouch for the kick-ass-ness of these things.

Frankly, I'm shocked at what people are doing with the platform, especially given that we don't have coherent docs. More on that later.

No rest for the unprepared

Hot on the heels of OSCON, I'm going to be speaking at SDForum's Emerging Tech SIG. The talk will be more Dojo-specific than the others, so if you're in the Bay Area and have been wondering how in the world this stuff can help you and your apps, this is as good a chance as any to pepper me with questions.

Hope to see you there!
