I finally blogged the work I've been doing on dojo.query. As you can guess, it's a CSS query engine for Dojo, and as you can probably also guess, it's fast. Or at least as fast as you can make a system that needs to hoist itself by its own petard thanks to the network latency of sending the darned thing down the wire every time. As web developers continue to expect ever more out of their tools, it's becoming clear to me that the dynamics of the "Ajax platform" are a deck that's heavily stacked against novices to the advantage of experts. For a long time we've been trying with Dojo to help tilt that balance back in favor of those who haven't co-evolved their skills to suit the peccadilloes and vagaries of the web. But I don't know that we (or anyone else) are really succeeding at it. The "Other Language to JavaScript" compiler crowd is building a very leaky abstraction that is going to take years to get right. Meanwhile, folks trying to do "something interesting" tend to look at the current state of play in pure-browser apps and run headlong into the arms of Macrodobe or Microsoft. Flex is a surprisingly easy sell for organizations that don't have the health of the web in mind.
Ajax is getting us "old apps ++", but even with Comet in the stew, too much complexity is still laid at the feet of end developers. Complexity == cost. Cost == opportunity. Is it any wonder that the most interesting startups today are either retreads of old concepts or simple integration of new data paths into the old ones?
So long as it's still this hard for everyone to build web apps that don't contain nasty surprises, the web's formidable advantage in openness will always be at risk of being dominated by other factors. As the Linux world has painfully discovered, being open isn't a guarantee of value to everyone you wish to entice. It's an opening gambit in convincing people of top-to-bottom value and not even a strong one.
I'm starting to focus more and more on the "sharp edges" of the web development experience because I find that web developers discount mental overhead and workflow impediments without much thought to the real costs. Jot's design struck a chord with me precisely because it rounded off some of those sharp edges: the hours of database and web server setup. It's those kinds of assumed costs that keep tripping us up. They're fine for organizations that want to be experts or think they can get competitive advantage out of it, but how is it good for everyone else?
One of the big goals with dojo.query wasn't only to be relatively fast compared to the other query engines you can choose from, but also to help even out the performance peaks and valleys. It's still possible to write poorly performing queries with dojo.query, but for the most part you're going to have to do more typing to get a slow query than a fast one. It may not be the kind of thing that webdevs will notice, per se, but rounding off the sharp edges is an exercise in usability: things are only usable when they do what you expect them to. A system that hurts you more than you expect isn't usable.
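To make that "more typing for a slow query" point concrete, here's a sketch of what the two shapes tend to look like. The selector strings are my own illustrative assumptions, not examples from the dojo.query docs:

```javascript
// An illustrative "fast" query: rooted at an ID, so an engine can start from
// getElementById() and only scan a small subtree.
var fast = "#header .navItem";

// An illustrative "slow" query: unrooted attribute match plus a universal
// child selector, which tends to force a walk over much of the document.
var slow = "div[foo~='bar'] > * span.odd";

// In a page with Dojo loaded, both go through the same call, e.g.:
//   dojo.query(fast).forEach(function(n){ n.style.display = "none"; });
```

Note that the slow query really is the longer one to type, which is the shape of the incentive described above.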
I'm not sure that I have a point with all of this other than to hope that others will either point out a great big hole in my logic or, like me, start to work on rounding off those edges. If I'm right, the web needs us to continually question our sacred cows and sunk costs. Openness might not have strong value on an individual basis, but in the aggregate it's important for everyone.
Jennifer and I were lucky enough to be able to visit Hong Kong for the last week and change, and I've gotta say that it's one of the best places I've visited. On the way there, the camera we brought broke, and so while we picked up another one, all the pictures we took are on Jennifer's Flickr stream.
We had a great time and Jennifer's talk was very well received. Getting around Hong Kong is surprisingly simple given how tortured the street layouts are. The service at hotels there is like nothing else, and the city is breathtaking in scale. We had some of the best Thai, Cantonese, and dim sum we've ever tasted, thanks in large part to a little guide book we picked up in a book shop in Kowloon. Also, if you get to Hong Kong and you need your daily dose of dorkiness, check out the Science Museum. Their 4-story-tall Rube Goldberg device is something you don't easily forget.
If you were trying to get ahold of me in the last couple of weeks, I'll try my best to get back to you soon.
So there's a new MSIE web developer toolbar out and about. Like the last version, there's a lot of good stuff in there. It's not Firebug, but nothing else is. Anyway, it does lots of stuff you want; in particular, it replicates the fast path for cache clearing that the Mozilla Web Developer Toolbar (which the MSIE one is a clone of) provides, and goes one better by promoting it to a top-level button.
And then it gives you a confirmation dialog. I shit you not.
Removing one click: good.
Leaving another (useless and impossible to disable) click in the web-dev fast path for a non-destructive operation (it's just a cache after all): stupid.
As things ramp up for the Comet Developer Day and the Dojo Developer Day(s), there's been a ton of activity in Dojo-land. Here's a selected sample:
- SitePen begins to offer Dojo Training to the public! Sign up here. Since we employ a huge percentage of the committer base and fund significant new development on the toolkit, you'll know you're getting it "from the horse's mouth".
- Shane O'Sullivan of IBM has made a first public release of his GUI Dojo build tool.
- The second part in the SitePen performance blog series is up (part 1).
- Dojo 0.4.1 blew past the 100K download mark (now at ~180K) and continues its march to becoming the most successful Dojo release ever.
- And not to tease too much, but there's stuff coming down the pike that I'm tremendously excited about. More news on that after 3D2.
The Dojo Bayeux client implements a bunch of different "transports" and tries to pick the right one based on what the browser can support, the cross-domain requirements, and so forth. When we started down this path, most of the reason for doing this was to implement both the forever-frame and long-polling styles of Comet as well as providing a platform to experiment with alternate transport types (e.g., Flash sockets). One of the most promising of these experiments took advantage of the multipart mime support that's been tucked away in the Mozilla codebase for quite a while. What follows is one of those stories that makes people assume that I'm crazy to do what I do for a living. They might be right.
Multipart is attractive because it provides a way of avoiding TCP set-up and tear-down for each and every event across the channel. While that's not significant overhead (comparatively), reducing the number of HTTP header blocks sent can also help when it comes to wringing latency out of the system. The code indicates that multipart is supported on Safari and Mozilla, but while events are indeed delivered at the right times on Safari, you can't get at the payload until the connection closes completely. Not useful.
Things were looking better on Firefox and it was the preferred transport type in the Dojo client, but I think that's going to have to change. Sadly, it seems we can't actually tell when a multipart connection has failed. In "normal" XHR requests, the 200 HTTP status code plus a "finished" readystate indicates that the contents of the request can be read and control handed back. In the multipart case, each successful block fires off a load handler and resets the readystate. That means that the combination of readystate and status can't be used to differentiate between block success and connection success. Making matters worse, server-side connection failure doesn't fire any kind of readystatechange handler, and even if it did, it doesn't appear to be possible to determine if the connection is closed from any of the public properties on the object.
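The ambiguity can be sketched like this. The property names mirror the standard XHR ones, but the "snapshot" objects and scenario values are illustrative assumptions, not captured traces:

```javascript
// The usual "is this request done?" check for a normal XHR request.
function looksFinished(snapshot) {
  return snapshot.readyState === 4 && snapshot.status === 200;
}

// After a successful multipart *block*, the load handler fires and the
// readystate is reset — so a mid-stream block is indistinguishable, by these
// fields alone, from a connection that has actually finished:
var afterBlock      = { readyState: 4, status: 200 }; // one event delivered, stream still open
var afterConnection = { readyState: 4, status: 200 }; // server closed the stream
```

Both snapshots pass the `looksFinished` test, which is exactly why readystate plus status can't distinguish block success from connection success.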
So, OK, what about falling back to a timer that restarts the connection every N seconds for good measure? This might work in cases like failover where a lag of 10 to 30 seconds might be acceptable, but not for normal operation. Should events flow regularly, it might never be necessary to hit this "backstop". Not great, but I gave it a try, only to discover that Firefox won't give you responseText of an XHR request if the connection is marked as multipart but the response isn't a 200 and wrapped in a multipart block. Since we're trying to use HTTP status codes correctly and keep the server internals from needing to fork significantly for each pluggable transport, it's something of a step backward to need this kind of hand-holding.
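For the curious, the "backstop" idea is just a dead-man's-switch timer: assume the stream has died unless an event pushes the deadline out. A minimal sketch, with names (`Backstop`, the `reconnect` callback) that are my own and not part of cometd.js:

```javascript
// Dead-man's-switch reconnect timer: if no event arrives within intervalMs,
// assume the stream is dead and invoke reconnect() to tear down and re-open.
function Backstop(intervalMs, reconnect) {
  this.intervalMs = intervalMs;
  this.reconnect = reconnect;
  this.timer = null;
}
Backstop.prototype.start = function () {
  var self = this;
  this.timer = setTimeout(function () {
    self.reconnect(); // deadline passed with no events: restart the connection
    self.start();     // and re-arm the backstop for the new connection
  }, this.intervalMs);
};
Backstop.prototype.eventArrived = function () {
  clearTimeout(this.timer); // the stream is alive; push the deadline out
  this.start();
};
Backstop.prototype.stop = function () {
  clearTimeout(this.timer);
};
```

The cost described above is visible in the shape of this sketch: during a real outage, nothing happens until the full interval elapses, which is why it's tolerable for failover but not as the primary liveness signal.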
I'd still like to support the multipart transport type, but until at least one of the implementations becomes rational for use from the XHR object, I think I'm going to just be commenting this transport out in cometd.js. Like XHR itself, it's one to mark down for resurrection a year or two from now. At least we still have good enough options in the meantime.