Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

View-Source Follow-Up

One of the points I made during last Saturday's panel was that the further down the path we go with JavaScript, the more pressure there is to use it in ways that defeat view-source. Brendan called me out for seemingly not being aware of beautifiers, which can help correct the imbalance, but I think that response (while useful) is orthogonal to the thrust of my argument.

Indeed, I hadn't personally used the beautifiers in question, instead using my own hacked-up copy of Rhino to beautify when I need to, but neither approach sufficiently reverses the sorts of optimizations being employed by GWT and the even more aggressive Closure Compiler. Far from the mere obfuscation of ShrinkSafe and its brethren, these new compilers apply multi-pass, AST-based transforms to an entire body of code, performing per-browser optimizations, type inference and annotation-based optimizations, loop-invariant hoisting, function inlining, dead-code removal, and global code-motion optimizations that produce code different not only in style but in flow of control. The results are nothing short of stunning. The Closure Compiler can deliver code that's much, much smaller than I can wring out by hand and that performs better to boot. It's also totally unrecognizable. De-obfuscators have no hope in this strange land -- brand new tools akin to symbol servers and WinDbg-style debuggers are needed to work with the output in a meaningful way. I argued in the panel and in the comments of my last post on the topic that when we get to this place with JavaScript, the product is functionally indistinguishable from a bytecode format like SWF or Java, and the effects on the learning-lab nature of the open web are the same: less ability to easily share techniques, a smaller group of more professional users, and a heavier reliance on tooling for generating content.
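To see why a beautifier can't undo these transforms, consider a minimal hand-written sketch of what inlining, constant folding, dead-code removal, and renaming do to a trivial program. This is an illustration of the class of transform, not actual Closure Compiler output:

```javascript
// Hand-written source: small helpers, one of which is never called.
function area(w, h) { return w * h; }
function unusedHelper(x) { return x + 1; } // dead code, safe to delete
function label(w, h) { return "area: " + area(w, h); }
var msg = label(3, 4);

// What an aggressively optimizing compiler might emit instead:
// `unusedHelper` is deleted, `area` and `label` are inlined,
// the arithmetic is folded at compile time, and every name is
// renamed. The call structure of the original is simply gone.
var a = "area: 12";

// Behavior is identical, but nothing of the source survives,
// so pretty-printing the output can't recover the original.
console.log(msg === a); // true
```

A beautifier can restore whitespace, but not function boundaries and names that no longer exist in the output.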

If we assume that the furthest down the code-centric path we'll go are the Dojo- and jQuery-style augmentations of existing content, then a simple de-obfuscator is sufficient. But I'm afraid the current generation of high-end sites and apps points in a different direction, one that is less hopeful, and one that implies to a greater extent that the browsers must lead the way out of the wilderness by creating new tags and CSS properties to help re-democratize the process of creating applications. We've already seen the alternatives, and while they may be elegant, they lack leverage and the second-order beneficial effects that have made the web as successful as it is today.

If HTML is just another bytecode container and rendering runtime, we'll have lost part of what made the web special, and I'm afraid HTML will lose to other formats by willingly giving up its differentiators and playing on their turf. Who knows, it might turn out well, but it's not a vision of the web that holds my interest.

SxSWi '10 Reflections

I first attended SxSWi amidst the rubble of the dot-com crash, a time when the interactive festival filled only one hallway of the third floor of the Austin Convention Center. It's changed a lot since then, mostly in scale.

The lack of technical content is something I've bemoaned in years past but have finally come to accept. I was grateful to be on an excellent panel this year that touched on some topics that I both think and care a lot about. Our panel was also blessed with amazing audience engagement from people I respect. I chalk most of that up to Michael Lucaccini and Chris Bernard's excellent prep and panelist selection. Any panel with Chris Wilson and Aza Raskin on it is going to be good.

The explosion of SxSWi has not been a good thing, and I went in the hope that contraction had started as the economic disaster crimped budgets. Guess not. SxSWi was bigger than either the music or movie portions of the conference for the first time this year. Others have commented insightfully on the problems of scale, so I'll spare you the rundown of what makes an enormous conference uninviting, but suffice to say it seems like SxSWi has gone over some crucial limit and will continue its inexorable expansion until something gives in a dramatic way. Gravity cannot be reasoned with.

Unlike some of those who found themselves post-hoc disappointed, I really didn't think there was much chance of having a good time. Luckily I was wrong -- not so much because it suddenly got better than in '08, the last time I went -- but because I had learned how to cope better with the scale. My brother lives near Austin and getting to hang out with him made the entire experience better. I also employed a series of strategies that helped me have an experience that I'd gladly repeat.

I think all of this implies that folks who haven't been to SxSWi before aren't going to be able to have the same sort of open, trusting experience that I had when I first started attending, and that's a real loss; but at least I now feel like I can go and have a good and productive time. I'm grateful to have gone this year and I'm looking forward to next year already.

dojo.connect: Online Dojo Conference, Feb 10-12

It's been a rough year (or two) in the tech industry, and conference budgets aren't what they once were. Dustin Machi's doing his bit to keep the Dojo community connected by starting a fully virtual set of conferences, the first of which is three days full of Dojo goodness -- dojo.connect.

I'll be there virtually and I hope you can join us. The lineup is spectacular, and I can't think of a more concentrated way to get in touch with the community short of becoming a committer.

Dojo: Twice As Fast When It Matters Most

Some folks have noticed a new landing page for Dojo, one that includes hard numbers about the performance of Dojo vs. jQuery. Every library trades away some speed in order to provide better APIs, but JavaScript toolkit performance shootouts obscure that reality more often than not. After all, there would hardly be a need for toolkits if the built-in APIs were livable. Our new site isn't arguing that Dojo gives you the fastest possible way to do each of the tasks in the benchmark; we argue only that we provide the fastest implementation that you'll love using.

Smaller is better.

I gathered the numbers and stand behind them, so let me quickly outline where they come from, why they're fair, and why they matter to your app.

I took the average of three separate runs of the TaskSpeed benchmark, comparing the latest versions of both Dojo and jQuery. The numbers were collected on isolated VMs on a system doing little else. You may not be able to reproduce the exact numbers, but across a similar set of runs, the relative timings should be representative.

So why is TaskSpeed a fair measuring stick? First, it does representative tasks, and the runtime harness is calibrated to ensure statistically significant results. Second, the versions of the code for each library are written by the library authors themselves. The Dojo team contributed the Dojo versions of the baseline tasks and the jQuery team contributed theirs. If any library wants to take issue with the tests or the results, they only need to send Pete a patch. Lastly, the tests run in relative isolation in iframes. This isn't bulletproof -- GC interactions can do strange things and I've argued for longer runs -- but it's pretty good as these things go. I took averages of multiple runs in part to hedge against these problems.
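The averaging itself is simple. A hypothetical harness sketch (the names here are illustrative, not taken from the real TaskSpeed code) times a task several times and reports the mean, which damps out one-off noise like a GC pause landing in a single run:

```javascript
// Hypothetical sketch: average the wall-clock time of several runs
// of a task. A single run can be skewed by a GC pause or other
// one-off interference; the mean of several runs is more stable.
function averageRuns(task, runs) {
  var total = 0;
  for (var i = 0; i < runs; i++) {
    var start = Date.now();
    task();                      // run the task under test
    total += Date.now() - start; // accumulate elapsed milliseconds
  }
  return total / runs;           // mean time per run, in ms
}

// Usage: average three runs of a busy-loop stand-in for a real task.
var mean = averageRuns(function () {
  for (var j = 0; j < 1e5; j++) {}
}, 3);
```

Longer runs and more repetitions tighten the estimate further, which is why I've argued for them.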

The comparison to jQuery is fair on the basis of syntax and market share. If you compare the syntax used for Dojo's tests with the jQuery versions, you'll see that they're similarly terse and provide analogous conveniences for DOM manipulation, but the Dojo versions lose the brevity race in a few places. That's the price of speed, and TaskSpeed makes those design decisions clear. As for market share, I'll let John do the talking. It would be foolish of me to suggest that we should be comparing Dojo to some other library without simultaneously suggesting that his market share numbers are wrong; and I doubt they are.

Given all of that, do the TaskSpeed numbers actually matter for application performance? I argue that they do for two reasons. First, TaskSpeed is explicitly designed to capture common-case web development tasks. You might argue that the weightings should be different (a discussion I'd like to see happen more openly), but it's much harder to argue that the tests do things that real applications don't. Because the toolkit teams contributed the test implementations, they provide a view to how developers should approach a task using a particular library. It's also reasonable to suspect that they demonstrate the fastest way in each library to accomplish each task. It's a benchmark, after all. This dynamic makes plain the tradeoffs between speed and convenience in API design, leaving you to make informed decisions based on the costs and benefits of convenience. The APIs, after all, are the mast your application will be lashed to.

I encourage you to go run the numbers for yourself, investigate each library's contributed tests to get a sense for the syntax that each encourages, and get involved in making the benchmarks and the libraries better. That's the approach that the Dojo team has taken, and one that continues to pay off for Dojo's users in the form of deliberately designed APIs and tremendous performance.

View-Source Is Good? Discuss.

I've been invited by Chris Messina and some kindly folks at MSFT to participate in a panel at this year's SxSW regarding the value and/or necessity of view-source, and so with apologies to my fellow panelists, I want to get the conversation started early.

First, my position: ceteris paribus, view-source was necessary (but not sufficient) to make HTML the dominant application platform of our times. I also hold that it is under attack — not least of all from within — and that losing view-source poses a significant danger to the overall health of the web.

That's a lot to hang on the shoulders of a relatively innocent design decision, and I don't mean to imply that any system that has a view-source like feature will become dominant. But I do argue that it helps, particularly when coupled with complementary features like reliable parsing, semantic-ish markup, and plain-text content. Perhaps it's moving the goal line a bit, but when I talk about the importance of view-source, I'm more often than not discussing these properties together.

To understand the importance of view-source, consider how people learn. Some evidence exists that even trained software engineers choose to work with copy-and-pasted example code. Participants in the linked study even expressed guilt over the copy-paste-tweak method of learning, but guilt didn't change the dynamic: a blank slate and abstract documentation don't facilitate learning nearly as well as poking at an example and feeling out the edges by doing. View-source provides a powerful catalyst to creating a culture of shared learning and learning-by-doing, which in turn helps formulate a mental model of the relationship between input and output faster. Web developers get started by taking some code, pasting it into a file, saving, loading it in a browser, and hitting ctrl-r. Web developers switch between editor and browser for even the most minor changes. This is a stark contrast with technologies where seeing what you've done requires an intermediate compilation step. In other words, immediacy of output helps build an understanding of how the system will behave, and ctrl-r becomes a seductive and productive way for developers to accelerate their learning in the copy-paste-tweak loop. The only required equipment is a text editor and a web browser, tools that are free and work together instantly. There's no waiting between when you save the file to disk and when you can view the results. It's just a ctrl-r away.

With that hyper-productive workflow as the background, view-source helps turn the entire web into a giant learning lab, and one that's remarkably resilient to error and experimentation. See an interesting technique or layout? No one can tell you "no" to figuring out how it was done. Copy some of it, paste it into your document, and you'll get something out the other side. Browsers recovering from errors gracefully create a welcome learning environment, free of the sense of inadequacy that a compile failure tends to evoke. You can see what went wrong as often as not. The evolutionary advantages of reliable parsing have helped to ensure that strict XML content comprises roughly none of the web, a decade after it was recognized as "better" by world+dog. Even the most sophisticated (or broken) content is inspectable at the layout level, and tools like Firebug and the Web Inspector accelerate the copy-paste-tweak cycle by inspecting dynamic content and allowing live changes without reloads, even on pages you don't "own". The predicate to these incredibly powerful tools is the textual, interpreted nature of HTML. There's much more to say about this, but let's instead turn to the platform's relative weaknesses as a way of understanding how view-source is easily omitted from competing technologies.

The first, and most obvious, downside to the open-by-default nature of the web is that it encourages multiple renderers. Combined with the ambiguities of reliable parsing and semantics that leave room for interpretation, it's no wonder that web developers struggle through incompatibilities. In a world where individual users each need to be convinced to upgrade to the newest version of even a single renderer, differences only in version can wreak havoc in the development process. Things that work in one place may not look exactly the same in another. This is both a strength and a weakness for the platform, but at the level of sophisticated applications, it's squarely a liability. Next, ambiguities in interpretation and semantics mean that the project of creating tooling for the platform is significantly more complex. If only one viewer is prevalent (for whatever reason), then tools only need to consume and generate code that understands the constraints, quirks, and performance of a single runtime. Alternate forms of this simplification include only allowing code (not markup) so as to eliminate parsing ambiguity. The code-not-markup approach yields a potentially more flexible platform and one that can begin to execute content more quickly (as Flash does). These advantages, taken together, can create an incredibly productive environment for experts in the tools that generate content: no output ambiguity, better performance, and tools that can deliver true WYSIWYG authoring. These tools can sidestep the ctrl-r cycle entirely.

But wait, I hear you shout, it's possible to do code-only, toolable, full-fidelity development in JavaScript! Tools like GWT and Cappuccino generate code that generates UI, ensuring that only those who can write code (or have tools that can) will participate, removing the potential value of view-source for those apps. But let's be honest: view-source is nearly never locally beneficial. I can hardly count the number of times I've seen the "how do I hide my code?" question from a web n00b who (rightly or wrongly) imagines there's value in it. For GWT, the fact that the output is an HTML DOM that's styled with CSS is as much annoyance as benefit. The big upside is that browsers are the dominant platform and you don't have to convince users to install some new runtime.

Similarly, Flex, Laszlo, GWT's UI Binder, and Silverlight have discovered the value in markup as a simple, declarative way for developers to understand the hierarchical relationships between components, but because their tags correspond to completely unambiguous definitions of components, they rely on compiled code — not reliably parsed markup — for final delivery of the UI. These tight contracts turn into an evolutionary straitjacket. Great if you're shipping compiled code down the wire that can meet the contract, but death if those tags and attributes are designed to live for long periods of time or across multiple implementations. You might be able to bolt view-source onto the output, but it'll always be optional and ad-hoc, features that work against it being pervasive. Put another way, the markup versions of these systems are leaky abstractions over the precise, code-centric system that under-girds both the authoring and runtime environments. This code-centric bias is incredibly powerful for toolmakers and "real" developers, but it cuts out others entirely; namely those who won't "learn to program" or who want to build tools that inject content into the delivery format.

Whatever the strengths of code-based UI systems, they throw web crawlers for a loop. Today, most search engines deal best with text-based formats, and those search engines help make content more valuable in aggregate than it is on its own. Perhaps it's inevitable that crawlers and search engines will need to execute code in order to understand the value of content, but I remain unconvinced. As a thought experiment, consider a web constructed entirely of Flash content. Given that Flash bytecode lacks a standard, semantic way to denote a relationship between bits of Flash content, what parts of the web wouldn't have been built? What bits of your work would you do differently? What would the process be? There's an alternate path forward that suggests that we can upgrade the coarse semantics of the web to deal with ever-more-sophisticated content requirements. Or put another way, use the features of today's toolkits and code generators as a TODO list for markup-driven features. But the jury is still out on the viability of that approach; the same dynamic that makes multiple renderers possible ensures that getting them to move in a coordinated way is much harder than the unilateral feature roadmap that plugin vendors enjoy. HTML 5 and CSS 3 work is restarting those efforts, but only time will tell if we can put down the code and pick markup back up as a means to express ourselves.

I've glossed over a lot of details here, and I haven't discussed the implications for the server side of a non-text format as our lingua franca, nor have I dug into the evolution metaphor. Many of the arguments are likewise conditional on economic assumptions. There's lots of discussion yet to have, so if you've got links to concrete research in either direction, or have an experience that bears on the debate, post it in the comments! Hopefully my fellow panelists will respond in blog form, and I'll update this post when they do.
