All the Chromium news that I care about is now being aggregated at Planet Chromium, joining the similarly awesome Planet Webkit.
If we assume that the furthest down the code-centric path we'll go are the Dojo- and jQuery-style augmentations of existing content, then a simple de-obfuscator is sufficient. But I'm afraid the current generation of high-end sites and apps points in a different direction, one that is less hopeful, and one that increasingly implies that the browsers must lead the way out of the wilderness by creating new tags and CSS properties to help re-democratize the process of creating applications. We've already seen the alternatives, and while they may be elegant, they lack leverage and the second-order beneficial effects that have made the web as successful as it is today.
If HTML is just another bytecode container and rendering runtime, we'll have lost part of what made the web special, and I'm afraid HTML will lose to other formats by willingly giving up its differentiators and playing on their turf. Who knows, it might turn out well, but it's not a vision of the web that holds my interest.
I first attended SxSWi amidst the rubble of the dot-com crash, a time when the interactive festival filled only one hallway of the third floor of the Austin Convention Center. It's changed a lot since then, mostly in scale.
The lack of technical content is something I've bemoaned in years past but have finally come to accept. I was grateful to be on an excellent panel this year that touched on some topics that I both think and care a lot about. Our panel was also blessed with amazing audience engagement from people I respect. I chalk most of that up to Michael Lucaccini and Chris Bernard's excellent prep and panelist selection. Any panel with Chris Wilson and Aza Raskin on it is going to be good.
The explosion of SxSWi has not been a good thing and I went in the hopes that contraction had started as the economic disaster crimps budgets. Guess not. SxSWi was bigger than either the music or movie portions of the conference for the first time this year. Others have commented insightfully on the problems of scale, so I'll spare you the rundown of what makes an enormous conference uninviting, but suffice to say it seems like SxSWi has gone over some crucial limit and will continue its inexorable expansion until something gives in a dramatic way. Gravity cannot be reasoned with.
Unlike some of those who found themselves post-hoc disappointed, I really didn't think there was much chance of having a good time. Luckily I was wrong -- not so much because it suddenly got better than in '08, the last time I went -- but because I had learned how to cope better with the scale. My brother lives near Austin and getting to hang out with him made the entire experience better. I also employed a series of strategies that helped me have an experience that I'd gladly repeat:
- Aggressive expectation adjustment. The most technical content I saw was at the Google Hackathon fearlessly organized by Chris Schalk and generously attended by some amazing fellow Googlers. My expectations for every other session were set to "entertainment", and a few delivered.
- Bail early. One key to avoiding a bad experience was being totally unafraid to walk out of talks and panels that weren't going anywhere.
- Find the people you want to hang out with and stick with them. SxSWi has gotten so large that the process of meeting new people except by introduction is fraught with apprehension. To combat this, I used old skool technology to find and keep up with the people I really wanted to have conversations with. Calling and texting directly in lieu of Twitter really worked. When a conference resembles the real world in scope, simply employ the same aggressive filtering you would in person. Problem solved.
- If there's a line, it's not worth it. Once you decide to aggressively filter your experience through the lens of people you already like, it becomes much easier to skip all of the official parties and enormous impersonal events. Ignoring the nagging sense of loss about "missing something" and focusing on having quality conversations with people you enjoy makes all the difference. It doesn't get better than that.
- Get thyself to the Salt Lick. Nothing makes me happy quite like Texas-style BBQ, so a key to SxSW for me is a pilgrimage to the Salt Lick. This time I got to go with family. Double win.
I think all of this implies that folks who haven't been to SxSWi before aren't going to be able to have the same sort of open, trusting experience that I had when I first started attending, and that's a real loss; but at least I now feel like I can go and have a good and productive time. I'm grateful to have gone this year and I'm looking forward to next year already.
It's been a rough year (or two) in the tech industry, and conference budgets aren't what they once were. Dustin Machi's doing his bit to keep the Dojo community connected by starting a fully virtual set of conferences, the first of which is 3 days full of Dojo goodness -- dojo.connect.
I'll be there virtually and I hope you can join us. The lineup is spectacular, and I can't think of a more concentrated way to get in touch with the community short of becoming a committer.
I gathered the numbers and stand behind them, so let me quickly outline where they come from, why they're fair, and why they matter to your app.
I took the average of three separate runs of the TaskSpeed benchmark, comparing the latest versions of both Dojo and jQuery. The numbers were collected on isolated VMs on a system doing little else. You may not be able to reproduce the exact numbers, but across a similar set of runs, the relative timings should be representative.
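The math here is nothing exotic; a minimal sketch of how mean times and relative timings fall out of multiple runs (the per-run totals below are made-up placeholders, not the actual TaskSpeed results):

```javascript
// Hypothetical per-run totals in milliseconds -- NOT real TaskSpeed data.
var libAruns = [620, 600, 610];
var libBruns = [980, 1010, 990];

// Average a set of run totals to damp out per-run noise.
function mean(runs) {
  var sum = 0;
  for (var i = 0; i < runs.length; i++) {
    sum += runs[i];
  }
  return sum / runs.length;
}

var libAmean = mean(libAruns); // 610
var libBmean = mean(libBruns); // ~993.3
// Absolute numbers vary by machine; the ratio is what should hold up.
var relative = libBmean / libAmean;
```

The point of reporting the ratio rather than raw milliseconds is that it's the part of the result you can expect to see on your own hardware.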
So why is TaskSpeed a fair measuring stick? First, it does representative tasks and the runtime harness is calibrated to ensure statistically significant results. Second, the versions of the code for each library are written by the library authors themselves. The Dojo team contributed the Dojo versions of the baseline tasks and the jQuery team contributed theirs. If any library wants to take issue with the tests or the results, they only need to send Pete a patch. Lastly, the tests run in relative isolation in iframes. This isn't bulletproof -- GC interactions can do strange things and I've argued for longer runs -- but it's pretty good as these things go. I took averages of multiple runs in part to hedge against these problems.
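Calibration in harnesses of this kind usually means repeating a test until the elapsed time clears a noise floor, so per-call cost dominates timer granularity. A rough sketch of that idea -- my own simplification for illustration, not TaskSpeed's actual harness code:

```javascript
// Simplified calibration loop: repeat testFn until total elapsed time
// exceeds minMs, then report per-call cost. This hedges against the
// coarse (10-15ms) resolution of JavaScript timers of the era.
// NOT TaskSpeed's real harness -- an illustrative sketch only.
function calibratedTime(testFn, minMs) {
  var iterations = 0;
  var start = new Date().getTime();
  var elapsed = 0;
  do {
    testFn();
    iterations++;
    elapsed = new Date().getTime() - start;
  } while (elapsed < minMs);
  return { iterations: iterations, msPerCall: elapsed / iterations };
}

// Example: time a trivial busy-loop "task".
var result = calibratedTime(function () {
  for (var i = 0; i < 1000; i++) {}
}, 50);
```

Running each test many times and dividing is what makes sub-millisecond operations measurable at all with millisecond-resolution clocks.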
The comparison to jQuery is fair on the basis of syntax and market share. If you compare the syntax used for Dojo's tests with the jQuery versions, you'll see that they're similarly terse and provide analogous conveniences for DOM manipulation, but the Dojo versions lose the brevity race in a few places. That's the price of speed, and TaskSpeed makes those design decisions clear. As for market share, I'll let John do the talking. It would be foolish of me to suggest that we should be comparing Dojo to some other library without simultaneously suggesting that his market share numbers are wrong; and I doubt they are.
Given all of that, do the TaskSpeed numbers actually matter for application performance? I argue that they do, for two reasons. First, TaskSpeed is explicitly designed to capture common-case web development tasks. You might argue that the weightings should be different (a discussion I'd like to see happen more openly), but it's much harder to argue that the tests do things that real applications don't. Because the toolkit teams contributed the test implementations, they provide a view into how developers should approach a task using a particular library. It's also reasonable to suspect that they demonstrate the fastest way in each library to accomplish each task. It's a benchmark, after all. This dynamic makes plain the tradeoffs between speed and convenience in API design, leaving you to make informed decisions based on the costs and benefits of convenience. The APIs, after all, are the mast your application will be lashed to.
I encourage you to go run the numbers for yourself, investigate each library's contributed tests to get a sense for the syntax that each encourages, and get involved in making the benchmarks and the libraries better. That's the approach that the Dojo team has taken, and one that continues to pay off for Dojo's users in the form of deliberately designed APIs and tremendous performance.