Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Name Soup

There still seems to be an amazing amount of FUD going around regarding the Harmony announcement. There is clearly a very different perspective from those who have been sitting inside the WG for the past year (as Kris Zyp and I have been lucky enough to). Inside the WG, the change seems a welcome way to break a logjam of reasonably held opinions of people who are all acting in good faith. From the outside, it all looks like confusion and game-playing.

One of the things that keeps frustrating me as I read the "coverage", though, is that the names people use are confused, probably because the names themselves are confusing. Here's a quick glossary:

ECMAScript 3
Aka: "JavaScript", "ES3", "ECMAScript 262-3", and "JScript".

The current JavaScript that every browser implements (more or less). This is the current ratified standard and represents the 3rd edition of the ECMAScript spec. It is very old. Nothing else in this list is (yet) a ratified standard of any sort.

ECMAScript 4
Aka: "ES4", "JavaScript 2"

A new language which was to be mostly backwards compatible but add optional (gradual) typing and class-based inheritance. Based loosely on Adobe's ActionScript 3. This is the language effort which died as a result of Harmony.

ECMAScript 3.1
Aka: "ES3.1"

A set of small additions to ES3. Working drafts are available and will likely go to the standards process with few changes. Planning for this edition was started at Microsoft and Yahoo's behest late last year, causing the split in the working group which has been healed by the Harmony announcement.

ActionScript 3
Aka: "AS3"

Adobe's current JavaScript-like language, only with many features lifted from languages like Java which also enforce types and class-based semantics. This was the starting point for much of the work which became known as ES4.

Tamarin

A JIT-ing byte-code virtual machine (VM) which is at the core of the Flash Player and was donated by Adobe to the Mozilla Foundation. This is the VM that runs ActionScript 3 code today but will likely run "real" JavaScript for Mozilla in the future. It is not a full implementation of ES3 or ES4; instead it implements its own byte-code and needs to be wedded to a "front end" (like the ActionScript 3 compiler from Adobe) in order to be usable by programmers.

Tamarin-Tracing

A VM which implements the same byte-code language as Tamarin (known as "ABC") but which is designed for use in mobile devices and other scenarios where code size and VM footprint are important. It implements trace-tree JIT-ing as a way to speed up hot-spots. Also donated to Mozilla by Adobe.

TC39

The name of the ECMA technical committee which is chartered to evolve the JavaScript language.

Harmony

A new code-name for a language which is to come after ES3.1. It will feature many of the things ES4 was trying to accomplish, but may attempt them from different directions and will focus much more on incremental, step-wise evolution of the language.

JavaScript 2

A now-defunct name. This name was originally given to Waldemar Horwat's first proposal for a large-scale evolution of the JavaScript language in 1999. That effort did not succeed (although Microsoft implemented some of it in JScript.NET), and subsequent work via the current TC39 charter to build ES4 has sometimes been given the name "JavaScript 2", but it never really stuck. Not a name that describes any ratified standard or current proposal.

ECMAScript

The formalized name of the JavaScript language. Since Sun Microsystems owns the name "JavaScript" and has no idea what to do with the trademark (but has been benevolent thus far), the ECMA committee which standardized the language was forced to adopt a different name.

Harmony Fallout

There's a lot of weirdness going on around the Harmony announcement. This post in particular tries to dig into some of the wrangling that caused the ES4/3.1 split and what the Oslo resolution "means", but I'm afraid that much of the analysis is being done without the benefit of an inside view of the WG process.

At the risk of talking too much out of school, I want to set the record straight in some ways. First, let me set some facts out:

So, let's pop up and talk about strategy for a minute. Fundamentally, very little has changed in terms of available strategic options for any of the players:

What died here wasn't Adobe's attempts to "own" a spec. If there were such hopes in play, they had been quietly put down, one rational, backwards-compatible decision after another, in the year preceding the Oslo meeting. What died was an assumption that the web can evolve without implementations being out in front of the spec. AS3 was one implementation of a JavaScript-like language that might have been a contender for the crown of "the next one", but so was JScript.NET. That neither of them "won" simply because they had been built in good faith is as true a sign as I can find that the process is working. Adobe gets it. Let's end the silly meme that "Adobe lost" or that "Microsoft won". The game has hardly begun, and it won't be settled in a standards body anyway. What matters – and what we all need to keep our eyes keenly trained on – is what the big implementations do in the way of compatibility, performance, and feature set once ES3.1 arrives.

Thoughts on Harmony

So the announcement about "Harmony" is up over at Ajaxian. Long story short: this is really good news. I won't get into the background on this since at this point it doesn't matter and much of it is embargoed behind ECMA rules anyway, but here are the key points from my perspective:

How things will evolve from here is something no one really knows. What's exciting, though, is that the important implementations of the existing language and their developer constituencies seem to be in the driver's seat, and progress is now predicated on solving their pressing issues. If a language like ES4 is going to evolve and be successful, it will now need to prove that it can grow a serious constituency of its own before it is given permission to throw the rest of the web under the bus.

Thank goodness.

Chandler 1.0!

Somehow I missed this last week, but OSAF's Chandler super-PIM just went 1.0. It's been a long time in coming, and the result isn't at all what I had expected. Instead of being an "email client++", Chandler 1.0 is a calendar and task management tool that happens to be super-savvy about talking to your existing IMAP folders and lets you share and coordinate via CalDAV. This is fundamentally different from Things in that it also has enough of the "guts" of a real PIM to allow scheduling and coordination on tasks to be an integral part of the experience. Fundamentally it's "us" oriented and not "me" oriented, and I'm excited to see what organizations use it for and what kinds of organizations discover its value.

The Chandler Hub also strikes me as a gem hidden in plain sight. Not only is it a great way to share parts of your schedule with others, it's an amazingly complete Dojo-based, Open Source UI for getting it all done, too. You can run your own Cosmo server (the code that runs Chandler Hub) inside of your department or organization, but more than that, you can get the source. If you know Java (or employ someone that does), the Cosmo server is perhaps the easiest-to-hack-on option for an organization needing a flexible, lightweight task and team management option. Given that every organization I've ever worked for has struggled with exactly this type of coordination, the availability of source code here is probably going to beget some amazing integrations with bug trackers and the other project and task management systems that organizations already rely on. In some ways, despite being almost completely different in scope, Chandler Hub and Plaxo's kick-butt online features are both bringing a level of visibility to different types of activities that cry out for better and deeper integrations with the tools that get used every day to "do the work" or track it in other ways. A few lightweight bridges to MS Project and/or Trac/Redmine would make Cosmo jet fuel for team visibility. I can't wait.

The Chandler team also told me last week that they're hard at work on a re-architecture of their Python-based desktop client in order to improve performance and startup time and to make the whole system more hackable. Given that the desktop and web clients can speak to the same Cosmo server back-end (which can federate data out to lots of other places to boot), this seems like a promising path forward as the team completes a transition to a more traditional OSS distributed-development approach. Truth be told, I probably won't give up Things for Chandler desktop until performance does improve, but I'm sure gonna be tying my calendars together with Jennifer's via Chandler Hub ASAP.

Congrats again to the Chandler (and Cosmo and Hub) team(s)!

CSS Variables Are The Future

or: "Reports of the Harm Caused By CSS Variables Are Greatly Exaggerated"

To say that CSS is abominable isn't controversial. The implementations are leading the spec in some places, and we're getting real progress there. Firefox's rounded corners and WebKit's drop-shadows, declarative animations, background tiling, and CSS variables are all hugely important and liberating. But where the spec is in front of the important implementations...well, I've ranted before on the topic. CSS sucks, and the editor of the spec has now written at length of his intent to keep it that way (via Simon). His arguments are flim-flam, but just saying so isn't enough to convince anyone. Making the case requires answering long-hand and showing our work.

By The Numbers

Let's look first at the numbers presented in the sidebar here. Remember that the survey numbers come from documents on the W3C website. The article would have us believe that this sample set bears some relationship to the rest of the web such that we can extrapolate a case against CSS variables out of them. The relationship is tenuous enough that this disclaimer is included:

The authors who write on this Web site are probably more careful about the structure and re-usability (and download speed) of their documents than the average Web author and there are indications that the average size of style sheets on the Web is two or three times higher. That is still small for computer code, though. Although it doesn't fit in one window, like the average W3C style sheet, it doesn't take more than scrolling once or twice either.

There's much wrong with this leap of logic: if the real web is at least 2x more complicated, how can we dismiss the clamor of real web developers for more powerful tools? Furthermore, what's to say that what is or isn't being encoded in CSS is due to complexity? There are lots of things which we'd like to put in CSS but don't, because CSS just can't do many of the things we should expect of it. Real-world CSS is likely to get longer, not shorter, as CSS evolves toward its manifest destiny and allows us to declare property bindings, animations, and all manner of complex layouts for which we currently turn to table elements and layout systems like the Dojo BorderContainer and ExpandoPane widgets. This isn't a foot-note, it's an out-and-out refutation of the proffered case.

We can also do much better than the chosen sample set. A quick wget of the front pages of the top 20 Alexa sites (to stop short of the porn) reveals a world which the article's sample set bears no resemblance to. Remember, these are only the front pages, as well. Internal pages can be significantly more complex as they trend toward applications and away from relatively static views of data. Here's what I ran to get data to work with:

media:css_stats alex$ ls
./		../		out/		sites.txt
media:css_stats alex$ cat sites.txt
media:css_stats alex$ wget --user-agent="..." -P out -l1 -p -H -i sites.txt
           => `out/'
Connecting to||:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: [following]

Of these 20 pages, there are only 28 referenced external style sheets (per wget), likely to speed rendering time. Most of the pages that do specify external style sheets also host them on "static" servers to increase download parallelism and reduce resource usage on front-end boxes. We can already say something about the use of CSS in large, commercially successful websites:

Site authors are going to enormous lengths to ensure that pages load as fast as possible. Techniques which reduce redundancy, and therefore result in smaller style sheets, will have significant value to content authors.

Recall that one of the primary drivers behind CSS inheritance, macros, and variables is to reduce the size of style sheets. Just looking at the behavior of real-world page authors at least makes a strong case for techniques to create terse style sheets if not for these specific solutions.
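As a rough illustration of the redundancy argument, here is a short script (mine, not from the article, and the sample stylesheet is invented) that counts how often the same declaration value recurs in a sheet. Every repeated value is a candidate for consolidation behind a single shared variable:

```python
import re
from collections import Counter

def repeated_values(css_text):
    """Find declaration values that appear more than once in a stylesheet.
    Each repeat is a candidate for consolidation behind one variable."""
    values = re.findall(r':\s*([^;}]+)[;}]', css_text)
    counts = Counter(v.strip() for v in values)
    return {v: n for v, n in counts.items() if n > 1}

sample = """
h1   { color: #004488; }
a    { color: #004488; }
.nav { border-color: #004488; padding: 8px; }
"""
print(repeated_values(sample))  # the shared brand color repeats three times
```

Run against a real multi-thousand-line sheet, a counter like this tends to surface a handful of brand colors and spacing values copied dozens of times over.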

These external style sheets have a median line count of 280, seemingly validating the essay's assertion that the "real world" is 2x to 3x more complicated. But something tells me that's also misleading. Indeed:

media:css_stats alex$ cd out
media:out alex$ find . | grep "\.css" > css_files.out
media:out alex$ cat `cat css_files.out` | wc -l
media:out alex$ CSS_FILES=`cat css_files.out`
media:out alex$ for f in $CSS_FILES; do wc -l $f; done
      92 ./
     419 ./
      59 ./
     138 ./
      32 ./
     122 ./
     128 ./
     310 ./
      86 ./
      58 ./
     332 ./
     150 ./
      10 ./
    2081 ./
       0 ./
       0 ./
       0 ./
       0 ./
       0 ./
       0 ./
     377 ./
     346 ./
     192 ./
     853 ./
     386 ./
    1428 ./
     256 ./
      11 ./

But what about those 0-line files? ls says there's more there:

 8.8K  ./
 4.4K  ./
 2.3K  ./
 8.8K  ./
  34K  ./
 1.6K  ./

Indeed, they're all one line long and missing a trailing newline character due to the whitespace removal that's been applied to them. The shortest of the files, when expanded for readability, is longer than 100 lines. Clearly, counting lines doesn't actually tell us much about the complexity of production-quality CSS – at least not without some normalization.

Since most production-level CSS is embedded in the served document, then, we should probably have a look at it too and figure out ways to normalize the whole shooting match to determine some sort of "style complexity factor", since it's very much the case that the impact of CSS is not often isolated to individual elements. Indeed, some of the hardest-to-maintain issues with CSS come from the overall difficulty of knowing what's affecting which element, and those rules can come from anywhere. Getting an accurate view of the amount of style that a developer or designer needs to keep in their head at once (the central argument of the original piece) is usually some factor of the total number and applicability of the rules in the page to the elements currently being styled. Therefore, to get a sense of the complexity of the style being applied to a page, it would be far better to know the number of normalized lines of CSS on the page plus a count of the total number of rules applied to the document.

I put together a small Python script to do just this. Quickly summarized, it sweeps through documents looking for external stylesheet links, @import statements, or <style> tags, parsing the contents of each into a normalized, pretty-printed form. Here's the summary output:


files examined: 17
total # of CSS rules: 9462
normalized lines of CSS: 39999
average # of CSS lines per file: 2352 (mean) 1428 (median)
average # of CSS rules per file: 556 (mean) 308 (median)
average # of CSS style sheets per file: 4 (mean) 3 (median)

(full output here)

This output comes after removing items from the list which don't have valid index pages (e.g. got wise to my faked user agent) and which included some manually fetched external CSS files that wget initially missed. Also, it's worth noting that the Google home-page pulls the averages down significantly since they only include 111 lines of normalized CSS per and the home pages for,, and are seemingly identical from this perspective and all occur in the top 20.

Nevertheless, the results are still astounding when compared with the results from the original article. CSS authors who maintain the world's most popular landing pages are contending with thousands of lines of CSS per page and hundreds of rules. This is orders of magnitude more complexity than the initially presented numbers, and it should dispel any notion that those numbers support any conclusion beyond this: those in the employ of the W3C, and the volunteers who join them, are not typical content authors, and they do not attempt the feats of CSS which are merely workaday constructions for commercially successful websites. Granted, the home pages of the most successful websites on the internet are also not "normal" (by definition), but they are significantly more representative of where the web is heading and of the standards that content authors likely aspire to.

The article's numbers may indeed present a compelling case for not adding CSS variables for the sake of the W3C's content authors, but no such case holds in the "real world" of deployed content.
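The normalization step described above can be sketched roughly as follows. This is an assumed reconstruction for illustration, not the actual script; the rule count here naively counts opening braces, so @media blocks add a little noise:

```python
import re

def normalize_css(css_text):
    """Pretty-print CSS one declaration per line so that minified and
    hand-written sheets can be compared on an equal footing."""
    css_text = re.sub(r'/\*.*?\*/', '', css_text, flags=re.S)  # strip comments
    css_text = re.sub(r'\s+', ' ', css_text)                   # collapse whitespace
    css_text = (css_text.replace('{', ' {\n  ')
                        .replace(';', ';\n  ')
                        .replace('}', '\n}\n'))
    return [line.rstrip() for line in css_text.splitlines() if line.strip()]

def css_stats(css_text):
    """Report a normalized line count and a rough rule count for one sheet."""
    return {
        'normalized_lines': len(normalize_css(css_text)),
        'rules': css_text.count('{'),  # one '{' per rule, roughly
    }

minified = "a{color:red;text-decoration:none}p{margin:0}"
print(css_stats(minified))  # → {'normalized_lines': 7, 'rules': 2}
```

The point of the exercise: a "0-line" minified file and a hand-indented file of the same content produce the same numbers after normalization, which is what makes cross-site comparisons meaningful at all.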

Upon Further Inspection...

One argument made in the article is that the ability to understand what you see in a style sheet is bolstered by a lack of indirection. This argument is simplistic insofar as CSS is already rife with indirections. From cascades to the order of precedence in style application to the !important rules to media queries, CSS currently provides many facilities for content authors to create difficult situations in determining where a particular rule is coming from or what the impact of a rule will be. Inspectability and what-you-see-is-what-you-get were properties of an earlier, simpler web which the article harkens back to. But the WYSIWYG principle has already been lost as HTML and CSS have failed to keep pace with the tasks being demanded of them. When a pile of non-semantic div or table elements is employed to create the canonical 3-column layout for which the CSS is a mind-bending combination of art and science, no novice can be expected to follow along at home.

I whole-heartedly agree that the ability to "View Source" on the web and have that mean something is a powerful evolutionary advantage to the Open Web, but it is one which is being under-cut most forcefully by the lack of evolution in HTML and CSS, not by the addition of features to them. While HTML and CSS lack semantics for simple construction of common visual and structural idioms, we should continue to expect the contorted, complex sets of rules and markup. Visual and interaction designers aren't demanding less of the user experience simply because CSS isn't up to the task. Instead, they're turning to JavaScript toolkits like Dojo which can and do deliver the goods. Hardly a better position for the platform to compete from.

On this point the essay also contains a rhetorical bait-and-switch which I find distasteful: it dismisses variables because they don't inherently do anything to reduce the lengths of pages (true!) and then argues against macros and inheritance because they create levels of indirection which can be confusing. Inheritance and macro definitions can play a key role in drastically reducing the length of style sheets. In this way, they promote understanding through exactly the same "memory effect" mechanism that is cited as a liability when discussing variables.

Variables, on the other hand, provide an effective and over-due mechanism for consolidating the definition of shared values across style sheets which may be defined in distributed places (say, via a CMS's default template which is later customized by users). For the very-lengthy, real-world styles which occur frequently on the public internet, this ability to cleanly separate the definition of common values into a single style sheet would prove a huge boon to the development and maintenance of sites for which large teams must cooperate on the generation of what ends up being a single page. Style sheets are already long, and the proponents of variables assume this to be true. That variables do not shorten style sheets is not a valid argument against the considerable good that they can do in ensuring that style sheets are maintainable.
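To make the consolidation concrete, here is a toy preprocessor sketch. The var() syntax and the names used are illustrative assumptions, not the proposal's actual grammar; the point is simply that one definition can feed many rules:

```python
import re

def apply_variables(definitions, css_text):
    """Expand var(name) references from a shared definitions map -- roughly
    the consolidation a CSS variables mechanism would provide at parse time.
    (Syntax is illustrative; the proposed spec differs in detail.)"""
    return re.sub(r'var\((\w+)\)',
                  lambda m: definitions[m.group(1)], css_text)

theme = {'brand': '#004488', 'pad': '12px'}  # defined once, shared everywhere
print(apply_variables(theme, "h1 { color: var(brand); padding: var(pad); }"))
# → h1 { color: #004488; padding: 12px; }
```

Change the single `brand` entry and every rule that references it updates in lockstep: that is the maintenance win, independent of whether the resulting sheet is any shorter.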

The essay dismisses the idea that variable names are (and should be) self-documenting. The argument that a comment would somehow "be better" ignores the reality of today's large style sheets. There is no way today for rules to effect a similar visual appearance across different properties without repetition, leading to tremendous maintenance headaches.

Modern style sheets are already well beyond the complexity levels which allow us to fit them all in a single screen, and re-using values today requires the exact same looking-in-multiple-places burden that the essay deems unacceptable in a future with macros. The results for this extra effort today, however, aren't consistent and maintainable rules which can be changed with relatively few updates. Instead we're left with a mish-mash of hard-coded values which are copied here and there, often across multiple style sheets. Adding multiple classes to a single node is also no panacea, as it quickly devolves into tens or hundreds of small rules to define what are essentially parameterized constants for a single set of layout, color, or typography decision combinations. These decisions cannot be conveyed through a simple selector but must instead be applied in the right combinations throughout a document directly to the elements which require them. Ugh. Even if the original article were to make a cogent case against the need for variables, that can't be extended to a case against inheritance or macro capabilities. Composition is far too difficult and authors are already awash in complexity. Denying them effective, optional tools to deal with this complexity is simply to deny the truth of the web which has evolved.

Cobbling It Together

Perhaps the most disingenuous argument fielded is that the addition of variables in CSS places an onerous burden on the developers of user agents. User agent developers are best suited to know the difficulties and pitfalls in implementing CSS variables, and at least one team has decided that not only is it workable, they have authored the spec now under discussion and have implemented several different syntaxes for the feature in parallel in order to figure out what will work best. Were there hue-and-cry from other implementers, I'd be much more sympathetic to this point. However, given the general lack of objection amongst implementers, the long-standing ambiguities in the CSS specifications, the inscrutable choice of box models, and the weirdisms of CSS in general, it seems that we're very far down the path in terms of the complexity required of any implementer. It is probably the case that authoring a new HTML and CSS rendering engine that will consume the real web isn't a realistic prospect today, save for the most well-heeled and motivated of teams. Adding or not adding CSS variables and/or macros doesn't change that reality.

The arguments against variables and macros/inheritance get weakest when they are taken as a whole. Variables are likely just the first step to a CSS that allows both simple parameterization (variables) and composition (inheritance, macros, etc.). One without the other is weak sauce, and the essay tacitly acknowledges as much by arguing against them in turn (but not together). A CSS dialect which includes inheritance will allow the specialization of "parent" rules without relying on extra nodes or multiple classes added directly to nodes. CSS with variables will allow for much simpler maintenance and "templating" of complex visual identities. Taken together, these techniques allow sophisticated CSS authors to stop repeating themselves and get a handle on the thousands of lines of code which they're managing to construct pages today.

Arguing that they are too hard or too confusing simply ignores the deeply painful experience of today's content authoring process. The time has long passed when we can delay progress or claim it "harmful" without proof. Once again, it's time to let the implementations lead and time for the standards bodies to stand aside and cheer them on.

Older Posts

Newer Posts