This is why I get so exercised about WebIDL and the way it breaks the mental model of JS's "it's just extensible objects and callable functions". It's also why my discussions with folks at last year's TPAC were so bleakly depressing. I've been meaning to write about TPAC ever since it happened, but the time and context never presented themselves. Now that I've gotten some of my words out about layering in the platform, the time seems right.
Let me start by trying to present the argument I heard from multiple sources, most likely (in my feeble memory) from Anne van Kesteren or Jonas Sicking(?):
ECMAScript is not fully self-describing. Chapter 8 drives a hole right through the semantics, allowing host objects to do whatever they want, and, more to the point, there's no way in JS to describe, e.g., list accessor semantics. You can't subclass an Array in JS meaningfully. JS doesn't follow its own rules, so why should we? DOM is just host objects, and all of DOM, therefore, is Chapter 8 territory.
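To see the complaint concretely, here's the classic failure as a minimal sketch in the ES5-era JavaScript of the day:

```js
// Wiring up the prototype chain is easy, but Array's index/length
// magic doesn't come along for the ride.
function MyArray() {}
MyArray.prototype = Object.create(Array.prototype);

var a = new MyArray();
a[0] = "first";
console.log(a instanceof Array); // true -- the chain looks right...
console.log(a.length);           // 0, not 1 -- ...but the magic is missing
```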
Many of the Chapter 8 properties and operations are still in the realm of magic from JS today, and we're working to open more of them up over time by giving them APIs -- in particular, I'm hopeful about Allen Wirfs-Brock's work on making array accessors something we can treat as a protocol -- but it's magic that DOM is appealing to and even specifying itself in terms of. Put this in the back of your brain: DOM's authors have declared that they can and will do magic.
Ok, that's regrettable, but you can sort of understand where it comes from. Browsers are largely C/C++ enterprises, and DOM started in most of the successful runtimes as an FFI call from JS to an underlying set of objects which are owned by C/C++. The truth of the document's state was not owned by the JS heap, meaning every API you expose is a conversation with a C++ object, not a call into a fellow JS traveler, and this has profound implications. While we have one type for strings in JS, your C++ side might have `std::string` and/or some variant of… JS, likewise, has `Number` while C++ has `long long int`...you get the idea. If you've got storage, C++ has about 12 names for it. Don't even get me started on…
It's natural, then, for DOM to just make up its own types so long as its raison d'être is to front for C++ and not to be a standard library for JS. Not because it's malicious, but because that's just what one does in C++. Can't count on a particular platform/endianness/compiler/stdlib? Macro that baby into submission. WTF, indeed.
This is the same dynamic that gives rise to the tussle over constructable constructors. To recap: there is no way in JS to create a function which cannot have `new` on the left-hand side. Yes, that might return something other than an instance of the function-object on the right-hand side. It might even throw an exception or do something entirely non-sensical, but because `new f()` is always legal syntax, a plain JS function can never refuse construction outright -- which is exactly what DOM's non-constructable constructors do.
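A minimal sketch of the asymmetry, observable in any browser of this vintage:

```js
// A plain JS function can't forbid `new`; the best it can do is complain:
function NotConstructable() {
  throw new Error("do not construct me"); // our hack, not a language feature
}
try { new NotConstructable(); } catch (e) { /* only throws because we threw */ }

// DOM constructors, meanwhile, do what no JS function can:
try { new HTMLDivElement(); } catch (e) {
  console.log(e); // TypeError: Illegal constructor -- Chapter 8 magic
}
console.log(document.createElement("div") instanceof HTMLDivElement); // true
```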
What we're witnessing here isn't "right" or "wrong"-ness. It's entirely conflicting world views that wind up in tension because from the perspective of some implementations and all spec authors, the world looks like this:
Not to go all Jeff Foxworthy on you, but if this looks reasonable to you, you might be a browser developer. In this worldview, JS is just a growth protruding from the side of an otherwise healthy platform. But that's not how webdevs think of it. True or not, this is the mental model of someone scripting the browser:
The parser, DOM, and rendering system are browser-provided, but they're just JS libraries in some sense. With `<canvas>`'s 2D and 3D contexts, we're even punching all the way up to the rendering stack with JS, and it gets ever more awkward the more our implementations look like the first diagram and not the second.
To get from parser to DOM in the layered world, you have to describe your objects as JS objects. This is the disconnect. Today's spec hackers don't think of their task as the work of describing the imperative bits of the platform in the platform's imperative language. Instead, their mental model (when it includes JS at all) pushes it to the side as a mere consumer in an ecosystem that it is not a coherent part of. No wonder they're unwilling to deploy the magic they hold dear to help get to better platform layering; it's just not something that would ever occur to them.
Luckily, at least on the implementation side, this is changing. Mozilla's work on dom.js is but one of several projects looking to move the source of truth for the rendering system out of the C++ heap and into the JS heap. Practice is moving on. It's time for us to get our ritual lined up with the new reality.
Which brings me to David Flanagan, who last fall asked to read my manifesto on how the platform should be structured. Here it is, then:
The network is our bottleneck and markup is our lingua franca. To deny these facts is to design for failure. Because the network is our bottleneck, there is incredible power in growing the platform to cover our common use cases. To the extent possible, we should attempt to grow the platform through markup first, since markup provides the most value to the largest set of people and provides a coherent way to expose APIs via DOM.
Markup begets JS objects via a parser. DOM, therefore, is merely the largest built-in JS library.
Any place where you cannot draw a line from the browser-provided behavior of a tag to the JS API which describes it is magical. The job of Web API designers is first to introduce new power through markup and second to banish magic, replacing it with understanding. There may continue to be things which exist outside of our understanding, but that is a challenge to be met by cataloging and describing them in our language, not an excuse for why we cannot or should not.
The ground below our feet is moving and alignment throughout the platform, while not inevitable, is clearly desirable and absolutely doable in a portable and interoperable way. Time, then, to start making Chapter 8 excuses in the service of being more idiomatic and better layered. Not less and worse.
Jetlag has me in its throes, which is as good an excuse as any to share what has been keeping me up many nights over the past couple of years: a theory of the web as a platform.
I had a chance last week to share some of my thinking with an unlikely audience at EclipseCon -- a wonderful experience, and my thanks go to Mike Milinkovich and Ian Skerrett for being crazy enough to invite a "web guy" to give a talk.
One of the points I tried (and perhaps failed) to make in the talk was that in every platform that's truly a platform, it's important to have a stable conceptual model of what's "down there". For Java, that's not the language, it's the JVM. For the web...well...um. Yes, it bottoms out at C/C++, but that's mostly observable through spooky action at a distance. The expressive capacity of C/C++ shows up as limitations and mismatches in web specs all the time, but the essential semantics -- C/C++ is just words in memory that you can do whatever you please with -- are safely hidden away behind APIs and declarative forms that are unfailingly high-level. Until they aren't. And you can forget about composition most of the time.
For a flavor of this, I always turn back to Joel Webber's question to me several years ago: why can't I override the rendering of a border around an HTML element?
It's a fair question, and one I wrote off too quickly the first time he posed it. We have `<canvas>`, which lets us draw lines however we like, so why can't we override the path painting for borders? Why isn't it just a method you implement, like in Flex or Silverlight?
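To be concrete about the shape of the ask, here's a purely hypothetical sketch -- nothing below is, or was, a real API -- of the Flex/Silverlight-style override being wished for:

```js
// HYPOTHETICAL: imagine border painting were just a method you could
// override, handed the same kind of 2D context <canvas> already exposes.
function DashedBorderBox() {}
DashedBorderBox.prototype.paintBorder = function (ctx, width, height) {
  ctx.strokeStyle = "#c00";                        // our border, our rules
  ctx.strokeRect(0.5, 0.5, width - 1, height - 1);
};
```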
Put another way: there are some low-level APIs in the web that suggest that such power should be in the hands of us mortals. When using a low-level thing, you pay as you go, since lower-level things need more code (latency and complexity)...but that's a choice. Today's web is often mysteriously devoid of the sort of sane layering that makes this a choice, forcing you to re-build systems parallel to what's already in the browser to get a job done. You can't just subclass the right thing or plug into the right lifecycle method most of the time. Want a `<canvas>`? Fine. There you go. Want a `<span>`? `<span>`s coming up! But don't go getting any big ideas about using the drawing APIs from `<canvas>` to render your `<span>`. Both are magic in their own right, and for no reason other than that's the way it has always been.
You can see why people who think in terms of VMs and machine words might find this a bit, ahem, limiting.
But how much should we "web people" care about what they think? After all, "real programmers" have been predicting the imminent death of this toy browser thing for so long that I'm forgetting exactly when the hate took its various turns through the 7 stages: "Applets will save us from this insanity!"..."Ajax is a hack"..."just put a compiler in front of it and treat it as the dumbest assembler ever" (which is at least acceptance, of a sort). The web continues to succeed in spite of all of this. So why bother with the gnashing of teeth?
Thanks to Steve Souders, I have an answer: every year we're throwing more and more JS on top of the web, dooming our best-intended semantic thoughts to suffocation in the Turing tar pit. Inexorably, and until we find a way to send less code down the wire, us is them, and more so every day.
Let that picture sink in: at 180KB of JS on average, script isn't some helper that gives meaning to pages in the breach, it is the meaning of the page. Dress it up all you like, but that's where this is going.
Don't think 180KB of JS is a lot? Remember, that's transfer size, which accounts for gzipping, not total JS size. Oy. And in most cases that's more than 3x the size of the HTML being served (both for the page and for whatever iframes it embeds). And that's not all; it's worse for many sites which should know better. Check out those loading "filmstrip" views for Gawker, TechCrunch, and the NYT. You might be scrolling down, looking at the graphs, and thinking to yourself "looks like Flash is the big ticket item...", and while that's true in absolute terms, Flash isn't what's blocking page loads. JS is.
And what for? What's all that code doing, anyway?
It's there for three reasons: first, to clean up the messes that browser vendors aren't willing or able to clean up for themselves; second, to provide an API that becomes the new platform; and last, to provide the app-specific stuff you are trying to get across. Only the last one is strictly valuable. You're not including jQuery, Backbone, Prototype, or Dojo in your pages just because you like the API (if you are, stop it). You're doing it because the combination of API and evened-out behavior across browsers makes them the bedrock. They are the new Lisp of application construction; the common language upon which you and your small team can agree; just don't expect anyone else to be able to pick up your variant without squinting hard.
What would it mean to be able to subclass an HTML Element?
We observed that most of what the current libraries and frameworks are doing is just trying to create their own "widgets", and that most of these new UI controls have a semantic they'd like to describe in a pretty high-level way, an implementation for drawing the current state, and the need to parent other widgets or live in a hierarchy of widgets.
Heeeeeyyyyyy....wait a minute...that sounds a lot like what HTML does! And you even have HTML controls which generate extra elements for visual styling but which you can't access from script. This, BTW, is what you want when building your own controls. Think the bullets of list items or the sliders generated by `<input type="range">`. There are even these handy (non-constructable!?!) constructors for the superclasses in JS already.
None of these are satisfying. Certainly not if what we want is a platform of the sort you might consider using "naked". And if your "platform" always needs the same shims here and polyfills there, let me be candid: it ain't no platform. It's some timber and bolts out of which you can make a nice weekend DIY project of building a real platform.
So we need to do better.
What does better look like?
Better is layered. Better is being able to just replace what you need, to plug in your own bits to a whole that supports that instead of making you re-create everything above any layer you want to shim something into. This is why mutable root prototypes in JS and object mutability in general are such cherished and loathed properties of the web. It is great power. It's just a pity we need it so often. Any plan for making things better that's predicated on telling people "oh, just go pile more of your own parallel systems on top of a platform that already does 90% of what you need but which won't open up the API for it" is DOOMED.
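That power in miniature -- a sketch of the classic polyfill move that mutability makes possible:

```js
// Shim a layer in place instead of rebuilding everything above it:
if (!Array.prototype.last) {
  Array.prototype.last = function () {
    return this[this.length - 1];
  };
}
console.log([1, 2, 3].last()); // 3 -- everything up-stack sees the patched layer
```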
Thus began an archaeology project, one which has differed in scope and approach from most of the recently added web capabilities I can think of, not because it's high-level or low-level, but because it is layered. New high-level capabilities are added, but instead of then poking a hole nearly all the way down to C++ when we want a low-level thing, the approach is to look at the high-level thing and ask:
How would we describe what it's doing at the next level down in an API that we could expose?
This is the reason low-level-only API proposals drive me nuts. New stuff in the platform tends to be driven by scenarios. You want to do a thing; that thing probably has some UI (probably browser-provided) and might invoke something security-sensitive. If you start designing at the lowest level, throwing a C++ API over the wall, you've turned off any opportunity or incentive to layer well. Just tell everyone to use the very fine JS API, after all. Why should anyone want more? (hint: the graph above). Instead of opening doors, though, it's mostly burden. Everything you have to do from script is expensive and slow and prone to all sorts of visual and accessibility problems by default. If the browser can provide common UI and interaction for the scenario, isn't that better most of the time? Just imagine how much easier it would be to build an app if the initial cut at location information had been `<input type="location">` instead of the Geolocation API we have now. True, that input element would need lots of configuration flags and, eventually, a fine-grained API...if only there were a way to put an API onto an HTML element type...hrm...
In contrast, if we go declarative-only, we get a lot of the web platform today: fine at first but horrible to work with over time, prone to attracting API barnacles to fill perceived gaps, and never quite enough. The need for that API keeps coming back to haunt us. We're gonna need both sides, markup and imperative, sooner or later, so a framework for thinking about what that might look like seems in order.

Our adventure in excavation with Web Components has largely been a success, not because we're looking to "kernelize the browser" in JS -- good or bad, that's an idea with serious reality-hostile properties as soon as you add a network -- but because when you force yourself to think about what's already down there as an API designer, you start making connections, finding the bits that are latent in the platform and should be used to explain more of the high-level things in terms of fewer, more powerful primitives at the next layer down. This isn't a manifesto for writing the whole world in JS; it's a reasonable and practical approach for how to succeed by starting high and working backwards from the 80% use-case to something that eventually has most of the flexibility and power that high-end users crave.
The concrete steps are:
- Introduce new platform capabilities with high-level, declarative forms. I.e., invent new tags and attributes. DOM gives you an API for free when you do it that way. Everyone's a winner (see the sketch after this list).
- When the thing you want feels like something that's already "down there" somewhere, try to explain the bits that already exist in markup in terms of a lower-level JS or markup primitive. If you can't do that or you think your new API has no connection to markup, go back to step 1 and start again.
- When it feels like you're inventing new language primitives in DOM just to get around JS language limitations, extend the language, not the API.
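To ground step one, consider an element that already works this way (nothing below is new API):

```js
// Markup first: <video> gives non-programmers the feature, and the parser
// mints a JS object, so the API comes along for free.
var video = document.createElement("video");
video.src = "movie.webm"; // reflects the content attribute
video.controls = true;    // browser-provided UI, no script required
document.body.appendChild(video);
video.play();             // ...and a fine-grained imperative layer when needed
```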
The trash truck just came by, which means it's 6AM here in almost-sunny London. WordPress is likewise telling me that I'm dangerously out of column-inches, so I guess I'll see if I can't get a last hour or two of sleep before the weekend is truly over. The arguments here may not be well presented, and they are subtle, but layering matters. We don't have enough of it, and when done well, it can be a powerful tool in ending the battle between imperative and declarative. I'll make the case some other time for why custom element names are a good idea, but consider it in the layered context: if I could subclass an HTML element, why shouldn't the parser hand me back instances of my own types?
Cognitive dissonance, ahoy! You're welcome ;-)
Note: this post has evolved in the several days since its initial posting, thanks largely to feedback from Annie Sullivan and Steve Souders. But it's not their fault. I promise.
This is just the top of the backlog...there's stuff still in my brain-bin from TPAC, but a couple of quick items worthy of a collective pip-pip!:
- MSFT Moves To More Aggressive IE Updates: Automatic updates work, and in conjunction with shorter development cycles they enable the process of progress to move faster. Kudos to MSFT and the IE team for taking a concrete step in the right direction.
- Jeff Jaffe got the message. He continues to say tone-deaf things like "the W3C is a volunteer organization", thereby ignoring the cause of the prefix squatters' discontent from the perspective of standards: that there is a good-faith way to act, that Apple has run afoul of it by not plumping for specs for their features, and that those shipping common code have been burned by unwittingly shipping features which have been found to undermine the good faith of the process. But at least he has said that speeding up the process is a key item for action this year at the W3C. It's not much, but it deserves to be noted as a refreshing bit of honest self-assessment.
As the over-heated CSS vendor prefix debate rages, I can't help but note the mounting pile of logical fallacies and downright poor reasoning being deployed. Some context is in order.
Your Moment Of Zen
The backdrop to this debate is that CSS is absolutely the worst, least productive part of the web platform. Apps teams at Google are fond of citing the meme that "CSS is good for documents, but not for apps". I push back on this, noting the myriad ways in which CSS is abysmal for documents. That isn't to minimize the pain it causes when building apps; it's just that the common wisdom holds that CSS surely must be fantastic for somebody else. Once we find that person, I'll be sure to let you know. In the meantime, we should contemplate how very, very far behind the web platform is in making it delightful to build the sorts of things that are work-a-day in native environments.
But it's worse than simply being under-powered: CSS has the weakest escape hatches to imperative code and demands the most world-view contortion to understand its various layout modes and their interactions. Imagining a more congruent system isn't hard -- there are many in the literature; might I humbly suggest now as a good time to read Badros & Borning? -- and even CSS could be repaired were we able to iterate quickly enough. Things have been moving faster lately, but fast enough to catch up with the yawning gap in platform capabilities? We'll come back to the speed/scale of change later.
For now, consider that the debate (as captured by Nate Vaughn) is about the retrospective implications of the few things that have already gotten better for some set of developers in some situations. That this sort of meaningful progress (corners, gradients, animations, transitions, flexing boxes, etc.) is rare makes the debate all the more bone-chilling to me. We finally got a few of the long list of things we've been needing for a decade or more, and now, because the future is unevenly distributed, we're about to blow up the process that enabled even that modicum of progress? How is it we're extrapolating causality about engine usage from this unevenness, anyhow? None of this is obvious to me. The credible possibility of losing prefixes as a way to launch-and-iterate is mind-exploding when you realize that the salient competition isn't other browsers, it's other platforms. Either the proponents of squatting other vendors' prefixes haven't thought this through very hard or they're bad strategists on behalf of the web as a platform...not to mention their own products. The analysis that we're being asked to accept rests on an entire series of poor arguments. Let's start with the...
In an interview out yesterday with Eric Meyer, Tantek Çelik of Mozilla tried to present this debate as a question of barriers to the adoption of non-WebKit-based browsers, specifically Firefox Mobile. Opera has made a similar case. What they omit is that the only platforms where they can credibly ship such browsers are Android and S60 (a dead end). That's a large (and growing) chunk of the world's handsets -- good news for me, as I now work on Chrome for Android here in London -- but for whatever reason it appears that iOS users surf a whole lot more.
Let that sink in: on the devices that are the source of most mobile web traffic today, it's not even possible to install a browser based on a different engine, at least not without a proxy architecture like the one used in the excellent Opera Mini or Amazon's Silk. iOS and Windows Phone are both locked-down platforms that come with only one choice of engine (if not browser shell). When folks from the vendors who want to appropriate others' prefixes talk about "not being able to compete", remember that competition isn't even an option for the most active users of mobile browsers. And it's prefixes that are keeping us down? We must go deeper.
The tension comes into focus when we talk in terms of conversion, retention, and attrition. Conversions are users who, if able, switch to a new product from an old one. Retention is a measure of how many of those users continue to use the product after some period of time. Today (and since Windows first included a browser), the single largest factor in the conversion of users to new browsers is distribution with an OS. This is the entire rationale behind the EU's browser choice UI, mandated on first start of new Windows installs.

Attrition is the rate at which users stop choosing to use your product day after day, and for most desktop installed software, attrition is shockingly high after 3 to 6 months. The attrition rate is usually measured by cohorts over time: users who installed in the same day/week/month are tracked to see what % of that group continues to use the product over increasingly long periods of time. The rate of decay falls, but the overall attrition invariably continues to rise. You might not get un-installed, but that doesn't mean you'll still be the go-to icon on the user's home screen or desktop. Eventually every device is recycled, wiped, or left for history in a drafty warehouse, and along with it, your software.

A key factor in getting attrition under control for Chrome has been evangelism to improve site compatibility, e.g. "I'm not using your browser because my bank website doesn't work with it". That argument -- that site compatibility is key to ensuring a competitive landscape for what otherwise are substitutes -- puts the entire thing in some perspective. Attrition isn't the same thing as conversion, and conversion is driven primarily by integrations and advertising. Implicit in the arguments by Tantek and others is a sub-argument of the form:
Our product would have measurably more users if sites were more compatible.
Thanks to what we know about what drives conversions, in the short run this is simply false. Long term, what invariably gives you more users is starting with more users. The set of things that are most effective at convincing users to swap browsers, even for a day, includes: advertising, word-of-mouth, a superior product, distribution/bundling, and developer advocacy. Depressingly, only one of those involves actually being a better product, and the prerequisite for all of them is the ability to switch (thanks, Android team!). There's a similar dynamic at work when doing advocacy to web developers: if you're nowhere in their browser stats, they're adding support for a standard or, worse, a second prefix in order to do service to a cause, not because it's measurably good for them. Clearly, that's going to be somewhat less effective. Where, then, is the multi-million dollar advertising campaign for Fennec? The carrier and OEM rev-share deals for bundling on new Android phones? Hmm. To hear Tantek et al. tell it, non-WebKit-based browsers would be prevalent on mobile if only it weren't for those damned browser prefixes causing users of other browsers to receive different experiences! Also, those kids and that damned dog.
Over the long haul, compatibility can have a dramatic effect on the rate of attrition by changing the slope of the curve -- which, remember, is a rate with decay and not a single % -- but it raises the next uncomfortable question: what do we mean by "compatibility" here? What sorts of incompatibility cause attrition? Is it content that looks slightly worse but still essentially works (think grey PNG backgrounds on IE6), or content that simply turns you away, does not allow you to play in any way, and generally fails (think the ActiveX dependencies of yore)?
Inaccessible or Ugly?
Eric was good enough to call out what I view as a key point in this debate: what sort of "broken" are we talking about? Tantek responded with a link to side-by-side screenshots of various properties rendered on Chrome for Android's Beta and current Fennec. In some of these cases we may be looking at Fennec bugs. WordPress.com serves the same content to Fennec, which seems to bork what `float: left;` means. That, or some media query is preventing the main blocks from being floated; it's hard to tell which from a quick `view-source:`. For the versions of google.* included in the list, the front end is simply serving the desktop version to Fennec, which makes the wonky button rendering even stranger. Is there room to improve what gets sent to Fennec? You bet, but that's not what's being argued in the main. Ask yourself this: is what you see on that page worth destroying the prefix system for? 'Cause that's what the advocates of prefix-squatting would have you believe. In effect, they're suggesting that nothing will cause developers to test on non-pervasive engines, a deeply fascinating assertion. Even if we accept it, it doesn't point clearly to a single tactic to resolve the tension. It certainly doesn't argue most strongly for prefix-squatting.
An important point Eric failed to follow up on was Tantek's assertion that Mozilla will be cloaking user-agent strings. Does he imagine that the only thing that might cause someone to send different content is CSS support? API support for things like touch events differs, the performance characteristics of device classes and browsers vary wildly, and application developers are keen to deliver known-good, focused experiences. The endless saga of `position: fixed;` as plumbed by Google teams and others is a story of competing constraints: browser vendors optimize for content, content fights back. Repeat. What does Mozilla imagine is going to happen here? Maintained content will react to the browser usage of end-users (and as we've covered, compat != conversions). Unmaintained content, well, that's what fallback is all about. And bad things deserve to lose. Assuming that your browser is 100% compatible with developer expectations and testing if you only switch the UA and implement a few prefixes is NPAPI-level crazy all over again, and it's entirely avoidable. Tantek and Brendan, of all people, should be able to reason that far. I guess they'll find out soon enough -- although we will have all been made poorer for it.
Now, what about the related argument that Mozilla & Co. are only going to be doing this for properties which are "stable" (never mind their actual standardization status)? The argument says that because something hasn't changed in another vendor's prefixed version in a while, it must be safe to squat on. Not only is this (again) incredibly short-sighted, it says that instead of forcing a standard over the line and clobbering both developers and other vendors with the dreaded label of "proprietary" (the usual and effective tactic in this situation), they're willing to claim ownership of -- and therefore blame for -- the spread of this soon-to-be-proprietary stuff, all the while punting on having an independent opinion about how the features should be structured and giving up on the standards process...and all for what?
Product vs. Platform
Perhaps there wasn't space in Tantek's interview with Eric, but both of them chose not to be introspective about the causes of WebKit's use in so many mobile browsers, with Tantek merely flagging the use of a single engine by multiple products as "a warning sign." But a warning of what, exactly? Eric didn't challenge him on this point, but I sorely wish he had. Why did Safari, the Android Browser, Chrome, Silk, BlackBerry, and many others all pick WebKit as the basis for their mobile browsers?
WebKit isn't a browser. It's just not. To make a browser based on WebKit, one must bring along at least the following bits of infrastructure, which WebKit treats as pluggable:
- Caches of some sort
- Graphics rendering
- A build system
- POSIX or other platform plumbing
What we're witnessing isn't open vs. closed, it's differences in initial cost of adoption. In JS terms, it's jQuery (focused core, plugins for everything else) vs. Sencha or Dojo (kitchen sink). Entirely different target users, and both will find their place. Nobody should be shocked to see smaller, more focused bits of code with good plugin characteristics spreading as the basis for new projects. The Mozilla Foundation wants to help prevent monoculture? In addition to making the Firefox product a success, there are concrete engineering things they can do to make Gecko more attractive to the next embedder, Firefox-branded or not. I haven't heard of progress or prioritization along those lines, but I'm out of the loop; perhaps such an effort is underway, and if so, I applaud it. Whatever the future for Gecko, product success isn't related to platform success as a first approximation. Having a good, portable, pluggable core increases the odds of success in evolutionary terms, but it's absolutely not determinant; see MSIE.
Speaking of IE...I respect those guys a lot, but the logical leap they're asking us to swallow is that the reason people return Windows Mobile phones is that some CSS doesn't work. That's what attrition means on a platform where they're the only native runtime. Data would change my mind, but it's a hell of a lot to accept without proof.
The Time Component
Let's take a step back and consider Tantek's claim that Mozilla has gotten very little traction in evangelizing multi-prefix or prefix-free development for the past year: Firefox for Android has been available since Oct. 2010 and stable for just 6 months. Opera Mobile on Android has been stable for just over a year. IE 9 (the only IE for mobile you could ever seriously consider not serving fallback experiences to) only appeared with Windows Phone 7.5 (aka "Mango"), shipped to users an entire 6 months ago.
And we expect webdevs to have updated all their (maintained) content? Never mind the tenuous correlation between the sorts of soft incompatibilities we're seeing at the hands of CSS and user attrition; the argument that even this lesser form of harm hasn't been blunted by evangelism appears suspect. Taking the incompatibilities seriously, I can quickly think of several other measures which are preferable to destroying the positive incentives towards standardization the prefix system creates (from least to most drastic):
- Continued evangelism to web developers with particular focus on major sites
- Political pressure on browser vendors to start dropping prefixes (i.e., we'd all be equally disadvantaged until users pick up the standard version)
- UA spoofing without prefix squatting
- Blacklists to trigger alternative identity (UA/prefixes) on a subset of sites
All of these are less blow-up-the-world than what MSFT, Mozilla, and Opera are proposing. It's not even an exhaustive list; I'm sure you can think of more. Why these have been either ignored or dismissed remains a mystery.
It's More Complicated Than That
In all of this, we're faced with an existential question: what right do web developers have to shoot themselves in the foot? Is there a limit to that right? What sort of damage do we allow when some aspect of the system fails or falls out of kilter for some period of time? It's a question with interesting parallels in other walks of life (for a flavor, substitute "web developers" with "banks").
Can we show active harm to other browsers from the use of prefixes? The data is at best unclear. Arguing that any harm rises to a level that would justify destroying the prefix system entirely is rash. I argued many of the reasons for this in my last post, but let's assume in our mental model that developers respond to incentives in some measure. If, concurrently with achieving as-yet-unmanaged distribution, Mozilla et al. implement others' prefixes, what should we expect developers to do in response? After all, they will have reduced whatever tension might have been created by content that "looked wonky" and, where standards exist, will have reduced the incentive to switch to the standard version.
Now let's play the model one more turn of the wheel forward: assume that Chrome or Safari (or both!) act in good faith and contemplate removing `-webkit-*` prefix support for standardized features at a quick clip...and Mozilla doesn't. You see how this quickly leads to a Mexican standoff: web developers won't stop using prefixed versions because those are the way you get 100% support (thanks to Mozilla's support for them); vendors won't un-prefix things because others who squat their prefixes will then have a compatibility advantage; and nobody will be keen to add new things behind prefixes because they can no longer be assumed to be experiments that can change. Lose, lose, lose.
Some on the other side of the debate are keen to cite game theory as a support for their course of action, but the only conclusion I can draw is that their analysis must be predicated on a set of user and developer motivations that are entirely unlike the observable world we inhabit.
A Call To Reason, Not Arms
Based on a better understanding of the landscape, what should the various parties do to make the world better for themselves, now and in the long run, and for the web as a platform?
- Web Devs: first, do no harm; test in multiple runtimes, pointedly including a "fallback". Then enhance with prefixes. Do not apologize for giving some (or even many) of your users a better experience. That, after all, is your job. But know this: prefixed properties are not supported, will go away, and when something you didn't test the fallback for falls over, it's your fault.
- Browser Vendors: invest in advocacy and distribution-enhancing moves for your product before threatening to blow up effective standards policies. If you're going to implement a prefixed version, please have a different opinion or push to ram a standard through to Recommendation ASAP. Incompatible right-hand sides help developers understand that things are still evolving. DO NOT squat on prefixes. It's both relatively ineffective and will make developers' lives harder when they legitimately want to move to the standard or support your prefixes.
- Vendor CSS WG Reps: get it through your heads, you're behind. It's not quaint and it's not excusable. The platform needs more powerful CSS features, and stat. It's long past time to start stealing good ideas from the pre-processors. Appeals to a lack of manpower to implement must never block others and shouldn't block standardization, so please stop making them. If you care about the platform's success, let those who are able and willing to take risks do so.
- The CSS WG (as a whole): get the lead out. It's not exclusively the W3C's fault that things are slow, but the current MTTR (Mean Time To Recommendation) is still glacial. It is unreasonable to expect vendors to drop prefix support immediately upon standardization, but the W3C has a role here to advocate for quick sunsetting. Daniel Glazman is, as ever, right on most of this, but more can be done to streamline the process post-CR.
- The WebKit Project: Add build flags to allow WebKit-based products to enable/disable vendor prefix support independently.
- Chrome/Safari/Other-prefix-supporting-browsers: Sunset prefixes as soon as is practicable post-standardization. Similarly, don't ship prefixed features you're not willing to be on the hook for via your reps to the CSS WG. Disabling them may be painful, but it's the only good-faith thing to do.
I've left a lot out of this post, but it's too long already. I do truly hope it's the last I write on prefixes because, as I said up front, we have much bigger fish to fry. Stat. Prefixes do work, they're essential to delivering a future platform that can compete, and yes, they should be torn down at the same pace they're erected.
A few things that folks have asked about as tangents to this debate:
- It's never a good thing for there to be homogeneity in the experimentation phase. The explicit goal of the prefixes system is to enable diversity of early opinion and fast coalescing around the best answer, thereby enabling the writing of a standard which is likely to need less revising and iteration. Diversity provides some value, the market tests the alternatives, and we deliver the most value we can over time through the standard version. It has always been thus, but prefixes make it less risky...assuming we don't start stepping on everyone else's toes.
- If the reasoning behind prefixes is to set up and tear down large-scale experiments, iterate, and collect feedback, then Lea's -prefix-free approach and PPK's `-beta-*` proposals are equally counter-productive and should be avoided at all costs. Making prefixes less painful to use reduces the natural incentives for migrating to a standard, while blindly assuming the same right-hand side for a future standard version as we have for some prefixed versions is plainly idiotic. What were they thinking?
- `@-vendor-unlock` is only slightly smarter, but in every possible way inferior to CSS Mixins. Would that the WG spent as much time on Mixins as they have on this prefix kerfuffle.
- Yes, I was in Paris when the CSS WG F2F was happening. No, I wasn't at the meetings. Duty (Chrome for Android) called.
- If you've read this far, congrats. You may be the only one. I've been assured by CSS WG delegates that nobody cares what I think, which statistically seems to just be rounding down by a tiny bit. Fair enough.
Update: Michael Mullany of Sencha adds epically good, epically long context about what causes developers to target UAs and what the incentives that'll change their minds about supporting a given browser really are.
Thanks to Frances Berriman, Jake Archibald, Tony Gentilcore, and Tab Atkins for reviewing earlier versions of this post. Errors of fact and form are all mine, however. Updated occasionally for clarity and to remove typos.
tl;dr version: Henri Sivonen's arguments against vendor prefixing for CSS properties focus on harm without considering value, which in turn has caused him to come to a non-sensical set of conclusions and recommendations. Progress is a process, and vendor prefixes have been critical in accelerating that process for CSS.
For a while now I've been hearing the meme resurface from CSS standards folks and a few implementers that "vendor prefixes have failed". I'd assumed this was either a (bad) joke or that it was one of those things that web developers would scoff at loudly enough to turn the meme recessive. I was wrong.
Henri Sivonen, Mozilla hacker extraordinaire, has made the case directly and at length. Daniel Glazman, co-chair of the CSS WG, posted a point-by-point response. If you have the patience, you should read both.
Lost in the debate between "browser people" and "spec people" is the essential nature of what has happened with prefixes: they worked. From the perspective of a web developer, any first approximation of the history of vendor prefixes is pure win, even if only a fraction of the value that has been delivered behind them is attributable to prefixes un-blocking vendors from taking risks and shipping early.
Daniel's rebuttal to Henri gets a lot of things right, but he gives in on an essential point: by agreeing with Henri that vendor prefixes are "hurting web authors", he writes off the benefits that they've delivered -- namely the ability of vendors to get things out to devs in a provisional way that has good fallback and future-proofing properties, and the ability for devs to build with/for the future in an opt-in, degradable way.
Rounded corners, gradients, animations, flex box, etc. are all design and experience enablers that developers have been able to take advantage of while waiting for the standards dust to settle, and thanks to W3C process, it takes a LONG time to settle. Yes, that has some costs associated with it. Henri is very worried that browsers that aren't keeping up quickly will be "left behind" by webdevs who use only one vendor's prefix. But surely that's a lesser harm than not getting new features and not having the ability to iterate. And it provides incentive for the following browsers to try to make a standard happen. What's not to love? More to the point, I just don't believe that this is a serious problem in practice. What front-ender in 2011 doesn't test on at least two browsers? Yes, yes, I'm sure such a retrograde creature exists, but they were going to be making non-portable content regardless of prefixes. Assuming you're testing fallback at all (e.g., by testing on more than one browser), prefixes not appearing in some browser are just the fallback case. CSS FTW! Webdevs who don't test on more than one browser...well, they're the ones hanging the noose around the neck of their own apps. Vendor prefixes no more enable this stupidity than the existence of the `User-Agent` header does. Compatibility is a joint responsibility, and the best each side (browser, webdev) can hope of the other is some respect and some competence. Cherry-picking egregious examples and claiming "it's hurting the web" seems, at a minimum, premature.
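The mechanism behind that fallback is CSS's forgiving parsing: engines silently drop declarations they don't recognize. A minimal sketch, poking at it from script:

```js
// Stack prefixed forms ahead of the standard one; each engine keeps only
// the declarations it understands and drops the rest on the floor.
var el = document.createElement("div");
el.style.cssText = "-webkit-border-radius: 8px;" + // WebKit-era engines
                   "-moz-border-radius: 8px;"    + // Gecko-era engines
                   "border-radius: 8px;";          // the standard, listed last
// In any given browser, el.style now holds only the understood subset.
```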
And how did we think we'd get a good standard, anyway? By sitting in a room in a conference center more often and thinking about it harder? Waiting on a handful of early adopters to try something out in a tech demo and never stress it in practice? That's not a market test (see: XHTML2), it doesn't expose developers to the opportunities and tradeoffs that come with a new feature, and it doesn't do anything to address the inevitable need to integrate feedback at some point.
Yes, we could go with Henri's suggestion that the first person to ship wins by default, never iterate on any designs, and avoid any/all first-mover-disadvantage situations, but who among the browser vendors is perfect? And what would the predictable consequences be? I can only assume that Henri thinks we'll end up in a situation where vendors coordinate with the CSS WG early to add new stuff, design things more-or-less in the open, and only ship features to stable (no flag) when they're sure of their design. That could happen at the limit, but I doubt it. Instead, the already fraught process of adding new features to the platform will be attempted by even fewer engineers. Who wants the responsibility for having to be perfect lest you screw the web over entirely? Fuck that noise, I'm gonna go work on a new database back-end or tune something to go faster.

Browsers are made by smart people who have a choice of things to be working on, and any time you see a new platform feature, it probably came about as the result of an engineer taking a risk. Many times the engineers in a position to take those risks don't have a great sense for what good, idiomatic web platform features might look like, so they'll need to tweak and iterate based on feedback. And feedback is painfully hard to extract from webdevs unless you've made something available in a tangible way such that they can use it and discover the limitations. Shipping things only to dev channels is perhaps a good idea for other aspects of the platform where we can't count on CSS's forgiving parsing behavior (the basis for prefixes); syntax changes for JS and CSS seem like good examples. But for features that are primarily new CSS properties? Oy. Raising the stakes and reducing the ability to get feedback and iterate isn't going to lead to a harmonious world of good, fast standards creation. It's going to predictably reduce the amount of new awesome that shows up in the platform.
Prefixes are an enabler in allowing the necessary process of use, iteration, and consensus building to take place. Want fewer messes? There's an easy way to achieve that: try less stuff, add fewer features, and make each one more risky to add. That's Henri's prescription, whether or not he knows it, and the predictable result is a lower rate of progress -- advocating this sort of thing is much worse for the web and for developers than any of the harm that either Henri or Daniel perceive.
Which brings me to Henri's undifferentiated view of harm. His post doesn't acknowledge the good being done by prefixed implementations -- I get the sense he doesn't build apps with this stuff or it'd be obvious how valuable prefixed implementations are for work-a-day web app building -- instead focusing on how various aspects of the process of prefixed competition can be negative. So what? Everything worth having costs something. Saying that things "hurt the web" or "hurt web developers" without talking in terms of relative harm is just throwing up a rhetorical smoke screen to hide behind. If you focus only on the costs but write the benefits out of the story, of course the conclusion will be negative. In many cases, the costs that Henri points out are correctly aligned with getting to a better world: having to type things out many times sucks, creating demand among webdevs for there to be a single, standardized winner. Having multiple implementations in your engine sucks, creating demand from vendors to settle the question and get the standards-based solution out to users quickly. Those are good incentives, driven by prices that are low but add up over time in ways that encourage a good outcome: a single standard implemented widely.
And as Sylvain Galineau pointed out, what looks like pure cost to one party might be huge value to another. I think there's a lot of that going on here, and we shouldn't let it go un-contextualized. The things that Henri sees as downsides are the predictable, relatively minor costs inherent in a process that allows us to make progress faster and distribute the benefits quickly, all while minimizing the harm. That he's not paying the price for not having features available to build with doesn't mean those opportunity costs aren't real and aren't being borne by webdevs every day. Being able to kill table- and image-based hacks for rounded corners is providing HUGE value, well ahead of the spec. Same for gradients, transitions, and all the rest. Anyone calling prefixed implementations in the wild a bad thing needs to argue that the harm is greater than all of that value. I don't think Henri could make that case, nor has he tried.
I think the thing that most shocks me about Henri's point of view is that he's arguing against a process when in fact the motivating examples (transforms, gradients) have been sub-optimal in exactly the better-than-before ways we might have hoped for! Gradients, for example, saw a lot of changes, and browsers had different ideas about what the syntax should be. Yes, it's harder to get a consistent result when you're trying to triangulate multiple competing syntaxes, but we got to use this stuff, get our hands dirty, and get most of the benefits of the feature while the dust settled. Huzzah! This is exactly the way a functioning market figures out what's good! Prefixes help developers understand that stuff can and will change, and they clear the way for competition of ideas without burdening the eventual feature's users with legacy baggage tied to a single identifier.
So what about the argument that there might be content that doesn't (quickly?) adopt the non-prefixed version, or that vendors can't remove their prefixed implementations because content depends on it?
To the first, I say: show me a world where 90+% of users have browsers that support the standard feature and I'll show you a world in which nobody (functionally) continues to include prefixes. That process is gated in part by the WG's ability to agree to a spec, and here I think there's real opportunity for the CSS WG to go faster. The glacial pace of the CSS WG in getting things to a final, ratified spec is in part due to an amazingly drawn-out W3C process, and in part a cultural decision on the part of the WG members to go slow. My view is that they should be questioning both of these and working to change them, not blaming prefixes for whatever messes are created in the interim.
As for removing prefixes, this is about vendors just doing it, and quickly. But the definition of "quickly" matters here. My view is that the removal process should be given as much time, measured from standardization, as elapsed between the introduction of the prefixed version and the finished spec. So if Opera adds an amazing feature behind an `-o-` prefix in early 2012 and the standard is finalized in 2014, the deprecation and eventual removal should be expected to take two years (i.e., until 2016). This has the nice symmetry of incentives that punish the WG for going slow (want to kill prefixed impls? get the standard done) while allowing the vendors who took the biggest risks to provide the softest landings for their users. And it doesn't require that we simply go all-in on the first design to ship. Yes, there will be mounting pressure to get something done, but that's good too!
The standards process needs to lag implementations, which means that we need spaces for implementations to lead in. CSS vendor prefixes are one of the few shining examples of this working in practice. It's short-term thinking in the extreme to flag the costs associated with them as justifying their removal, or even to suggest that the costs are too high.
And webdevs, always be skeptical when someone working on an implementation or a spec tells you that something is "hurting the web" when your experience tells you otherwise. The process of progress needs more ways to effectively gauge webdev interest, collect feedback, and test ideas. Not fewer or narrower channels.