Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Doing Science On The Web

Cross-posted at Medium

This post is about vendor prefixes, why they didn’t work, and why it's toxic not to be able to launch experimental features. Also, what to do about it.

Premature Compatibility

Vendor prefixes are a very sore topic, and one where I’ve disagreed with the overwhelming consensus. In the heat of the ‘11–12 debate ("prefixpocalypse") I tried to outline a hierarchy of needs for the web platform:

  1. Meeting developer & user experience needs with new features
  2. Eventual interoperability for successful features
  3. Minimizing harm to the ecosystem from experiments-gone-wrong

The debate and subsequent (conflicting) prohibitions & advice centered on the third point: minimizing pollution.

Recall that in 2012, Google, Apple, BlackBerry, and a host of other vendors were all shipping browsers based on a single CSS engine (WebKit) without changing the -webkit- prefixes to be vendor-specific; e.g. -cr-, -apple-, or -bb-.

As a result, many experimental features experienced premature compatibility. Developers could get the benefits of broad feature support without a corresponding standard.

This backed non-WebKit-based browsers into a terrible choice: "camp" on the other vendor’s prefixed behavior to render content correctly or suffer a loss of user and developer loyalty.

This illustrates what happens when experiments inadvertently become critical infrastructure. It has happened before. Over, and over, and over again.

Prefixes were supposed to allow experimentation while discouraging misuse, but in practice they don’t. Prefixes "look" ugly, and the thought was that this ugliness, combined with web developers’ aversion to proprietary gunk, would cause sites to stop using them once standards were in place and browsers implemented them. But that’s not what happens.

Useful features that live a long time in the "experimental" phase tend to get “burned in”, particularly if the browsers supporting them are widely used. Breaking existing content is the third rail for browsers; all of their product instincts and incentives keep them from doing it, even if the breakage comes from retracting proprietary features. This means that many prefixed properties continue to work long after standard versions are added. Likewise, sites and pages that work with prefixes are all too easy for web developers to write and abandon. It’s unsettling to remove a prefix when doing so might break the site for a user on an old browser. Maintenance of both sites and browsers rarely subtracts, but the theory of prefixes hinges on subtraction.
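To see why removal is so hard, consider the sort of detection code sites shipped during the prefix era. This is an illustrative sketch, not code from any particular site; the pattern is what matters:

    // Probe for the first supported property name. Code like this gets
    // written once and rarely revisited.
    function firstSupportedProperty(candidates) {
      const probe = document.createElement('div').style;
      for (const name of candidates) {
        probe.setProperty(name, 'rotate(45deg)');
        if (probe.getPropertyValue(name) !== '') {
          return name;
        }
      }
      return null;
    }

    // In a prefix-era WebKit browser this answers '-webkit-transform'.
    // Removing the prefix from the engine now breaks the site, and
    // removing it from the site risks users on old browsers.
    const prop = firstSupportedProperty(['transform', '-webkit-transform']);
    const el = document.querySelector('.spinner');
    if (prop && el) {
      el.style.setProperty(prop, 'rotate(45deg)');
    }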

Everyone who uses prefixes, browser engineers and web developers alike, starts down the path thinking they’ll stop at some point. But for predictable reasons, that isn’t what happens. Good intentions are not an effective prophylactic. Not for web developers or browser makers (to say nothing of amorous teens).

This situation is the natural consequence of platform and developer time-scales that are out of sync. Browsers move more slowly than sites (at the micro scale), but sites must contend with huge browser diversity and are therefore much more conservative about removing "working" code than browser engineers expected.

Now What?

Years after Prefixpocalypse, everyone who works on a browser understands that prefixes haven’t succeeded in minimizing harm, yet vendors proudly announce new prefixed features and developers blithely (ab)use them.

Clearly, a need for new features trumps interoperability and pollution concerns. This is natural, and perhaps even healthy. A static web, one which doesn’t do more to make lives better, is one that doesn’t deserve to thrive and grow. In technology, as in life, there is no stasis, only various speeds of growth or decay.

Browsers could stop prefix pollution by simply vowing not to add features. This neatly diagnoses the problem (some experiments don’t work out, and some get out of hand) and proposes a solution (no experimentation), but as H.L. Mencken famously wrote:

…there is always a well-known solution to every human problem — neat, plausible, and wrong.

We have already run a natural experiment in this area. At the low point after the first browser war, Microsoft (temporarily) shrank from the challenge of building the web into a platform. Meanwhile, IE 6’s momentum assured its place as the boat-anchor browser. Between 2002 and 2006, the web (roughly) didn’t add any new features. Was that better? Not hardly. I’m glad to be done with 9-table-cell image hacks to accomplish rounded corners. Not all change is progress, but without change there is no progress.

Or, put better by W3C Memes:

One does not simply ship no new features for a year and remain competitive.

The web does need new features, and we’d like good versions of them — fewer document.alls, WebSQLs and AppCaches, thanks.

We know from experience developing software of all kinds that more iteration yields better results. Experimentation, chances to learn, and opportunities to try alternatives are what separate good ideas from great products. Members of the Google Gears team report that they considered building something like Service Workers. Instead, they built an AppCache-style system that failed in all the ways AppCache did (which they couldn’t have known at the time). It shouldn’t have taken 6+ years to course-correct. We need to be able to experiment and iterate. Now that we understand the problems with prefixes, we need another mechanism.

Experiments That Stay Experiments

Prefixpocalypse happened because experiments escaped the lab. Wide-scale use of experimental properties isn’t healthy. Because prefixed properties were available to every site, no matter how large, the killer combination of broad browser support and major-site usage ensured that compatibility pressure would work against ever ending the experiment. The key to doing better, then, is to limit the size of the experimental population.

The way prefixes were run was like making a new drug available over the counter as soon as a promising early trial was conducted, skipping animal, human, and large-scale clinical trials. Of course that would be ludicrous; "first do no harm" requires starting with a small population, showing efficacy, gathering data about side effects, and iterating.

The missing ingredient has been the ability to limit the experimental population.

Experiments can run for a fixed duration without fear of breaking the web if we can be sure they never imperil the whole web in the first place. Short durations and small, committed test populations allow for more iteration, which should, in the end, lead to better features.

Web developer feedback needs to be the most important voice in the standards process, and we’ll never get there until web developers have more ways to participate in feature evolution.

Experimental outcomes are ammo for the standards development process; in the best-case they can provide good evidence that a feature is both needed and well-designed.

Putting evidence at the core of web feature and standards development is a 180° change from the current M.O., but one we sorely need.

So how do we get there?

I’ve thought through and rejected several mechanisms along the way. The Chrome Team has been thinking about this problem for the past several years too, including in conversations with other vendors, and those ideas have congealed into a few interlocking mechanisms that haven’t been rejected:

  1. Developer registration & usage keys.
    A large part of the reason it’s difficult to change developer behavior about use of experimental features is that it’s hard to find them!
    Who would you call to talk about use of some prefixed CSS thing on facebook.com? I don’t know either. Having an open communication channel is critical to learning how features are working (or not) in the real world. To that end, new experimental features will be tied to specific origins using keys vended by a developer program; sites supply the keys to the browser through header/meta tags, enabling the features dynamically (a sketch of the opt-in flow follows this list). Registration for the program will probably require giving a (valid) email address and agreeing to answer survey questions about experimental features. Because of auto-self-destruct (see below), there’s less worry that these experiments will be abused to provide proprietary features to "preferred" origins. Public dashboards of running experiments and users will ensure transparency to this effect.
  2. Global usage caps.
    The Blink project generally uses a ~0.03% usage threshold to decide if it’s plausible to remove a feature. Experimenters might use our Use Counter infrastructure and RAPPOR to monitor use. Any feature that breaches this threshold can automatically close the experiment to new users and, if any individual site goes above ~0.01% (global) use, a config update can be pushed to throttle use on that site.
  3. Feature auto-self-destruct.
    Experimental features should be backed by a process that’s trying to learn. To enable this, we’re going to ensure that each version of an experimental feature auto-self-destructs, tentatively set at 12–18 weeks per experiment. New iterations which are designed to test some theory can be launched once an experiment has finished (but must have some API or semantic difference, preferably breaking). Sites that want to opt into the next experiment and were part of a previous group will be asked survey questions in the key-update process (which is probably going to be a requirement for access to future experimental versions). Experiments can overlap to provide continuity for end-users who are willing to move to the next-best-guess and provide feedback.
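To make the moving pieces concrete, here is a minimal sketch of how a site might opt into an experiment under this scheme. Every specific name in it (the meta tag, the token format, and a navigator.geofencing API) is invented for illustration; the real shapes will be worked out as the program evolves:

    // Hypothetical opt-in: supply the origin's key from the developer
    // program so the browser can enable the feature for this origin only.
    const meta = document.createElement('meta');
    meta.httpEquiv = 'experimental-feature';  // invented tag name
    meta.content = 'geofencing; key=KEY_FROM_DEVELOPER_PROGRAM';
    document.head.appendChild(meta);

    // Always feature-detect: the experiment may have auto-self-destructed,
    // hit the global usage cap, or been throttled for this site since the
    // key was issued.
    if ('geofencing' in navigator) {
      // Use the experimental API; expect survey questions at key renewal.
    } else {
      // Fall back to the non-experimental path.
    }

The important property is the fallback branch: because any experiment can end at any time, sites can never treat an experimental feature as guaranteed infrastructure.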

We’re also going to work to ensure that the surfaced APIs are done in a responsible way, including feature-detection where possible. These properties add up to a solution that gives us confidence that we can create Ctrl-Z for web features without damaging users or sites.

In discussions with our friends in the community and at other browser vendors we’ve thought through alternative ways to throttle or shrink the experimental population: randomness in API names, limiting APIs to postMessage style calling, or shortening experiment lifetimes. As Chrome is going first, we’ll be iterating on the experimental framework to try to strike the right balance that allows enough use to learn from but not so much that we inadvertently commit to an API. We'll also be sharing what we learn.

My hope is that other browsers implement similar programs and, as a corollary, cease use of prefixes. If they do, I can imagine many future areas for collaboration on developing and running these experiments. That said, it’s desirable for different browsers to try different designs; we learn more through diversity than premature monoculture.

Moving faster and building better features don’t have to be in tension; we can do better. It’s time to try.

Thanks to Owen Campbell-Moore, Joe Medley, Jeff Yasskin, Adrian Bateman, Jake Archibald, Ian Clelland, Michael Stillwell, Addy Osmani, Chris Wilson, and Paul Irish for their invaluable feedback on drafts of this post.

A Funny Thing Happened On The Way To The Future...

There's a post on the fetch() API by Ludovico Fischer doing the rounds. As a co-instigator of adding the API to the platform, I always find it curious to read commentary about an API I helped design, and this post more than most. It brings together the epic slog that was the Promises design (which we also waded into in order to get Service Workers done, and which will improve with await/async) with the in-process improvements that will come from Streams, and mixes in a dollop of FUD, misunderstanding, and derision.

This sort of article is emblematic of a common misunderstanding. It expresses (at the end) a latent belief that there is some better alternative for making progress on the web platform than working feature-by-feature, compromise-by-compromise towards a better world. That because the first version of fetch() didn't have everything we want, it won't ever. That there was either some other way to get fetch() shipped, or that there was a way to get cancellation through TC39 in '13. Or that subclassing is somehow illegitimate and "non-standard" (even though the subclass would clearly be part of the Fetch Standard).

These sorts of undirected, context-free critiques rely on snapshots of the current situation (particularly its deficiencies) to argue implicitly that someone must be to blame for the present not yet being as good as the future we imagine. To make that case, one must apply skepticism to all future progress: "who knows when that'll be done!" or "yeah, fine, you shipped it, but it isn't available in Browser X!!!"

They're hard to refute because they're both true and wrong. It's the prepper/end-of-the-world mentality applied to technological progress. Sure, the world could come to a grinding halt, society could disintegrate, and the things we're currently working on could never materialize. But, realistically, is that likely? The worst-case-scenario peddlers don't need to bother with that question. It's cheap and easy to "teach the controversy". The appearance of drama is its own reward.

Perhaps most infuriatingly, these sorts of cheap snapshots laced with FUD do real harm to the process of progress. They make it harder for better things to actually appear because "controversy" can frequently become a self-fulfilling prophecy; browser makers get cold feet for stupid reasons, which can create negative feedback loops of indecision and foot-gazing. It won't prevent progress forever, but it sure can slow it down.

I'm disappointed in SitePoint for publishing the last few paragraphs in an otherwise brilliant article, but the good news is that it (probably) won't slow Cancellation or Streams down. They are composable additions to fetch() and Promises. We didn't block the initial versions on them because they are straightforward to add later and getting the first versions done required cuts. Both APIs were designed with extensions in mind, and the controversies are small. Being tactical is how we make progress happen, even if it isn't all at once. Those of us engaged in this struggle for progress are going to keep at it, one feature (and compromise) at a time.
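For a sense of what "composable" means here, this is roughly how a streaming read layers onto the fetch() that already shipped, per the Streams design under discussion. It's a sketch assuming the proposed reader interface on response bodies, written with the await syntax mentioned above:

    // Streams don't replace fetch(); they extend the Response a fetch()
    // already returns with incremental access to the body.
    async function byteCount(url) {
      const response = await fetch(url);
      const reader = response.body.getReader();
      let received = 0;
      for (;;) {
        const { done, value } = await reader.read();
        if (done) return received;
        received += value.length;  // each chunk is a Uint8Array
      }
    }

Cancellation can compose the same way: a control surface added later to the objects fetch() already hands back, not a rewrite of the API.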
