A Funny Thing Happened On The Way To The Future…

There’s a post on the fetch() API by Ludovico Fischer doing the rounds. As a co-instigator for adding the API to the platform, it’s always a curious thing to read commentary about an API you designed, but this one more than most. It brings together the epic slog that was the Promises design (which we also waded into in order to get Service Workers done and which will improve with await/async) with the in-process improvements that will come from Streams and it mixes it with a dollop of FUD, misunderstanding, and derision.

This sort of article is emblematic of a common misunderstanding. It expresses (at the end) a latent belief that there is some better alternative available for making progress on the web platform than to work feature-by-feature, compromise-by-compromise towards a better world. That because the first version of fetch() didn’t have everything we want, it won’t ever. That there was either some other way to get fetch() shipped or that there was a way to get cancellation through TC39 in ’13. Or that subclassing is somehow illegitimate and “non-standard” (even though the subclass would clearly be part of the Fetch Standard).

These sorts of undirected, context-free critiques rely on snapshots of the current situation (particularly deficiencies thereof) to argue implicitly that someone must be to blame for the current situation not yet being as good as the future we imagine. To get there, one must apply skepticism about all future progress; “who knows when that’ll be done!” or “yeah, fine, you shipped it but it isn’t available in Browser X!!!”.

They’re hard to refute because they’re both true and wrong. It’s the prepper/end-of-the-world mentality applied to technological progress. Sure, the world could come to a grinding halt, society could disintegrate, and the things we’re currently working on could never materialize. But, realistically, is that likely? The worst-case-scenario peddlers don’t need to bother with that question. It’s cheap and easy to “teach the controversy”. The appearance of drama is its own reward.

Perhaps most infuriatingly, these sorts of cheap snapshots laced with FUD do real harm to the process of progress. They make it harder for the better things to actually appear because “controversy” can frequently become a self-fulfilling prophecy; browser makers get cold feet for stupid reasons which can create negative feedback loops of indecision and foot-gazing. It won’t prevent progress forever, but it sure can slow it down.

I’m disappointed in SitePoint for publishing the last few paragraphs in an otherwise brilliant article, but the good news is that it (probably) won’t slow Cancellation or Streams down. They are composable additions to fetch() and Promises. We didn’t block the initial versions on them because they are straightforward to add later and getting the first versions done required cuts. Both APIs were designed with extensions in mind, and the controversies are small. Being tactical is how we make progress happen, even if it isn’t all at once. Those of us engaged in this struggle for progress are going to keep at it, one feature (and compromise) at a time.

Progressive Web Apps: Escaping Tabs Without Losing Our Soul

It happens on the web from time to time that powerful technologies come to exist without the benefit of marketing departments or slick packaging. They linger and grow at the peripheries, becoming old-hat to a tiny group while remaining nearly invisible to everyone else. Until someone names them.

This may be the inevitable consequence of a standards-based process and unsynchronized browser releases. We couldn’t keep a new feature secret if we wanted to, but that doesn’t mean anyone will hear about it. XMLHttpRequest was available broadly since IE 5 and in Gecko-based browsers from as early as 2000. “AJAX” happened 5 years later.

This eventual adding-up of new technologies changes how we build and deliver experiences. They succeed when bringing new capabilities while maintaining shared principles:

  • URLs and links as the core organizing system: if you can’t link to it, it isn’t part of the web
  • Markup and styling for accessibility, both to humans and search engines
  • UI Richness and system capabilities provided as additions to a functional core
  • Free to implement without permission or payment, which in practice means standards-based

Major evolutions of the web must be compatible with it culturally as well as technically.

Many platforms have attempted to make it possible to gain access to “exotic” capabilities while still allowing developers to build with the client-side technology of the web. In doing so they usually jettison one or more aspects of the shared value system. They aren’t bad — many are technically brilliant — but they aren’t of the web.

These are just the ones that spring to mind offhand. I’m sure there have been others; it’s a popular idea. They frequently give up linkability in return for “appiness”: to work offline, be on the home screen, access system APIs, and re-engage users they have required apps be packaged, distributed through stores, and downloaded entirely before being experienced.

Instead of clicking a link to access the content you’re looking for, these systems make stores the mediators of applications which in turn mediate and facilitate discovery for content. The hybridization process generates applications which can no longer live in or with the assumptions of the web. How does one deploy to all of these stores all at once? Can one still keep a fast iteration pace? How does the need to package everything up-front change your assumptions and infrastructure? How does search indexing work? It’s a deep tradeoff that pits fast-iteration and linkability against offline and store discovery.

Escaping the Tab: Progressive, Not Hybrid

But there is now another way. An evolution has taken place in browsers.

Over dinner last night, Frances and I enumerated the attributes of this new class of applications:

  • Responsive: to fit any form factor
  • Connectivity independent: Progressively-enhanced with Service Workers to let them work offline
  • App-like interactions: Adopt a Shell + Content application model to create appy navigations & interactions
  • Fresh: Transparently always up-to-date thanks to the Service Worker update process
  • Safe: Served via TLS (a Service Worker requirement) to prevent snooping
  • Discoverable: Identifiable as “applications” thanks to W3C Manifests and Service Worker registration scope, allowing search engines to find them
  • Re-engageable: Can access the re-engagement UIs of the OS; e.g. Push Notifications
  • Installable: to the home screen through browser-provided prompts, allowing users to “keep” apps they find most useful without the hassle of an app store
  • Linkable: meaning they’re zero-friction, zero-install, and easy to share. The social power of URLs matters.
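
Several of the attributes above are declared through a W3C Web App Manifest, linked from each page via `<link rel="manifest" href="/manifest.json">`. A minimal sketch (all field values here are illustrative, not prescriptive):

```json
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

This metadata is what browsers use to populate home-screen prompts and launch the site full-screen.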

These apps aren’t packaged and deployed through stores, they’re just websites that took all the right vitamins. They keep the web’s ask-when-you-need-it permission model and add in new capabilities like being top-level in your task switcher, on your home screen, and in your notification tray. Users don’t have to make a heavyweight choice up-front and don’t implicitly sign up for something dangerous just by clicking on a link. Sites that want to send you notifications or be on your home screen have to earn that right over time as you use them more and more. They progressively become “apps”.

Critically, these apps can deliver an even better user experience than traditional web apps. Because this performance can be built in as progressive enhancement, the tangible improvements make it worth building this way regardless of “appy” intent.

Frances called them “Progressive Open Web Apps” and we both came around to just “Progressive Apps”. They existed before, but now they have a name.

What Progressive Apps Look Like

Taking last year’s Chrome Dev Summit site as an example, we can see the whole flow in action (ht: Paul Kinlan):

  1. The site begins life as a regular tab. It doesn’t have super-powers, but it is built using Progressive App features including TLS, Service Workers, Manifests, and Responsive Design.
  2. The second (or third or fourth) time one visits the site — roughly at the point where the browser is sure it’s something you use frequently — a prompt is shown by the browser (populated from the Manifest details).
  3. Users can decide to keep apps on the home screen or app launcher.
  4. When launched from the home screen, these apps blend into their environment; they’re top-level, full-screen, and work offline. Of course, they worked offline after step 1, but now the implicit contract of “appyness” makes that clear.
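
The machinery behind step 1 is small. A minimal sketch, assuming a Service Worker script hosted at /sw.js (the path and function name here are hypothetical, not part of any spec):

```javascript
// Register the Service Worker that powers offline support and, together
// with the manifest, the browser's "keep this app?" prompt. Wrapped in a
// function so unsupported browsers (or a test environment) degrade
// gracefully: the page keeps working as a plain website either way.
function registerAppShell(container) {
  const sw = container ||
    (typeof navigator !== 'undefined' ? navigator.serviceWorker : undefined);
  if (!sw) return Promise.resolve(null); // no SW support: still a fine website
  return sw.register('/sw.js').then((reg) => reg.scope);
}
```

In a browser, calling `registerAppShell()` once per page load is enough; registration is idempotent.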

Animation of the Progressive App offer-and-keep installation flow for the Chrome Dev Summit site.

Here’s the same flow on Flipboard today:

Progressive Apps are web apps, they begin life in a tab. Here we see flipboard.com in Chrome for Android with regular tab treatment.
When users engage with Progressive Apps enough, browsers offer prompts that ask users if they want to keep them. To avoid spaminess, this doesn’t happen on the first load.
If the user accepts, the user’s flow isn’t interrupted.
The app shortcut appears on the homescreen or launcher of the OS.
When launched, Progressive Apps can choose to be full-screen.
Progressive Apps are top-level activities in the OS’s application switcher.

The Future

Today’s web development tools and practices don’t yet naturally support Progressive Apps, although many frameworks and services are close enough to be usable for making Progressive Apps. In particular, client-side frameworks that have server-rendering as an option work well with the model of second-load client-side routing that Progressive Apps naturally adopt as a consequence of implementing robust offline experiences.

This is an area where thoughtful application design and construction will give early movers a major advantage. Full Progressive App support will distinguish engaging, immersive experiences on the web from the “legacy web”. Progressive App design offers us a way to build better experiences across devices and contexts within a single codebase but it’s going to require a deep shift in our understanding and tools.

Building immersive apps using web technology no longer requires giving up the web itself. Progressive Apps are our ticket out of the tab, if only we reach for it.

Thanks to Frances Berriman, Brian Kardell, Jake Archibald, Owen Campbell-Moore, Jan Lehnardt, Mike Tsao, Yehuda Katz, Paul Irish, Matt McNulty, and John Allsopp for their review and suggestions on this post.

Cross-posted at Medium.

PSA: Service Workers are Coming


This post describes the potential amplification of existing risks that Service Workers bring for multi-user origins where the origin may not fully trust the content, or in which users should not be able to modify each other’s content.

Sites hosting multiple-user content in separate directories, e.g. /~alice/index.html and /~bob/index.html, are not exposed to new risks by Service Workers. See below for details.

Sites which host content from many users on the same origin at the same level of path separation (e.g. https://example.com/alice.html and https://example.com/bob.html) may need to take precautions to disable Service Workers. These sites already rely on extraordinary cooperation between actors and are likely to find their security assumptions upended by future changes to browsers.


Service Workers are a new feature coming to the Web Platform very soon.

Like AppCache, Service Workers are available without user prompts and enable developers to create meaningful offline experiences for web sites. They are, however, strictly more powerful than AppCache.

To mitigate the risks associated with request interception, Service Workers are only available to use under the following restrictions:

  • Service Workers are restricted to secure origins. E.g., http://acme.com/ can never have a Service Worker installed, whereas https://acme.com can. If you do not serve over SSL/TLS, service workers do not impact your site.
  • Service Worker scripts must be hosted at the same origin. E.g., https://acme.com/index.html can only register a Service Worker script if that script is also hosted at https://acme.com. Scripts included by the root Service Worker via importScripts() may come from other origins, but the root script itself cannot be registered against another origin. Redirects are also treated as errors for the purposes of SW script fetching to ensure that attackers cannot turn transient ownership into long-term control.
  • Service Workers are restricted by the path of the Service Worker script unless the Service-Worker-Allowed: ... header is set.
    • Service Workers intercept requests for documents and their sub-resources. Documents are matched to Service Workers by longest-prefix match between the document’s path and the registered scope, which defaults to the path of the registering script.
    • For example, if https://acme.com/thinger/index.html registers a SW hosted at https://acme.com/thinger/sw.js, it cannot by default intercept requests for https://acme.com/index.html.
    • It may, however, respond to more-specific document requests like https://acme.com/thinger/blargity/index.html.
    • If the script is instead located at https://acme.com/sw.js, the registration will allow interception for all navigations at https://acme.com/.
    • This means that sites hosting multiple-user content in separated directories, e.g. /~alice/ and /~bob/, are not exposed to new risks by Service Workers.
    • Sites which host multiple users’ content in the same directories may wish to consider disabling Service Workers (see below).
    • Servers can relax this restriction on allowed scope by sending a Service-Worker-Allowed: ... header, where the value of the header is the allowed path (e.g., /). This feature will not arrive in Chrome until version 41 (6 weeks after the original release which adds support for Service Workers).
  • Service Worker scripts must be served with a valid JavaScript MIME type, e.g. text/javascript. Resources served with marginal Content-Type values, e.g. text/plain, will NOT be respected as valid Service Worker scripts.
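
The scope rules above amount to a longest-prefix match against the registering script’s directory. An illustrative sketch (these helper functions are mine, not the actual browser implementation):

```javascript
// Default scope: the directory containing the Service Worker script.
function defaultScope(scriptURL) {
  return scriptURL.slice(0, scriptURL.lastIndexOf('/') + 1);
}

// A document is controlled when its URL falls under the scope prefix.
function controls(scope, documentURL) {
  return documentURL.startsWith(scope);
}

const scope = defaultScope('https://acme.com/thinger/sw.js');
// scope === 'https://acme.com/thinger/'
controls(scope, 'https://acme.com/thinger/blargity/index.html'); // true
controls(scope, 'https://acme.com/index.html');                  // false
```

This is why /~alice/sw.js can never claim control over /~bob/ content: the default scope never escapes the script’s own directory.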

In addition to these restrictions, Service Workers include features to help site operators understand Service Worker usage on their origins. The most important of these is the Service-Worker: script header, which is appended to every request for a script that is intended for use as a Service Worker.

This feature allows site owners, via logs and server-side directives, to:

  • Audit use of Service Workers on an origin
  • Control or disable Service Workers, either globally or by enforcing whitelists
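
Auditing is straightforward if the header is logged. A sketch, assuming an Apache LogFormat that appends %{Service-Worker}i as the final field (the log format, field position, and file name here are all assumptions about your setup):

```shell
# Two hypothetical log lines: the first is a SW script fetch (header set
# to "script"), the second an ordinary script request (header logged as "-").
cat > sw-audit-sample.log <<'EOF'
1.2.3.4 - - [01/Feb/2015:10:00:00 +0000] "GET /sw.js HTTP/1.1" 200 512 "script"
1.2.3.4 - - [01/Feb/2015:10:00:01 +0000] "GET /app.js HTTP/1.1" 200 1024 "-"
EOF

# List every request where the Service-Worker header was present:
grep '"script"$' sw-audit-sample.log
```

Unexpected hits in such a report are exactly the signal that third-party content on your origin is attempting Service Worker registration.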

Disabling Service Workers is straightforward. Here’s an example snippet for an Apache .htaccess file:

<IfModule mod_setenvif.c>
  SetEnvIf Service-Worker script swrequest
  <RequireAll>
    Require all granted
    Require not env swrequest
  </RequireAll>
</IfModule>

For Nginx the recipe might be:

location / {
  if ($http_service_worker) {
    return 403;
  }
}


If you run a site which hosts untrusted third-party content on a single origin over SSL/TLS, you should ensure that you:

  • Disable Service Workers at your origin by blocking requests which include the Service-Worker: script header. This is easily accomplished using global server configuration (e.g. httpd.conf directives).
  • If you wish to allow Service Workers, begin auditing their use on your origin, as requests which include Service-Worker: script may indicate other problems with content hosting (e.g., if you do not mean to be hosting active HTML content but are doing so incidentally).
  • Move to a sub-domain-per-user model as soon as possible, e.g. https://alice.example.com instead of https://example.com/~alice. The browser-enforced same-origin model is fundamentally incompatible with serving content from multiple entities at the same origin. For instance, users whose content shares an origin are susceptible to easy-to-make mix-ups with Cookie paths and to storage-poisoning attacks via Local Storage, WebSQL, IndexedDB, the Filesystem API, etc. The browser’s model for separating principals relies almost exclusively on origins, and we strongly recommend that you separate users by sub-domain (which is a different origin) so that future changes to browsers do not interact harmfully with your hosting setup.

Thanks to Kenji Baheux, Joel Weinberger, Devdatta Akhawe, and Matt Falkenhagen for their review and suggestions. All errors are mine alone, however.

Uncomfortably Excited

Jeremy Keith is wringing his hands about Web Components. I likewise can’t attend the second Extensible Web Summit and so have a bit of time to respond here.

Full disclosure: I helped design Web Components and, with Dimitri Glazkov and Alex Komoroske, helped lead the team at Google that brought them from concept to reality. I am, as they say, invested.

Jeremy’s argument, if I can paraphrase, is that people will build Web Components and this might be bad. He says:

Web developers could ensure that their Web Components are accessible, using appropriate ARIA properties.

But I fear that Sturgeon’s Law is going to dominate Web Components. The comparison that’s often cited for Web Components is the creation of jQuery plug-ins. And let’s face it, 90% of jQuery plug-ins are crap.

Piffle and tosh.

90% of jQuery plugins don’t account for 90% of the use of jQuery plugins. The distribution of popularity is as top-heavy there as it is in the rest of the JS UI library landscape in which developers have agency to choose components on quality/price/etc. The implicit argument seems willfully ignorant of the recent history of JS libraries wherein successful libraries receive investment and scrutiny, raising their quality level. Indeed, the JS library world is a towering example of investment creating better results: the winners are near-universally compatible with progressive enhancement, a11y, and i18n. If there’s any lesson to be learned it should be that time and money can improve outcomes when focused on important libraries. Doing general good, then, comes down to aiming specific critique and investment at specific code. A corollary is that identifying which bits will have the most influence in raising the average is valuable.

Web Components do this cycle a solid: because they are declarative by default we don’t need to rely on long-delayed developer surveys to understand what is in wide use. Crawlers and browsers that encounter custom elements can help inform our community about what’s most popular. This will allow better and more timely targeting of investment and scrutiny.
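
This detection requires no registry or cooperation from authors: custom element tag names must contain a hyphen, so a crawler can spot them purely syntactically. A minimal sketch (the function name and regex are mine, and this naive scan ignores comments, scripts, and namespaced markup):

```javascript
// Count custom-element usage in a crawled HTML string. Only opening tags
// whose names contain a hyphen are counted, per the Custom Elements
// naming rule; built-ins like <div> never match.
function countCustomElements(html) {
  const counts = {};
  const tagRe = /<([a-z][a-z0-9]*-[a-z0-9-]*)[\s>]/g;
  let m;
  while ((m = tagRe.exec(html)) !== null) {
    counts[m[1]] = (counts[m[1]] || 0) + 1;
  }
  return counts;
}

const page =
  '<body><x-card></x-card><x-card></x-card><my-app><div></div></my-app></body>';
console.log(countCustomElements(page)); // { 'x-card': 2, 'my-app': 1 }
```

Run across a corpus of crawled pages, tallies like this are enough to rank components by real-world adoption.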

This is worlds different than browser feature development where what’s done is done and can scarcely change.

A particularly poor analogy is deployed in the argument that Web Components are somehow a rehash of Apple’s introduction of <meta name="viewport" content="...">. This invokes magical behavior in some browsers. We’re meant to believe that this is a cautionary tale for an ecosystem of intentionally included component definitions which cannot introduce magic and which are not bound by a closed vocabulary that is de facto exclusive and competitive — other browsers couldn’t easily support the same incantation and provide different behavior without incurring developer wrath. The difference in power and incentives for browser vendors and library authors ought to make the reader squirm, but the details of the argument render it entirely limp. There is no reasonable comparison to be drawn.

This isn’t to ignore the punchline, which is that copy/paste is a powerful way of propagating (potentially maladaptive) snippets of content through the web. Initial quality of exemplars does matter, but as discussed previously, ignoring the effects that compound over time leads to a poor analysis.

Luckily we’ve been thinking very hard about this at Google and have invested heavily in Polymer and high-quality Material Design components that are, as I write this, undergoing review and enhancement for accessibility. One hopes this will ease Jeremy’s troubled mind. Seeding the world with a high-quality, opinionated framework is something that’s not only happening, it’s something we’ve spent years on in an effort to provide one set of exemplars that others can be judged against.

Overall, the concerns expressed in the piece are vague, but they ought not be. The economics of the new situation that Web Components introduce are (intentionally) tilted in a direction that allows the cream to rise to the top — and for the community to quickly judge if it smells off and do something about it.

Yes, messes will be made. But that’s also the status quo. We get by. The ecosystem sets the incentives and individuals can adapt or not; the outcomes are predictable. What matters from here is that we take the opportunity to apply Jeremy’s finely-tuned sense of taste and focus it on specific outcomes, not the entire distribution of possible outcomes. It’s the only way to make progress real.

PS: Extensibility and the Manifesto aren’t about Web Components per se. Yes, we extracted these principles from the approach our team took toward building Web Components and the associated technologies, but it cuts much, much deeper. It’s about asking “how do these bits join up?” when you see related imperative and declarative forms in the platform. When there’s only declarative and not imperative, extensibility begs the question “how does that thing work? how would we implement it from user-space?”. Web Components do this for HTML. Service Workers do this for network caching and AppCache. Web Audio is starting to embody this for playing sound in the platform. There’s much more to do in nearly every area. We must demand the whole shebang be explained in JS or low-level APIs (yes, I AM looking at you, CSS).