That PE Thang

The interwebs, they are aboil!

People worthy of respect are publicly shaming those who don’t toe the Progressive Enhancement line. Heroes of the revolution have come to JavaScript’s defense. The rebuttals are just as compelling.

Before we all sign up as Sharks or Jets, I’d like to take this opportunity to point out the common ground: both the “JS required” (“requiredJS”?) and “PE or Bust” crews implicitly agree that HTML+CSS isn’t cutting it.

PE acknowledges this by treating HTML as a scaffold to build the real meaning on top of. RequiredJS brings its own scaffolding. Apart from that, the approaches are largely indistinguishable.

In ways large and small, the declarative duo has let down application developers. The limited vocabularies they provide can’t be stretched to cover enough of the relationships between data, input, and desired end states that define application construction. You can argue that they never could, but I don’t think that’s quite true. Progressive Enhancement, for a time, might have gotten you everywhere you wanted to be. What’s different now is that more of us want to be places that HTML+CSS are unwilling or unable to take us.

The HTML data model is a shambles; the relationship between data, view, and controller (and yes, browsers have all of those things for every input element) is opaque to the point of shamanism: if I poke this attribute this way, it does a thing! Do not ask why, for there is no why. At the macro scale, HTML isn’t growing any of the “this is related to that, and here’s how” vocabulary that the humble <a href="..."> provided; particularly not about how our data relates to its display. Why, in 2013, isn’t there a (mostly) declarative way to populate a table from JSON? Or CSV? Or XML? It’s just not on the agenda.
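To make the gap concrete, here is roughly the boilerplate every app hand-rolls today: an imperative walk from JSON to table markup. This is a minimal sketch; the function name, data shape, and escaping are illustrative, not any real or proposed API.

```javascript
// Turn an array of uniform JSON records into an HTML table string.
// This is the kind of plumbing a declarative binding could subsume.
function tableFromJSON(records) {
  if (!records.length) { return "<table></table>"; }
  var cols = Object.keys(records[0]);
  var esc = function (v) {
    return String(v).replace(/&/g, "&amp;").replace(/</g, "&lt;");
  };
  var head = "<tr>" + cols.map(function (c) {
    return "<th>" + esc(c) + "</th>";
  }).join("") + "</tr>";
  var body = records.map(function (r) {
    return "<tr>" + cols.map(function (c) {
      return "<td>" + esc(r[c]) + "</td>";
    }).join("") + "</tr>";
  }).join("");
  return "<table>" + head + body + "</table>";
}
```

Every framework reinvents some version of this loop; something like a hypothetical `<table src="data.json">` could absorb most of it into the platform.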

And CSS…it’s so painful and sad that mentioning it feels like rubbing salt in a wound that just refuses to heal.

Into these gaps, both PE and requiredJS inject large amounts of JS, filling the yawning chasm of capability with…well…something. It can be done poorly. And it can be done well. But for most sites, widgets, apps, and services the reality is that it must be done.

Despite it all, I’m an optimist (at least about this) because I see a path that explains our past success with declarative forms and provides a way for them to recapture some of their shine.

Today, the gap between “what the built-in-stuff can do” and “what I need to make my app go” is so vast at the high-end that it’s entirely reasonable for folks like Tom to simply part ways with the built-ins. If your app is made of things that HTML doesn’t have, why bother? At the more-content-than-app end, we’re still missing ways to mark up things that microformats have given authors for years: places, people, events, products, organizations, etc. But HTML can still be stretched very nearly that far, so long as the rest of the document is something that HTML “can do”.

What’s missing here is a process for evolving HTML more quickly in response to evidence that it’s missing essential features. To the extent that I cringe at today’s requiredJS sites and apps, it’s not because things don’t work with JS turned off (honestly, I don’t care…JS is the very lowest level of the platform…of course turning it off would break things), but because stuffing the entire definition of an application into a giant JS string deprives our ecosystem of evidence that markup could do it. It’s not hard to imagine declarative forms for a lot of what’s in Ember. Sure, you’ll plumb quite a bit through JS when something isn’t built into the framework or easily configured, but that’s no different than where the rest of the web is.

Web Components are poised to bridge this gap. No, they don’t “work” when JS is disabled, but it’s still possible to hang styles off of declarative forms that are part of a document up-front. Indeed, they’re the ultimate in progressive enhancement.

I’d be lying if I were to claim that bringing the sexy back to markup wasn’t part of the plan for Web Components the whole time. Dimitri’s “you’re crazy” look still haunts me when I recall outlining the vision for bringing peace to the HTML vs. JS rift by explaining how HTML parsing works in terms of JS. It has been a dream of mine that our tools would uncover, and help us understand, what’s so commonly needed that HTML should include it in the next iteration. In short, to enable direct evolution. To do science, not fumbling alchemy.

The key thing to understand is that “PE or Bust” vs. “requiredJS” isn’t a battle anyone can win today. The platform needs to give us a way to express ourselves even when it hasn’t been prescient of our needs — which, of course, it can’t be all the time. Until now, there hasn’t been that outlet, so of course we reach for the Turing-complete language at our fingertips and re-build what we must to get what we want.

The developer economics will be stacked that way until Web Components are the norm. Call it PE++ (or something less clunky, if you must), but the war is about to be over. PE vs. requiredJS will simply cease to be an interesting discussion.

Can’t wait for that day.


  1. richard
    Posted September 9, 2013 at 10:41 am | Permalink


  2. Posted September 9, 2013 at 10:55 am | Permalink

    …brings all the boys to the yard? Is a datapoint on the continuum I outlined? Is a shapely turnip?

    Brevity may not, in all things, be a virtue.

  3. Posted September 9, 2013 at 2:17 pm | Permalink

    +1 AngularJS is a shapely turnip.

    Great read, Alex. How do you feel about as a future-proof way of structuring code in preparation for eventual “Web Components” support?

  4. Posted September 9, 2013 at 2:36 pm | Permalink

    hey Dan, is a wonderful discovery mechanism, and I think we absolutely need discovery, rating, and feedback mechanisms as we collectively build out the first wave of components — and we’ll need them to get more sophisticated as we press onward. For instance, indicating support for a Web Component via Polymer or Brick would help developers wade more effectively through the choices.

    The dynamic, though, is still likely to be “winner take all”, and I don’t think that’s a bad thing. We saw it in the “ajax libraries war” and are likely to again in the current battle for developer mind share about full-stack front-end frameworks. It’s just how it goes. What we need, as a follow-up to the winner taking it all, is to put some of it back into the platform. Facilitating that process is going to require data. It’s the sort I’m hoping to collect via Meaningless:


  5. Marcus
    Posted September 10, 2013 at 2:51 am | Permalink

    Progressive enhancement isn’t just CSS on top of HTML with JS on top of that. It’s not a matter of JS on or off either.

    You can have a JS only app which uses progressive enhancement.

    You start off with an empty document (perhaps it explains a little about the application and notes that, if you’re seeing the message, your browser doesn’t support the app).

    Then you use feature detection before kicking off the application.

    Remember: HTML and CSS features are ignored if unrecognised – JavaScript features throw errors if unrecognised.
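    The detect-then-bail pattern above can be sketched as follows. The function names and feature list here are illustrative, not any particular library’s API:

```javascript
// "Detect, then enhance, else bail": probe for every capability the app
// needs before booting it, so an unsupported browser keeps the working
// fallback document instead of ending up with a half-broken page.
function supportsAll(host, features) {
  return features.every(function (f) { return f in host; });
}

function boot(host, features, enhance, bail) {
  if (supportsAll(host, features)) {
    enhance();   // kick off the JS app
  } else {
    bail();      // leave the explanatory fallback document alone
  }
}
```

    In a browser this might be invoked as `boot(document, ["querySelector", "addEventListener"], startApp, function () {})`.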

  6. Posted September 10, 2013 at 5:00 pm | Permalink

    Hey Marcus,

    I think you misunderstand what people mean by PE. The definitive article is here:

    Jeremy Keith is also widely cited. In both cases, the concept is largely that progressive enhancement happens on top of documents that were meaningful to start with.


  7. kevin c
    Posted September 12, 2013 at 9:17 am | Permalink

    Web Components and ES6/7 could first find an important initial audience with Chrome apps. Especially if these apps could run as first-class citizens on Android – that is, have the Android look and feel/functionality. Basically, code once for Google platforms – rather than for both Dalvik and HTML5/JS.

    Then broaden out to the web when the other implementors catch up. This is what happened with Web 2.0/ajax. First an IE4/5 thing – then Firefox and Webkit emerged.

  8. Marcus
    Posted September 13, 2013 at 12:16 am | Permalink

    Thanks for responding Alex.

    I think you actually misunderstand.

    Let’s just put the term to one side for a moment and focus on what happens in browsers that have JS turned on and don’t support a particular JS method; native or host – doesn’t matter.

    Running that code most likely leaves a broken, unusable page. This can be mitigated by feature detection/testing.

    If feature detection passes, we “enhance” the page with that (set of) feature(s); otherwise we bail out, leaving a working page. This can be applied to JS-only apps too, as I alluded to above.

    So even if the entire app is made of canvas or whatever, you can start off with a working document that explains that the user’s browser doesn’t support the app, leaving some contact details or instructions rather than a broken page that leaves the user none the wiser. And it’s rare that an app we build can only work with JS.

    I hope this helps.

  9. Posted September 13, 2013 at 4:34 pm | Permalink

    I got what you meant the first time and wasn’t questioning whether what you described is possible, only noting that it doesn’t match what others mean when they hear the phrase “progressive enhancement”. Your challenge is explaining yourself to the rest of the world, not to me. I’m merely pointing out the size of the task you face when using language in idiosyncratic ways (as you do here).

  10. Marcus
    Posted September 18, 2013 at 3:43 am | Permalink

    Thanks for your response Alex and I do understand your point.

    Unfortunately, the lack of understanding of this and related details causes websites to be built amateurishly and causes end-users unnecessary problems.

    I think evangelists have a responsibility to educate/spread the word on such matters.

One Trackback

  1. […] have been some other summary posts trying to provide their own insight. Here’s […]