That PE Thang

The interwebs, they are aboil!

People worthy of respect are publicly shaming those who don’t toe the Progressive Enhancement line. Heroes of the revolution have leapt to JavaScript’s defense. The rebuttals are just as compelling.

Before we all declare for Shark or Jet, I’d like to take this opportunity to point out the common ground: both the “JS required” (“requiredJS”?) and “PE or Bust” crews implicitly agree that HTML+CSS isn’t cutting it.

PE acknowledges this by treating HTML as a scaffold to build the real meaning on top of. RequiredJS brings its own scaffolding. Apart from that, the approaches are largely indistinguishable.

In ways large and small, the declarative duo have let down application developers. It’s not possible to stretch the limited vocabularies they provide to cover enough of the relationships between data, input, and desired end states that define application construction. You can argue that it never was possible, but I don’t think that’s true. Progressive Enhancement, for a time, might have gotten you everywhere you wanted to be. What’s different now is that more of us want to be in places that HTML+CSS are unwilling or unable to take us.

The HTML data model is a shambles; the relationship between data, view, and controller (and yes, browsers have all of those things for every input element) is opaque to the point of shamanism: if I poke this attribute this way, it does a thing! Do not ask why, for there is no why. At the macro scale, HTML isn’t growing any of the “this is related to that, and here’s how” vocabulary that the humble <a href="..."> provided; particularly not about how our data relates to its display. Why, in 2013, isn’t there a (mostly) declarative way to populate a table from JSON? Or CSV? Or XML? It’s just not on the agenda.
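To make the gap concrete, here’s a rough sketch of the imperative plumbing every app ends up writing itself (using the fetch() API for brevity); the populateTable helper, the /people.json endpoint, and the #people selector are invented for illustration, not taken from any library:

```js
// Hypothetical helper: the sort of boilerplate HTML offers no declarative
// substitute for. We fetch JSON, then wire it into a <table> by hand.
function populateTable(table, rows) {
  const tbody = table.tBodies[0] || table.createTBody();
  tbody.textContent = ""; // throw away any previously rendered rows
  for (const record of rows) {
    const tr = tbody.insertRow();
    for (const value of Object.values(record)) {
      tr.insertCell().textContent = String(value);
    }
  }
}

// Usage: bind data to markup the only way the platform allows: in script.
fetch("/people.json")
  .then((response) => response.json())
  .then((data) => populateTable(document.querySelector("#people"), data));
```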

And CSS…it’s so painful and sad that mentioning it feels like rubbing salt in a wound that just refuses to heal.

Into these gaps, both PE and requiredJS inject large amounts of JS, filling the yawning chasm of capability with…well…something. It can be done poorly. And it can be done well. But for most sites, widgets, apps, and services the reality is that it must be done.

Despite it all, I’m an optimist (at least about this) because I see a path that explains our past success with declarative forms and provides a way for them to recapture some of their shine.

Today, the gap between “what the built-in-stuff can do” and “what I need to make my app go” is so vast at the high-end that it’s entirely reasonable for folks like Tom to simply part ways with the built-ins. If your app is made of things that HTML doesn’t have, why bother? At the more-content-than-app end, we’re still missing ways to mark up things that microformats and schema.org have given authors for years: places, people, events, products, organizations, etc. But HTML can still be stretched very nearly that far, so long as the rest of the document is something that HTML “can do”.

What’s missing here is a process for evolving HTML more quickly in response to evidence that it’s missing essential features. To the extent that I cringe at today’s requiredJS sites and apps, it’s not because things don’t work with JS turned off (honestly, I don’t care…JS is the very lowest level of the platform…of course turning it off would break things), but because stuffing the entire definition of an application into a giant JS string deprives our ecosystem of evidence that markup could do it. It’s not hard to imagine declarative forms for a lot of what’s in Ember. Sure, you’ll plumb quite a bit through JS when something isn’t built into the framework or easily configured, but that’s no different than where the rest of the web is.

Web Components are poised to bridge this gap. No, they don’t “work” when JS is disabled, but it’s still possible to hang styles off of declarative forms that are part of a document up-front. Indeed, they’re the ultimate in progressive enhancement.
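A minimal sketch of that idea, using the Custom Elements API as it eventually standardized (the <user-card> tag and its name attribute are invented here for illustration): the markup ships in the document up-front, where it can be styled and indexed, and script later upgrades it in place.

```js
// Illustrative only: a declarative tag that exists (and can be styled) in the
// document before script runs, then gains behavior once it's upgraded.
class UserCard extends HTMLElement {
  connectedCallback() {
    const name = this.getAttribute("name") || "anonymous";
    this.textContent = `Hello, ${name}!`;
  }
}
customElements.define("user-card", UserCard);
```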

I’d be lying if I were to claim that bringing the sexy back to markup wasn’t part of the plan for Web Components the whole time. Dimitri’s “you’re crazy” look still haunts me when I recall outlining the vision for bringing peace to the HTML vs. JS rift by explaining how HTML parsing works in terms of JS. It has been a dream of mine that our tools would uncover (and enable understanding of) what’s so commonly needed that HTML should include it in the next iteration. In short, to enable direct evolution. To do science, not fumbling alchemy.

The key thing to understand is that “PE or Bust” vs. “requiredJS” isn’t a battle anyone can win today. The platform needs to give us a way to express ourselves even when it hasn’t been prescient of our needs — which, of course, it can’t be all the time. Until now, there hasn’t been that outlet, so of course we reach for the Turing-complete language at our fingertips and re-build what we must to get what we want.

The developer economics will be stacked that way until Web Components are the norm. Call it PE++ (or something less clunky, if you must), but the war is about to be over. PE vs. requiredJS will simply cease to be an interesting discussion.

Can’t wait for that day.

The Phony Balance Benchmark

There’s a palpable tension in my shoulders as I tap this out — I know already that this post will create cringe-worthy responses and name calling and all the rest. But on we plod.

A friend pointed out to me a peculiar feature of a conference Program Committee they were serving on: it was part of the PC’s role to keep an eye out for strong minority/female speakers and encourage them to submit to the open CFP.

Soooooooo much has been written on these points, but in my (biased) view Frances covers it well:

Discrimination is a problem. I, personally, don’t give a monkey’s how many women or whoever are in our industry, as long as everyone who wanted to be here could and had free opportunity to do so, but sadly that is not the case and as such our community is not representative of all those that could be here if discrimination, from stereotyping roles to outright sexism/racism/agism/*ism, was not present. As such, we have a duty to address the problems that disable people’s opportunities.

I found myself reflecting on the PCs I’ve served on over the years and the many styles they’ve embodied. There’s a particular style to the O’Reilly-run conferences that is distinctive, largely for the scale involved. OSCON is hundreds of talks across dozens of rooms. Velocity EU isn’t that much smaller. The level of curation that each PC applies is also hugely variable: Steve Souders is incredibly hands-on and detail-oriented, whereas I have no idea who actually heads up the OSCON PC any more. It doesn’t seem to functionally matter. But in both cases, despite the huge differences in style and approach, the box into which ORA puts the PCs creates the illusion of responsibility for a balance which, as Frances accurately notes, isn’t a particularly local concern.

The pressure itself is unmistakable. The overt urges to over-select for some trait, the conversations happening among PC members or in comments in the review tools…unless the process is anonymized feedback against anonymized submissions (my favorite kind), the cultural need to be seen to be “doing something” at a point far, far removed from any actual leverage is nearly overwhelming. And the risks of “doing something” are enormous. Nobody at a tech conference wants to be a token of anything other than sparkling technical achievement.

The risks to conference organizers run counter, however: we’ve seen over and over again that the angry mob demonstrates little faculty with math. And that mob can sink a conference. The mob’s leaders don’t appear to acknowledge that some populations are structurally under-represented in computing (conflating it with generic “STEM” representation levels, even after correction), or that, this being the case, bludgeoning PCs and organizers into over-representation may present as many pitfalls as positives.

What to do?

The conclusion that smacked me upside the head today is that conference organizers must nip this in the bud: demonstrate action at the root cause to defuse the tension that would otherwise bubble into inappropriate selection pressure and math-challenged “advocacy”. The solution is the same in both cases: credibly commit to donating a % of revenue (not profit) to efforts that teach more girls and other under-represented minorities how to be engineers.

We can’t retroactively fix the selection pressures that got us to the current terrible state. But we can clear the path for the next generation, and we can set a better example of being decent humans to each other while our overly-white, overly-male generation of engineers recedes into anecdote. Conferences that commit, auditably and credibly, to doing this, along with setting codes of conduct and taking their responsibilities seriously in other regards, are doing the only things that can be truly argued to be in the best interest of our discipline. We must work to create the conditions of equal opportunity in our field, not merely affect the outward appearance of a discipline that has it.

It’s time to de-fang the phony arguments about “balance”, re-direct our focus to the areas that can have more effect, and judge the results by the wellbeing of our peers and those who seek to join those ranks.

Update: I elided any discussion of the effectiveness of the organizations I’m suggesting conferences should support. That was intentional. There’s quite a lot of variability in effectiveness and, in most cases, not even much measurement. The process of bringing interested youngsters into CS is fraught with hurdles, and the right mental model is a “sales funnel” in which we lose potential conversions at many steps. Equal opportunity must be present at each step in the funnel for it to be truly achieved. I’d hope that conferences and others supporting the cause of equality in technology take a data-driven view of how best to deploy their support. But the failure of most charitable organizations to characterize their results is a long digression for another time.

For Jo

This was originally drafted as a response to [Jo Rabin’s blog post] discussing a meetup the W3C TAG hosted last month. For some reason, I was having difficulty adding comments there.

Hi Jo,

Thanks for the thoughtful commentary, and for the engaging chat at the meetup. Your post mirrors some of my own thinking about what the TAG can be good for.

I can’t speak for everyone on the TAG, but like me, most of the new folks who have joined have backgrounds as web developers. For the last several months, we’ve been clearing away old business from the agenda, explicitly to make way for new areas of work which mirror some of your ideas. At the meeting which the meetup was attached to, the TAG decided at the urging of the new members to become a functional API review board. The goal of that project is to encourage good layering practice across the platform and to help WGs specify better, more coherent, more idiomatic APIs. That’s a long-game thing to do, and you can see how far we’ve got to go in terms of making even the simplest idioms universal.

Repairing these sorts of issues is what the TAG, organizationally, is suited to do. Admittedly, it has not traditionally shown much urgency or facility with them. We’re changing that, but it takes some time. Hopefully you’ll see much more to cheer in the near future.

As for the overall constituency and product, I feel strongly that one of the things we’ve accomplished in our efforts to reform the TAG is that we’re re-focusing the work to emphasize the ACTUAL web. Stuff that’s addressable with URLs and has a meaningful integration point with HTML. Too much time has been wasted worrying about problems we don’t have or, for good reasons, are unlikely to have. Again, I don’t speak for the TAG, but I promise to continue to fight for the pressing problems of webdevs.

The TAG can use this year to set an agenda, show positive progress, and deliver real changes in specs. Already we’re making progress with Promises, constructability, layering (how do the bits relate?), and extensibility. We also have a task to explain what’s important and why. That’s what has led to efforts like the Extensible Web Manifesto. You’ll note other TAG members as signatories.

Along those lines, the TAG has also agreed to begin work on documents that will help spec authors understand how to approach the design process with the constraints of idiomaticness and layering in mind. That will take time, but it’s being informed by our direct, hands-on work with spec authors and WGs today.

So the lines are drawn: the TAG is refocusing, taking up the architectural issues that cause real people real harm in the web we actually have, and those who think we ought to be minding some other store aren’t very much going to like it. I’m OK with that, and I hope to have your support in making it happen.

Regards

Why JavaScript?

One strain of objection I often hear about the project of making the web more extensible is that it implies travelling further down the JavaScript rabbit hole. The arguments often include:

  • No other successful platform is so limited to a single language (in semantics if not syntax).
  • Better languages exist and, surely, could be hooked up to the HTML DOM instead.
  • JavaScript can’t really describe everything the web platform does, so it’s not the right tool for this archeological excavation.

These, incidentally, are mirrors to the fears that many have about the web becoming “too reliant” on JavaScript. But that’s a topic for another post.

Let’s examine these in turn.

The question of what languages a platform admits as first-class isn’t about the languages — not really, anyway. It’s about the conventions of the lowest observable level of abstraction. We have many languages today that cooperate at runtime on “classical” platforms (Windows/Linux/OSX) and the JVM because they collaborate on low-level machine operations. In the C-ish OSes, that’s about moving words around memory and using particular calling conventions for structuring inputs and outputs to kernel API thunks. Above that it’s all convention; see COM. Similarly, JVM languages interop at the level of JVM bytecode.

The operational semantics of these platforms are incredibly low level. The flagship languages and most of the runtime behavior of programs are built up from these very low-level contracts. Where interop happens at an API level, it’s usually about a large-ish standard library which obeys most of the same calling conventions (even if its implementation is radically different).

The web has it the other way around. It achieved broad compatibility by starting the bidding at extremely high-level semantics which, initially, had very little in the way of a contract beyond bugwards compatibility with whatever Netscape or MSFT shipped last. The coarse, interpret-it-as-you-go contract of HTML is one of the things that has made it such a hardy survivor. JavaScript was added later, and while it has lower-level operational semantics than HTML or CSS, that history of bolting JS on later has led to the current project of encouraging extensibility and layering; e.g., through Web Components. It’s also why those who cargo-cult their experiences of other platforms onto the web find themselves adrift. There just isn’t a shared lower level on which to interoperate.

That there aren’t other languages interfacing with the web successfully today is, in part, the natural outcome of a lack of shared lower-level idioms on which those languages could build up runtimes. It’s no accident that CoffeeScript, TypeScript, and even Dart find themselves running mostly on top of JS VMs. There’s no lower level in the platform to contemplate.

Which brings us to the second argument: there are other, better languages…surely we could just all agree on some bytecode format for the web that would allow everyone to get along…right?

This is possible, but implausible.

Implausibility is the only reason I pour time and effort into trying to improve JS and not something else. The Nash Equilibrium of the web gives rise to predictable plays: assuming that incentives for adopting low-level descriptions of JS (as any such bytecode would have to describe JS as well as everything else) are not evenly distributed, movement by any group that is not all of the competitors stymies compatibility, which after all is the whole goal. Any language that wishes to interoperate with JavaScript and the existing DOM is best off describing its runtime in terms of JavaScript, because the threat that some vendor will never adopt a compatible bytecode is credible. Compatibility strategies that straddle the fence can work, but it’s not a short (or clear) game to play. And introducing an abstraction that’s not fundamentally lower-level than JS (and/or does not fully subsume its semantics) is simply doomed. It would lack the power to even credibly hold out hope for a compatible future.

So, yes, there are better languages. Yes, you could put them in a browser. But unless you possess the power to put them in every browser, they don’t matter unless their operational semantics are 1:1 with JavaScript.

You can see how I ended up on TC39. It’s not that I think JS is great (it has well-documented flashes of genius, but so does any competitor worth mentioning) or even the perfect language for the web. But it is the *one language that every vendor is committed to shipping compatibly*. Evolving JS has the leverage to add/change the semantics of the platform in a way that no other strategy credibly can, IMO.

This leaves us with the last objection: JS doesn’t fully describe everything in the web platform, so why not recant and switch horses before it’s too late to turn back?

This misreads platforms vs. runtimes. All successful platforms have privileged APIs and behaviors. Successful, generative platforms merely reduce the surface area of this magic and ensure that privileged APIs “blend in” well — no funky calling conventions, no alien semantics, etc. Truly great platforms leave developers thinking they’re the only ship in the entire ocean and that it’s a uniform depth the whole way across. It’s hard to think of a description more at odds with the web platform. Having acknowledged the necessity and ubiquity of privileged APIs, the framing is now right to ask: what can be done about it?

I’ve made it my work for the past 3+ years — along with a growing troupe of fellow thinkers — to answer this charge by reducing the scope and necessity of magic in everyday web development. To describe how something high-level in the platform works in terms of JS isn’t to deny some other language a fair shot or to stretch too far with JS, it’s simply to fill in the obvious gaps by asking the question “how are these bits connected?”

Those connections and that archeological dig are what are most likely to turn up the sort of extensible, layered, compatible web platform that shares core semantics across languages. You can imagine other ways of doing it, but I don’t think you can get there from here. And the possible is all that matters.