Why JavaScript?

One strain of objection I often hear about the project of making the web more extensible is that it implies travelling further down the JavaScript rabbit hole. The arguments often include:

  • No other successful platform is so limited to a single language (in semantics if not syntax).
  • Better languages exist and, surely, could be hooked up to the HTML DOM instead.
  • JavaScript can’t really describe everything the web platform does, so it’s not the right tool for this archeological excavation.

These, incidentally, are mirrors to the fears that many have about the web becoming “too reliant” on JavaScript. But that’s a topic for another post.

Let’s examine these in turn.

The question of what languages a platform admits as first-class isn’t about the languages — not really, anyway. It’s about the conventions of the lowest observable level of abstraction. We have many languages today that cooperate at runtime on “classical” platforms (Windows/Linux/OS X) and the JVM because they collaborate on low-level machine operations. In the C-ish OSes, that’s about moving words around memory and using particular calling conventions for structuring inputs and outputs to kernel API thunks. Above that it’s all convention; see COM. Similarly, JVM languages interop at the level of JVM bytecode.

The operational semantics of these platforms are incredibly low level. The flagship languages and most of the runtime behavior of programs are built up from these very low-level contracts. Where interop happens at an API level, it’s usually about a large-ish standard library which obeys most of the same calling conventions (even if its implementation is radically different).

The web has it the other way around. It achieved broad compatibility by starting the bidding at extremely high-level semantics which, initially, had very little in the way of a contract beyond bugwards compatibility with whatever Netscape or MSFT shipped last. The coarse, interpret-it-as-you-go contract of HTML is one of the things that has made it such a hardy survivor. JavaScript was added later, and while it has lower-level operational semantics than HTML or CSS, that history of bolting JS on later has led to the current project of encouraging extensibility and layering; e.g., through Web Components. It’s also why those who cargo-cult their experiences of other platforms onto the web find themselves adrift. There just isn’t a shared lower level on which to interoperate.

That there aren’t other languages interfacing with the web successfully today is, in part, the natural outcome of a lack of shared lower-level idioms on which those languages could build up runtimes. It’s no accident that CoffeeScript, TypeScript, and even Dart find themselves running mostly on top of JS VMs. There’s no lower level in the platform to contemplate.
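To make that concrete, here’s a sketch of what “JS as the lowest level” looks like from a compile-to-JS language’s point of view. The input and lowered output below are illustrative, not the verbatim behavior of any particular compiler version:

    // TypeScript source (the types exist only at compile time):
    //
    //   function greet(name: string): string {
    //     return `Hello, ${name}!`;
    //   }
    //
    // Approximate JavaScript emitted when targeting an older dialect:
    // the types are erased and the template literal is lowered to plain
    // string concatenation. JS plays the role that machine code or
    // bytecode plays on other platforms.
    "use strict";
    function greet(name) {
        return "Hello, " + name + "!";
    }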

Which brings us to the second argument: there are other, better languages…surely we could just all agree on some bytecode format for the web that would allow everyone to get along…right?

This is possible, but implausible.

Implausibility is the only reason I pour time and effort into trying to improve JS and not something else. The Nash Equilibrium of the web gives rise to predictable plays: assuming that incentives for adopting low-level descriptions of JS (as any such bytecode would have to describe JS as well as everything else) are not evenly distributed, movement by any group that is not all of the competitors stymies compatibility, which after all is the whole goal. Any language that wishes to interoperate with JavaScript and the existing DOM is best off describing its runtime in terms of JavaScript, because the threat that competitors will simply not adopt a compatible bytecode is credible. Compatibility strategies that straddle the fence can work, but it’s not a short (or clear) game to play. And introducing an abstraction that’s not fundamentally lower-level than JS (and/or does not fully subsume its semantics) is simply doomed. It would lack the power to even credibly hold out hope for a compatible future.

So, yes, there are better languages. Yes, you could put them in a browser. But unless you possess the power to put them in every browser, they don’t matter unless their operational semantics are 1:1 with JavaScript.

You can see how I ended up on TC39. It’s not that I think JS is great (it has well-documented flashes of genius, but so does any competitor worth mentioning) or even the perfect language for the web. But it is the *one language that every vendor is committed to shipping compatibly*. Evolving JS has the leverage to add/change the semantics of the platform in a way that no other strategy credibly can, IMO.

This leaves us with the last objection: JS doesn’t fully describe everything in the web platform, so why not recant and switch horses before it’s too late to turn back?

This misreads platforms vs. runtimes. All successful platforms have privileged APIs and behaviors. Successful, generative platforms merely reduce the surface area of this magic and ensure that privileged APIs “blend in” well — no funky calling conventions, no alien semantics, etc. Truly great platforms leave developers thinking they’re the only ship in the entire ocean and that it is a uniform depth the whole way across. It’s hard to think of a description any more at odds with the web platform. Having acknowledged the necessity and ubiquity of privileged APIs, the framing is now right to ask: what can be done about it?

I’ve made it my work for the past 3+ years — along with a growing troupe of fellow thinkers — to answer this charge by reducing the scope and necessity of magic in everyday web development. To describe how something high-level in the platform works in terms of JS isn’t to deny some other language a fair shot or to stretch too far with JS; it’s simply to fill in the obvious gaps by asking the question “how are these bits connected?”
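To show what that question looks like in code, here’s a minimal sketch of explaining a high-level disclosure widget (think of the built-in details element) in terms of lower-level JS and DOM. It uses today’s Custom Elements and Shadow DOM APIs rather than the 2013-era proposals, and the element name is made up:

    // A sketch, assuming a browser with Custom Elements and Shadow DOM.
    // "x-disclosure" is a hypothetical name; the point is that a
    // high-level widget can be described with JS, events, and plain DOM.
    class XDisclosure extends HTMLElement {
      connectedCallback() {
        if (this.shadowRoot) return; // guard against re-connection
        const shadow = this.attachShadow({ mode: "open" });
        const button = document.createElement("button");
        button.textContent = this.getAttribute("summary") || "Details";
        const content = document.createElement("div");
        content.hidden = true;
        content.appendChild(document.createElement("slot"));
        button.addEventListener("click", () => {
          content.hidden = !content.hidden; // toggle the slotted children
        });
        shadow.append(button, content);
      }
    }
    customElements.define("x-disclosure", XDisclosure);

Nothing above is privileged: it’s the same JS and DOM any page author can reach, which is exactly the property the extensibility project is after.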

Those connections and that archeological dig are what are most likely to turn up the sort of extensible, layered, compatible web platform that shares core semantics across languages. You can imagine other ways of doing it, but I don’t think you can get there from here. And the possible is all that matters.

That Old-Skool Smell, Part 2

The last post covered a few of the ways that the W3C isn’t effective at facilitating the discussions that lead to new standards work and, more generally, how trying to participate feels as though you are being transported back to a slower, more mediated era.

Which brings up a couple of things I’ve noticed across the W3C and which can likely be fixed more quickly. But some background first: due to W3C rules, it’s hard to schedule meetings (usually conference calls) quickly. You often need 2 weeks’ notice for it to happen under a W3C-condoned WG, but canceling meetings is, as we all know, much easier. As a result, many groups set up weekly or bi-weekly meetings but, in practice, meet much less frequently. This lightens the burden for those participating heavily in one or two topics, but leaves occasional participants and those trying to engage from non-majority time-zones at a serious disadvantage because the notice of meeting cancellation is near-universally handled via mailing list messages.

Yes, you read that right: the W3C uses mailing lists to manage meeting notices. In 2013. And there is no uniformity across groups.

Thanks to Peter Linss, the TAG is doing better: there’s an ical feed for all of our upcoming meetings that anyone can subscribe to. Yes, notices are still sent to the list, but you no longer need to dig through email to attempt to find out if the regularly-scheduled meeting is going to happen. Wonder of wonders, I can just look at my calendar…at least when it comes to the TAG.

That this is new says, to my mind, everything you need to know about how the current structure of the W3C’s spending on technical infrastructure and staff has gone unchallenged for far, far too long. The TAG is likewise starting to make a move from CVS to Git…and once again it finds itself at the vanguard of organizational practice. That there has been no organization-wide attempt to get WGs to move to more productive tools is, to me, an indicator of how many in positions of authority (if not power) at the WG and on the Staff think things are going. That this state of affairs isn’t prima facie evidence of the need for urgent change and modernization says volumes. As usual, it’s not about the tools, but about the way the tools help the organization meet (or fail to meet) its goals. Right now, “better” looks like what nearly every member organization’s software teams are already doing. Modernizing in this environment will be a relief, not a burden.

It’s also sort of shocking to find that there are no dashboards. Anywhere. For anything — at least not ones that I can find.

No progress or status dashboard to give the organization a sense for what’s currently happening, no dashboard to show charter and publication milestones across groups, no visible indicators about which groups are highly active and which are fading away.

If the W3C has an optics problem — and I submit that it does — it’s not doing itself any favors by burying the evidence of its overall trajectory in arcane mailing lists.

There is, at base, a question raised by this and many other aspects of W3C practice: how can the organization be seen to be a good steward of member time, attention, and resources when it does not seem to pay much mind to the state of the workshop? I’d be delighted to see W3C staff liaisons for WGs working to make products visible, easy to engage with, and efficient to contribute to as their primary objective. As it is, I don’t sense that’s their role. And that’s just not great customer service. I hope I’m wrong, or I hope that changes.

That Old-Skool Smell

One of the things that the various (grumpy) posts covering the W3C TAG / webdev meetup here in London last month brought back to mind for me was a conversation that happened in the TAG meeting about the ways that the W3C can (or can’t) facilitate discussion between webdevs, browser vendors, and “standards people”.

The way the W3C has usually done this is via workshops. Here’s an exemplar from last year. The “how to participate” link is particularly telling:

  Position papers are required to be eligible to participate in this workshop. Organizations or individuals wishing to attend must submit a position paper explaining their perspectives on a workshop topic of their choice no later than 01 July 2013. Participants should have an active interest in the area selected, ensuring other workshop attendees will benefit from the topic and their presence.

  Position papers should:

    • Explain the participant’s perspective on the topic of the Workshop
    • Explain their viewpoint
    • Include concrete examples of their suggestions

  Refer to the position papers submitted for a similar W3C workshop to see what a position paper generally implies.

  It is necessary to submit a position paper for review by the Program Committee. If your position paper is selected by the Program Committee, you will receive a workshop invitation and registration link. Please see Section “Important dates” for paper submission and registration deadlines.

ZOMGWTFBBQ. If the idea is that the W3C should be a salon for academic debate, this process fits well. If, on the other hand, the workshop is meant to create the sort of “interested stakeholders collaborating on a hard problem” environment that, e.g., Andrew Betts from FT Labs and others have helped to create around the offline problem (blog post on that shortly, I promise), this might be exactly the wrong way to do it.

But it’s easy to see how you get to this sort of scary-sounding process: to keep gawkers from gumming up the works it’s necessary to create a (low) barrier to entry. Preferably one that looks higher than it really is. Else, the thinking goes, the event will devolve into yet-another-tech-meetup, draining the discussions of the urgency and focus that only arise when people invested in a problem are able to discuss it deeply without distraction. The position paper and selection process might fill the void — particularly if you don’t trust yourself enough to know who the “right people” to have in the room might be. Or perhaps you have substantial research funding and want academic participants to feel at home; after all, this is the sort of process that’s entirely natural in the research setting. Or it could be simple momentum: this is the way the W3C has always attempted to facilitate and nobody has said “it’s not working” loudly enough to get anything to change.

So let me, then, be the first: it’s not working.

Time, money, and effort are being wasted. The workshop model, as currently formulated, is tone-deaf. It rarely gets the right people in the room.

Replacements for this model will suffer many criticisms: you could easily claim that the FT and Google-hosted offline meetings weren’t “open”. Fair. But they have produced results, much the way sideline and hallway-track meetings have proven productive in other areas.

The best model the W3C has deployed thus far has been the un-conference model used at TPAC ’11 and ’12, due largely to the involvement of Tantek Çelik. That has worked because many of the “right people” are already there, although, in many cases, not enough. And it’s worth saying that this has usually been an order of magnitude less productive than the private meetings I’ve been a part of at FT, Mozilla, Google, and other places. Those meetings have been convened by invested community members trying to find solutions, and they have been organized around explicit invites. It’s the proverbial smoke-filled room, except nobody smokes (at least in the room), nobody wears suits, and there’s no formal agenda. Just people working hard to catalog problems and design solutions in a small group that represents broader interests…and it works.

The W3C, as an organization, needs to be relevant to the concerns of web developers and the browser vendors who deliver solutions to their problems, and to do that it must speak their language. Time for the academic patina to pass into history. The W3C’s one and only distinguishing characteristic is that some people still believe that it can be a good facilitator for evolving the real, actual, valuable web. Workshops aren’t working and need to be replaced with something better. Either the W3C can do that or we will continue to do it “out here”, and I don’t think anyone really wants that.

Update: A couple of insightful comments via Twitter:

Sylvain nails one of the big disconnects for me: it’s not about format, it’s about who is “convening” the discussion. Andrew Betts has done an amazing job inviting the right people, and in the unconference-style format, you need a strong moderator to help pull out the wheat from the chaff. In both cases, we’ve got examples where “local knowledge” of the people and the problems is the key to making gatherings productive. And the W3C process doesn’t start with that assumption.

Next:

I think this is right. A broad scope tends to lead to these sorts of big workshop things that could cover lots of ground…but often don’t lead to much. This is another axis on which to judge the workshop format, and I’m not sure I could tell you which hoped-for outcomes of workshops actually matter to devs, implementers, and the standards process. I’d like to hear from W3C AC reps and staff who think it is working, though.

Thoughts On A Job Done

Most of the time, when a bit of software you work on floats out of your life and into the collective past, there’s a sense of mourning. But that’s not how I feel about Chrome Frame.

The anodyne official blog post noting the retirement six months hence isn’t the end of something good; it’s the acknowledgement that we’re where we wanted to be. Maybe not all of us, but enough to credibly say that the tide has turned. The trend lines are more than hopeful, and in six months any lingering controversy over this looks like it’ll be moot. Windows XP is dying, IE 6 & 7 are echoes of their former menace, and IE 8 is finally going the same way. Most of the world’s users are now at the front of the pack where new browser releases are being delivered without friction thanks to auto-update. The evergreen bit of the web is expanding, and the whole platform is now improving as a result. This is the world we hoped to enable when Chrome Frame was first taking form.

I joined Google in December ’08 expressly because it’s the sort of company that could do something like Chrome Frame and not screw it up by making it an attention-hogging nuisance of a toolbar or a Trojan horse for some other, more “on brand” product. GCF has never been that, not because that instinct is somehow missing from people who make it through the hiring process; no, GCF has always been a loss-maker and a poor brand ambassador because management accepted that what is good for the web is good for Google. Acting in that long-term interest is just the next logical step.

Truth be told, we weren’t even the right folks for the job. We were just the only ones both willing and able to do it. MSFT has the freaking source code for IE and Windows. There was always grim joking on the team that they could have put GCF together in a weekend, whereas it took us more than a year and change to make it truly stable. Honestly, if I thought MSFT was the sort of place that would have done something purely good for the web like GCF, I probably would have applied there instead. But in ’08, the odds of that looked slim.

When I ran the idea for something like Chrome Frame past one of the core IE team engineers at MIX that year, the response I got was “oh, you’re some kind of a dreamer…a visionary”. I automatically associate the word “visionary” with “time-wasting wanker”, so that was 0 for 2 on the positive adjective front. And discussions with others were roughly on par. Worse, when the IE team did want to enable a cleaner break with legacy via the X-UA-Compatible flag, the web standards community flipped out in a bout of mind-blowing shortsightedness…and MSFT capitulated. Score 1 for standards, -1 for progress.

For me, personally, this has never been about browsers and vendors and all the politics wrapped up in those words: it has been about making the web platform better. To do that means reckoning with the problem from the web-developer perspective: any single vendor only ships a part of the platform that webdevs perceive.

Web developers don’t view a single browser, or a single version of a browser that’s on hundreds of millions of devices, as a platform. Compared to the limited reach of “native” platforms, it seems head-scratching at first, but the promise of the web has always been universal access to content, and web developers view the full set of browsers that make up the majority of use as their platform. That virtue is what makes the web both the survivable, accessible, universal platform that can’t be replaced and the frustrating, slow, uneven development experience that so many complain about.

The only way to ensure that web developers see the platform improving is to make sure that the trailing edge is moving forward as fast as the leading edge. The oldest cars and power plants are exponentially worse polluters than the most modern ones; if you want to do the most for the world, get the clunkers off the road and put scrubbers on those power plants. Getting clunkers off the road is what upgrade campaigns are, and GCF has been a scrubber.

All power plants, no matter how well scrubbed, must eventually be retired. The trend is now clear: the job is coming to a close. Most of the world’s desktop users are now on evergreen browsers, and with the final death of Windows XP in sight, the rest are on the way out. Webdevs no longer face a single continuous slope of pain. We can consider legacy browsers as the sort of thing we should be building fallback experiences for, not first-class experiences. The goal of making content universally accessible doesn’t require serving the exact same experience to everyone. That’s what has always made the web great, and now’s the time for non-evergreen browsers to take their place in the fallback bucket, no longer looming large as our biggest collective worry.
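In code, that fallback stance is just feature detection: enhance where the capability exists and let legacy browsers keep the simpler experience. A minimal sketch, with a hypothetical enhanceNavigation() standing in for the first-class path:

    // Hypothetical stand-in for the enhanced, pushState-driven experience.
    function enhanceNavigation() {
      console.log("History API available: enabling enhanced navigation");
    }

    // Detect the capability; legacy browsers simply skip the enhancement
    // and keep working via full-page loads: a fallback, not a simulation
    // of the first-class experience.
    if (window.history && typeof history.pushState === "function") {
      enhanceNavigation();
    }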

I’m proud to be a small part of the team that made Chrome Frame happen, and I’m grateful to Google for having given me the chance to do something truly good for the web.