Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Hoisted From The Comments

Some stuff is too good to leave in the shadows. On my Bedrock post, James Hatfield writes in with a chilling point, but one which I've been making for a long while:

“every year we’re throwing more and more JS on top of the web”

The way things are going in my world, we are looking at replacing the web with JS, not simply layering. At a certain point you look at it all and say “why bother”. Browsers render the DOM not markup. They parse markup. Just cut out the middle man and send out DOM – in the form of JS constructs.

The second part is to stop generating this markup on a server at one end of a slow, high-latency-prone communication channel, only to have it parsed at the other. Instead send small, compact instructions to the browser/client that tell it how to render and when to render. Later you send over the data, when it’s needed...

This is a clear distillation of what scares me about the road we're headed down, because for each layer you throw out and decide to re-build in JS, you end up only doing what you must, and that's often a deadline-driven must. Accessibility went to hell? Latency isn't great? Doesn't work at all without script? Can't be searched? When you use built-ins, those things are more-or-less taken care of. When we make them optional by seizing the reins with script, not only do we wind up playing them off against each other (which matters more, a11y or latency?), we often find that developers ignore the bits that aren't flashy. Think a11y on the web isn't great now? Just wait till it's all JS-driven.

It doesn't have to be this way. When we get Model Driven Views into the browser we'll have the powerful "just send data and template it on the client side" system everyone's looking for but without threatening the searchability, a11y, and fallback behaviors that make the web so great. And this is indicative of a particularly strong property of markup: it's about relationships. "This thing references that thing over there and does something with it" is hard for a search engine to tease out if it's hidden in code, but if you put it in markup, well, you've got a future web that continues to be great for users, the current crop of developers, and whoever builds and uses systems constructed on top of it all later. That last group, BTW, is you if you use a search engine.
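MDV itself never shipped wholesale, but the <template> element that carried its core idea forward makes "just send data and template it on the client side" concrete. Here is a minimal sketch; the markup, the field names, and the assumption of a <ul> already in the page are all illustrative:

```js
// An inert chunk of markup: parsed once, never rendered until cloned.
const tmpl = document.createElement("template");
tmpl.innerHTML = "<li class='user'><span class='name'></span></li>";

// Later, data arrives as plain objects and gets stamped into clones.
function render(users, list) {
  for (const user of users) {
    const row = tmpl.content.cloneNode(true); // deep-clone the template subtree
    row.querySelector(".name").textContent = user.name;
    list.appendChild(row);
  }
}

render([{ name: "Ada" }, { name: "Grace" }], document.querySelector("ul"));
```

Because the structure stays in markup, the relationships stay visible to anything that reads markup, search engines included.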

But it wasn't all clarity and light in the comments. Austin Cheney commented on the last post to say:

This article seems to misunderstand the intention of these technologies. HTML is a data structure and nothing more. JavaScript is an interpreted language whose interpreter is supplied with many of the most common HTML parsers. That is as deep as that relationship goes and has little or nothing to do with DOM.

...It would be safe to say that DOM was created because of JavaScript, but standard DOM has little or nothing to do with JavaScript explicitly. Since the release of standard DOM it can be said that DOM is the primary means by which XML/HTML is parsed suggesting an intention to serve as a parse model more than a JavaScript helper.

Types in DOM have little or nothing to do with types in JavaScript. There is absolutely no relationship here and there shouldn’t be...You cannot claim to understand the design intentions around DOM without experience working on either parsers or schema language design, but its operation and design have little or nothing to do with JavaScript. JavaScript is just an interconnecting technology like Java and this is specifically addressed in the specification in Appendix G and H respectively.

And, after I tried to make the case that describing how things are today is no replacement for a vision of how they should be, Austin responds:

The problem with today’s web is that it is so focused on empowering the people that it is forgetting the technology along the way. One school of thought suggests the people would be better empowered if their world were less abstract, cost radically less to build and maintain, and is generally more expressive. One way to achieve such objectives is alter where costs exist in the current software life cycle of the web. If, for instance, the majority of costs were moved from maintenance to initial build then it could be argued that more time is spent being creative instead of maintaining.

I have found that when working in HTML alone that I save incredible amounts of time when I develop only in XHTML 1.1, because the browser tells you where your errors are. ... Challenges are removed and costs are reduced by pushing the largest cost challenges to the front of development.

... The typical wisdom is that people need to be empowered. If you would not think this way in your strategic software planning then why would it make sense to think this way about strategic application of web technologies? ...

This might all sound very rational on one level, but a few points need to be made:

I think Austin's point about moving costs from maintenance to build is meant to suggest that if we were only more strict about things, maintaining systems would be less expensive, but it's not clear to me that this has anything to do with strictness. My observation from building systems is that it has far more to do with being able to build modular, isolated systems that compose well. Combine that with systems that let you iterate fast, and you can grow very large things that evolve in response to user needs without turning into spaghetti quite so quickly. Yes, the web isn't great for that today, but strictness is orthogonal: Web Components stand to make maintainability dramatically better without demanding strictness anywhere.
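As a concrete (if anachronistic) sketch of that claim: the Custom Elements API below post-dates this post, and user-card is invented for illustration, but it shows modularity and composition doing the maintainability work with no strictness in sight:

```js
// A self-contained component: it owns its own internals...
class UserCard extends HTMLElement {
  connectedCallback() {
    if (!this.shadowRoot) {
      this.attachShadow({ mode: "open" }).innerHTML = "<b></b>";
    }
    this.shadowRoot.querySelector("b").textContent =
      this.getAttribute("name") || "";
  }
}
customElements.define("user-card", UserCard);

// ...and composes through ordinary, forgiving markup:
//   <user-card name="Ada"></user-card>
```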

And the last point isn't news. Postel's Law isn't a plea about what you, dear software designer, should be doing; it's an insightful clue into the economics of systems at scale. XML tried being strict and it didn't work. Not even for RSS. Mark Pilgrim's famously heroic attempts at building a reliable feed parser match the war stories I've heard from the builders of every large RSS system I've ever talked to. It's not that it's a nice idea to be forgiving about what you accept, it's that there's no way around it if you want scale.

What Austin has done is the classic bait-and-switch: he has rhetorically substituted what works in his organization (and works well!) for what's good for the whole world, or even what's plausible. I see this logical error in many a standards adherent/advocate. They imagine some world in which it's possible to be strict about what you accept. I think that world might be possible, but the population would need to be smaller than that of a small city. Such a population would never have created any of the technology we have, and real-world laws would be how we'd adjudicate disputes. As soon as your systems and contracts deal with orders of magnitude more people, it pays to be reliable. You'll win if you are and lose if you aren't. It's inescapable. So let's banish this sort of small-town thinking to the mental backwaters where it belongs and get on with building things for everyone. After all, this is about people: helping sentient beings achieve their goals in ways that are both plausible and effective.
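For the skeptical, the feed-parsing economics are easy to reproduce from a browser console: hand the same broken markup to the draconian XML parser and to the forgiving HTML parser. The snippet is a toy, and browsers differ in exactly how they surface XML failures, but the shape of the result is universal:

```js
const busted = "<feed><title>Example</title><item>unclosed</feed>";

// XML: one well-formedness error and you get an error document back.
const asXml = new DOMParser().parseFromString(busted, "text/xml");
console.log(asXml.getElementsByTagName("parsererror").length > 0); // true

// HTML: the parser recovers and hands back whatever it could salvage.
const asHtml = new DOMParser().parseFromString(busted, "text/html");
console.log(asHtml.body.textContent); // "Exampleunclosed"
```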

If helping people is not what this is about, I want out.

For Dave and David

Dave Herman jokingly accused me a couple of TC39 meetings ago of being an "advocate for JavaScript as we have it today", and while he meant it in jest, I guess to an extent it's true -- I'm certainly not interested in solutions to problems I can't observe in the wild. That tends to scope my thinking aggressively towards solutions that look like they'll have good adoption characteristics. Fix things that are broken for real people in ways they can understand how to use.

This is why I get so exercised about WebIDL and the way it breaks the mental model of JS's "it's just extensible objects and callable functions". It's also why my discussions with folks at last year's TPAC were so bleakly depressing. I've been meaning to write about TPAC ever since it happened, but the time and context never presented themselves. Now that I've gotten some of my words out about layering in the platform, the time seems right.

Let me start by trying to present the argument I heard from multiple sources, most likely (in my feeble memory) from Anne van Kesteren or Jonas Sicking(?):

ECMAScript is not fully self-describing. Chapter 8 drives a hole right through the semantics, allowing host objects to do whatever they want, and, more to the point, there's no way in JS to describe e.g. list accessor semantics. You can't subclass an Array in JS meaningfully. JS doesn't follow its own rules, so why should we? DOM is just host objects, and all of DOM, therefore, is Chapter 8 territory.

Brain asploded.

Consider the disconnect: they're not saying "oh, it sure would be nice if our types played better with JS", they're saying "you and what army are gonna make us?" Remember, WebIDL isn't just a shorthand for describing JavaScript classes, it's an entirely parallel type hierarchy.
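The parallel hierarchy is observable from any console. DOM collections are array-shaped but pointedly not Arrays:

```js
const divs = document.querySelectorAll("div");
console.log(divs instanceof NodeList); // true
console.log(Array.isArray(divs));      // false: a parallel type, not an Array

// Array methods have to be borrowed:
const ids = Array.prototype.map.call(divs, (el) => el.id);
```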

Many of the Chapter 8 properties and operations are still in the realm of magic from JS today, and we're working to open more of them up over time by giving them API -- in particular I'm hopeful about Allen Wirfs-Brock's work on making array accessors something that we can treat as a protocol -- but it's magic that DOM is appealing to and even specifying itself in terms of. Put this in the back of your brain: DOM's authors have declared that they can and will do magic.
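For the curious, here is the classic demonstration of that missing protocol. (ES6's class extends Array, which finally fixed this, landed well after this post was written.)

```js
// The old pattern fails: Array's index/length magic doesn't transfer.
function MyArray() {}
MyArray.prototype = Object.create(Array.prototype);
var a = new MyArray();
a[0] = "x";
console.log(a.length); // 0, because the exotic length-updating behavior never ran

// ES6 later made the magic inheritable by design:
class Elems extends Array {}
const e = new Elems();
e.push("y");
console.log(e.length); // 1
```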

Ok, that's regrettable, but you can sort of understand where it comes from. Browsers are largely C/C++ enterprises and DOM started in most of the successful runtimes as an FFI call from JS to an underlying set of objects which are owned by C/C++. The truth of the document's state was not owned by the JS heap, meaning every API you expose is a conversation with a C++ object, not a call into a fellow JS traveler, and this has profound implications. While we have one type for strings in JS, your C++ side might have bstr, cstring, wstring, std::string, and/or some variant of string16.

JS, likewise, has Number while C++ has char, short int, int, long int, float, double, long double, long long int...you get the idea. If you've got storage, C++ has about 12 names for it. Don't even get me started on Array.

It's natural, then, for DOM to just make up its own types so long as its raison d'être is to front for C++ and not to be a standard library for JS. Not because it's malicious, but because that's just what one does in C++. Can't count on a particular platform/endianness/compiler/stdlib? Macro that baby into submission. WTF, indeed.
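The result is visible in the platform's menagerie of collection types, none of which are Arrays:

```js
console.log(document.forms instanceof HTMLCollection);         // true
console.log(document.body.attributes instanceof NamedNodeMap); // true
console.log(document.styleSheets instanceof StyleSheetList);   // true
console.log(Array.isArray(document.forms));                    // false
// Each of these reinvented (or skipped) iteration on its own schedule.
```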

This is the same dynamic that gives rise to the tussle over constructable constructors. To recap: there is no way in JS to create a function which cannot have new on the left-hand side. Yes, the result might be something other than an instance of the function-object on the right-hand side. It might even throw an exception or do something entirely nonsensical, but because function is a JavaScript concept and because all JS classes are just functions, the idea of an unconstructable constructor is entirely alien. It's not that you shouldn't do it...the moment to have an opinion about that particular question never arises in JS. That's not true if you're using magic to front for a C/C++ object graph, though. You can have that moment of introspection, and you can choose to say "no, JS is wrong". And they do, over and over.
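The disagreement fits in a few lines. (A caveat: modern JS has since grown genuinely non-constructible functions in arrows and methods, but that came later; the contrast below still holds for plain functions.)

```js
// Any plain JS function tolerates `new`, even when the result is useless:
function answer() { return 42; }
console.log(new answer() instanceof answer); // true; the 42 is discarded

// Many DOM interfaces refuse outright:
try {
  new Node();
} catch (e) {
  console.log(e.message); // "Illegal constructor" (exact wording varies by browser)
}
```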

What we're witnessing here isn't "right" or "wrong"-ness; it's two entirely conflicting world views that wind up in tension, because from the perspective of some implementations and all spec authors, the world looks like this:

[Figure: diagram of the platform as a native C/C++ stack, with JS as a small appendage off to the side.]

Not to go all Jeff Foxworthy on you, but if this looks reasonable to you, you might be a browser developer. In this worldview, JS is just a growth protruding from the side of an otherwise healthy platform. But that's not how webdevs think of it. True or not, this is the mental model of someone scripting the browser:

[Figure: diagram of the browser as a stack of JS libraries, with the parser, DOM, and rendering system sitting beneath page script.]

The parser, DOM, and rendering system are browser-provided, but they're just JS libraries in some sense. With <canvas>'s 2D and 3D contexts, we're even punching all the way up to the rendering stack with JS, and it gets ever-more awkward the more our implementations look like the first diagram and not the second.
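That punch-through is literal. With a 2D context, script drives pixels directly, with no markup in between (a trivial illustration):

```js
const canvas = document.createElement("canvas");
canvas.width = 100;
canvas.height = 100;
document.body.appendChild(canvas);

const ctx = canvas.getContext("2d");
ctx.fillStyle = "green";
ctx.fillRect(10, 10, 50, 50); // paint happens at JS's command
```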

To get from parser to DOM in the layered world, you have to describe your objects as JS objects. This is the disconnect. Today's spec hackers don't think of their task as the work of describing the imperative bits of the platform in the platform's imperative language. Instead, their mental model (when it includes JS at all) pushes it to the side as a mere consumer in an ecosystem that it is not a coherent part of. No wonder they're unwilling to deploy the magic they hold dear to help get to better platform layering; it's just not something that would ever occur to them.

Luckily, at least on the implementation side, this is changing. Mozilla's work on dom.js is but one of several projects looking to move the source of truth for the rendering system out of the C++ heap and into the JS heap. Practice is moving on. It's time for us to get our ritual lined up with the new reality.
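A toy sketch of the dom.js idea (not its actual code): the tree's source of truth is ordinary JS objects, and the "DOM API" is just methods and accessors over them:

```js
function JSNode(nodeName) {
  this.nodeName = nodeName;
  this.childNodes = [];
  this.parentNode = null;
}

JSNode.prototype.appendChild = function (child) {
  if (child.parentNode) {
    const siblings = child.parentNode.childNodes;
    siblings.splice(siblings.indexOf(child), 1); // detach from the old parent
  }
  child.parentNode = this;
  this.childNodes.push(child);
  return child;
};

const body = new JSNode("BODY");
body.appendChild(new JSNode("P"));
console.log(body.childNodes[0].parentNode === body); // true: all JS, no C++
```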

Which brings me to David Flanagan, who last fall asked to read my manifesto on how the platform should be structured. Here it is, then:

The network is our bottleneck and markup is our lingua franca. To deny these facts is to design for failure. Because the network is our bottleneck, there is incredible power in growing the platform to cover our common use cases. To the extent possible, we should attempt to grow the platform through markup first, since markup provides the most value to the largest set of people and provides a coherent way to expose APIs via DOM.

Markup begets JS objects via a parser. DOM, therefore, is merely the largest built-in JS library.
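That sentence is demonstrable in four lines; any element and any markup would do:

```js
const host = document.createElement("div");
host.innerHTML = "<p title='hi'>Hello</p>";     // the parser runs here...
const p = host.firstElementChild;               // ...and yields JS-visible objects
console.log(p instanceof HTMLParagraphElement); // true
console.log(p.title, p.textContent);            // "hi" "Hello"
```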

Any place where you cannot draw a line from the browser-provided behavior of a tag to the JS API which describes it is magical. The job of Web API designers is first to introduce new power through markup, and second to banish magic, replacing it with understanding. There may continue to be things which exist outside of our understanding, but that is a challenge to be met by cataloging and describing them in our language, not an excuse for why we cannot or should not.

The ground below our feet is moving, and alignment throughout the platform, while not inevitable, is clearly desirable and absolutely doable in a portable, interoperable way. Time, then, to start making Chapter 8 excuses in the service of being more idiomatic and better layered, not less and worse.
