Hoisted From The Comments

Some stuff is too good to leave in the shadows. On my Bedrock post, James Hatfield writes in with a chilling point, albeit one I’ve been making for a long while:

“every year we’re throwing more and more JS on top of the web”

The way things are going in my world, we are looking at replacing the web with JS, not simply layering. At a certain point you look at it all and say “why bother”. Browsers render the DOM, not markup. They parse markup. Just cut out the middle man and send out DOM – in the form of JS constructs.

The second part is to stop generating this markup on a server, which must then be parsed at the other end of a slow, latency-prone communication channel. Instead, send small, compact instructions to the browser/client that tell it how to render and when to render. Later you send over the data, when it’s needed…
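
To make that concrete, here is a minimal sketch of the pattern James describes. The instruction shape, the render() helper, and the URL are all hypothetical, just to make the idea tangible:

    // The server ships compact rendering instructions instead of markup:
    const instructions = [
      { tag: 'h1', bind: 'title' },
      { tag: 'p', bind: 'summary' },
    ];

    // The client builds DOM directly, skipping the markup parser entirely:
    function render(plan, data) {
      const fragment = document.createDocumentFragment();
      for (const { tag, bind } of plan) {
        const el = document.createElement(tag);
        el.textContent = data[bind];
        fragment.appendChild(el);
      }
      return fragment;
    }

    // The data arrives later, when it's needed:
    fetch('/article/42.json')
      .then(res => res.json())
      .then(data => document.body.appendChild(render(instructions, data)));

Note that everything the built-in parser used to guarantee (the accessibility tree, crawlability, fallback rendering) is now this code’s problem, which is exactly the worry below.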

This is a clear distillation of what scares me about the road we’re headed down, because for each layer you throw out and decide to re-build in JS, you end up only doing what you must, and that’s often a deadline-driven must. Accessibility went to hell? Latency isn’t great? Doesn’t work at all without script? Can’t be searched? When you use built-ins, those things are more-or-less taken care of. When we make them optional by seizing the reins with script, not only do we wind up playing them off against each other (which matters more, a11y or latency?), we often find that developers ignore the bits that aren’t flashy. Think a11y on the web isn’t great now? Just wait ’til it’s all JS-driven.

It doesn’t have to be this way. When we get Model Driven Views into the browser we’ll have the powerful “just send data and template it on the client side” system everyone’s looking for, but without threatening the searchability, a11y, and fallback behaviors that make the web so great. And this is indicative of a particularly strong property of markup: it’s about relationships. “This thing references that thing over there and does something with it” is hard for a search engine to tease out if it’s hidden in code, but if you put it in markup, well, you’ve got a future web that continues to be great for users, the current crop of developers, and whoever builds and uses systems constructed on top of it all later. That last group, BTW, is you if you use a search engine.
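
To make the “markup is about relationships” point concrete, here’s a minimal sketch contrasting the two versions of the same relationship (the ids are illustrative):

    // The markup version of "this thing references that thing over there":
    //   <label for="email">Email</label> <input id="email" type="email">
    // A crawler or screen reader recovers the relationship just by reading
    // attributes. The script-only version hides it inside behavior:
    const label = document.createElement('span');
    label.textContent = 'Email';
    label.onclick = () => document.querySelector('#email').focus();
    // Focus still moves on click, but nothing machine-readable is left
    // to say "this labels that".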

But it wasn’t all clarity and light in the comments. Austin Cheney commented on the last post to say:

This article seems to misunderstand the intention of these technologies. HTML is a data structure and nothing more. JavaScript is an interpreted language whose interpreter is supplied with many of the most common HTML parsers. That is as deep as that relationship goes and has little or nothing to do with DOM.

…It would be safe to say that DOM was created because of JavaScript, but standard DOM has little or nothing to do with JavaScript explicitly. Since the release of standard DOM it can be said that DOM is the primary means by which XML/HTML is parsed suggesting an intention to serve as a parse model more than a JavaScript helper.

Types in DOM have little or nothing to do with types in JavaScript. There is absolutely no relationship here and there shouldn’t be…You cannot claim to understand the design intentions around DOM without experience working on either parsers or schema language design, but its operation and design have little or nothing to do with JavaScript. JavaScript is just an interconnecting technology like Java and this is specifically addressed in the specification in Appendix G and H respectively.

And, after I tried to make the case that noting how it is today is no replacement for a vision for how it should be, Austin responds:

The problem with today’s web is that it is so focused on empowering the people that it is forgetting the technology along the way. One school of thought suggests the people would be better empowered if their world were less abstract, cost radically less to build and maintain, and were generally more expressive. One way to achieve such objectives is to alter where costs exist in the current software life cycle of the web. If, for instance, the majority of costs were moved from maintenance to initial build, then it could be argued that more time is spent being creative instead of maintaining.

I have found that when working in HTML alone I save incredible amounts of time by developing only in XHTML 1.1, because the browser tells you where your errors are. … Challenges are removed and costs are reduced by pushing the largest cost challenges to the front of development.

… The typical wisdom is that people need to be empowered. If you would not think this way in your strategic software planning then why would it make sense to think this way about strategic application of web technologies? …

This might all sound very rational on one level, but a few points need to be made:

  • If we’re not building technology for people, WTF are we doing with these CPU cycles exactly?
  • Speed of iteration is a key enabler of progress in any technology stack I’ve ever worked in
  • Strictness fails in the wild

I think Austin’s point about moving costs from maintenance to build is supposed to suggest that if we were only more strict about things, we’d have less expensive maintenance of systems, but it’s not clear to me that this has anything to do with strictness. My observation from building systems is that this has a lot more to do with being able to build modular, isolated systems that compose well. Combine that with systems that let you iterate fast, and you can grow very large things that can evolve in response to user needs without turning into spaghetti quite so quickly. Yes, the web isn’t great for that today, but strictness is orthogonal. Nothing about Web Components demands strictness to make maintainability infinitely better.
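
For illustration, here’s a minimal sketch of that kind of modular, composable isolation, written against the custom elements API that eventually shipped (which postdates this discussion; the element itself is made up):

    // A self-contained component: internals isolated behind shadow DOM,
    // composing with the rest of the page like any built-in element.
    class UserCard extends HTMLElement {
      connectedCallback() {
        const shadow = this.attachShadow({ mode: 'open' });
        const name = document.createElement('b');
        name.textContent = this.getAttribute('name');
        shadow.appendChild(name);
      }
    }
    customElements.define('user-card', UserCard);
    // Used declaratively, no strictness required:
    //   <user-card name="Alex"></user-card>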

And the last point isn’t news. Postel’s Law isn’t a plea about what you, dear software designer, should be doing; it’s an insightful clue into the economics of systems at scale. XML tried being strict and it didn’t work. Not even for RSS. Mark Pilgrim’s famously heroic attempts at building a reliable feed parser match the war stories I’ve heard out of the builders of every large RSS system I’ve ever talked to. It’s not that it’s a nice idea to be forgiving about what you accept; it’s that there’s no way around it if you want scale. What Austin has done is the classic bait-and-switch: he has rhetorically substituted what works in his organization (and works well!) for what’s good for the whole world, or even what’s plausible. I see this common logical error in many a standards adherent/advocate. They imagine some world in which it’s possible to be strict about what you accept. I think that world might be possible, but the population would need to be less than the size of a small city. Such a population would never have created any of the technology we have, and real-world laws would be how we’d adjudicate disputes. As soon as your systems and contracts deal with orders of magnitude more people, it pays to be reliable. You’ll win if you do and lose if you don’t. It’s inescapable. So let’s banish this sort of small-town thinking to the mental backwaters where it belongs and get on with building things for everyone. After all, this is about people: helping sentient beings achieve their goals in ways that are both plausible and effective.

If helping people is not what this is about, I want out.

14 Comments

  1. Jeremy Snyder
    Posted April 17, 2012 at 11:22 am | Permalink

    Couple of typos or words missing:

    3rd paragraph (supposed to be: if ):
    it’s about relationships. “This thing references that thing over there and does something with it” is hard for a search engine to tease out <> it’s hidden in code, but if you put it in markup

    Second-to-last paragraph (missing word: that)
    Combine that with systems that let you iterate fast, and you can grow very large things <> can evolve in response to user needs without turning into spaghetti quite so quickly.

    Last paragraph (supposed to be: for ):
    So lets banish this sort of small-town thinking to the mental backwaters where it belongs and get on with building things <> everyone.

    Great post; it caused me to consider the issue of JS vs. markup… and strictness in what you output or input… and I had to look up a11y (which I learned is shorthand for accessibility). Thought provoking for sure.

  2. Posted April 17, 2012 at 2:20 pm | Permalink

    Thanks Jeremy! Fixed.

  3. Scott Hughes
    Posted April 17, 2012 at 6:28 pm | Permalink

    Alex, where does JSON fit in the argument against draconian error handling?

    Is the proliferation of application/json a counterexample to this argument? Or is it a format in need of displacement? Does JSON.parse need a fault-tolerant parser? Do we need to consider the impact of introducing something like Model Driven Views that encourages use of JSON?

  4. Posted April 18, 2012 at 3:34 am | Permalink

    Great question!

    JSON’s lineage is from programming, which is definitionally strict, and as a result it is the purview of a much smaller community of producers than either those who build markup or the designers who use tools. To the extent that the web depends on programming and not markup, the group of people who can produce it will shrink. If you think that’s a good thing, I suggest you take a look at the history of what happens when economies of all sorts begin to shrink. It ain’t pretty.

    But will widespread JSON usage lead to leniency? I think so. Many JSON parsers accept JS-style comments today, over and above the dead body of RFC 4627. I’ve seen leniency in the parsing of trailing commas, etc. Many parsers support un-quoted property names. It just comes with the territory.
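
    That kind of leniency typically looks like a pre-processing pass in front of a strict JSON.parse, rather than a change to the parser itself. A minimal sketch (the cleanup rules are illustrative, and naive: they would mangle comment-like text inside strings):

        function lenientParse(text) {
          const cleaned = text
            .replace(/\/\/[^\n]*/g, '')        // strip // comments
            .replace(/\/\*[\s\S]*?\*\//g, '')  // strip /* */ comments
            .replace(/,\s*([}\]])/g, '$1');    // drop trailing commas
          return JSON.parse(cleaned);
        }
        lenientParse('{ "a": 1, /* note */ "b": [2, 3,], }'); // → { a: 1, b: [2, 3] }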

    As for MDV’s role in all of this, I’m incredibly hopeful that the common mode of use for MDV will be as progressive enhancement. I.e., you’ll serve your DOM in the page as your “fallback content”, which will be consumed by MDV as the model, and your templates will be what is cached and applied later to generate the UI. MDV does operate on a JS object tree, but that tree can be DOM, and if it’s an HTML DOM, it can continue to reap the benefits of reliable parsing. I’m optimistic as a result.
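
    MDV’s syntax was still settling when this was written, so here is that progressive-enhancement pattern in plain DOM terms instead (a sketch; the id and data shape are illustrative):

        // The server-rendered fallback markup doubles as the model:
        //   <ul id="products"><li>Widget</li><li>Gadget</li></ul>
        const items = document.querySelector('#products');
        const model = Array.from(items.children, li => ({ name: li.textContent }));
        // A cached template can re-render from `model` once script arrives;
        // without script, the markup is still a complete, crawlable page.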

    Regards

  5. Posted April 18, 2012 at 7:22 am | Permalink

    Quick note – I had to look up a11y, so for those who also didn’t know this one:
    http://en.wikipedia.org/wiki/Computer_accessibility

  6. Caleb
    Posted April 21, 2012 at 1:14 am | Permalink

    I agree with the sentiment that technological solutions must ultimately solve human problems. However, the assertion “Strictness fails in the wild” needs qualification. Even JavaScript is “strict” compared to the set of all valid English sentences. This strictness makes JavaScript a far more useful method for developing web applications than free-form English.

    Strictness enables (relatively) unambiguous expression of a developer’s intentions. As long as (roughly) deterministic behavior is needed in our products and services, some level of strictness will be required.

    Also, I personally find a certain level of strictness very helpful when it comes to rapid iteration. For instance, I find it far less time-consuming to refactor code in a strongly typed language, because the compiler/interpreter will catch a lot of human errors (e.g. passing a date object into a function that used to accept a formatted string). In dynamically typed languages, such mistakes often aren’t caught until runtime, unless I re-invent a static type system with unit tests.
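
    Sketched in JavaScript terms (JSDoc type annotations are one way to get this feedback without leaving the language; Closure Compiler is one tool that checks them):

        /** @param {string} when  a formatted date, e.g. "2012-04-21" */
        function schedule(when) {
          return 'Scheduled for ' + when;
        }
        schedule(new Date()); // a type-aware checker flags this at build
                              // time; plain JS silently coerces the Date
                              // to a string and ships the bug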

  7. Posted April 21, 2012 at 6:09 am | Permalink

    Hey Caleb:

    So yes, I didn’t outline the entire spectrum of strictnesses in this post, but you’ve hit on the essential nature of the differences without considering the impact on people other than yourself. I.e., the stricter you are, the fewer people can play. If you don’t think this is the case, I ask you to consider what the anchor tag (<a href="...">) means. There is no strict definition for what it does, only convention. Think beyond its parsing: its very behavior is ambiguous. Clicking on one isn’t guaranteed to do anything in a browser by any particular spec, and nobody much cares. As a result of both parsing and behavioral ambiguity, an order of magnitude more people can use anchor tags than can write the equivalently functional JS.

    Finally, there’s the question of feedback. You’re saying “strictness at development time helps”, but I’d like to push back on this and suggest that what you’re really putting your finger on is feedback. You want to see the effects of what your program will do quickly. Types are a handy way of saying “please give me feedback about this thing in this small scenario”. Optional and gradual typing (currently best understood in the form of Dart) gets you there and breaks the “you need types at runtime” fallacy along the way. Put another way, your runtime gains little by being strict, and in fact can get a lot of leverage out of ignoring strictness and just muddling on.

    So yes, I feel the pain you’re describing, but it’s time to stop asking for strictness and start asking for feedback. One is about saying “no” and the other is about saying “this is what will happen, what do you think about that?”. I know which one I want and the evidence is that systems which give you feedback and not a hassle are the ones that will be better to work and live in.

  8. Caleb
    Posted April 21, 2012 at 9:53 am | Permalink

    Hi Alex,

    We’re certainly in agreement that more strictness narrows the audience, but some measure of strictness is unavoidable. Some level of strictness is required to give a language structure. For instance, people who want to play in the HTML space must be able to read, write, and operate a keyboard. They can’t simply mash out random characters and expect meaningful results. When it comes to HTML, illiterate people can’t play. Would it be nice if illiterate people could play with HTML? Absolutely! Is it possible? I don’t think so.

    Obviously, the appropriate level of structure and strictness is highly dependent on the task at hand. But less strictness is not always useful, or productive. <valleygirl>Like, you know, whatever!</valleygirl> (should I start that sentence with a “” tag for clarity? hmm…)

    Regarding iteration: _feedback_ is a good term to use, since tight feedback loops are vital to rapid iteration. However, certain forms of feedback _require_ strictness (type systems being a good example). So, setting up strictness and feedback in opposition to each other doesn’t seem like a useful way to pursue our goal of providing less hassle and more livability in our systems.

    “No” is the only valid feedback in some cases. Saying “no” is feedback to the designer that their input is non-computable. Saying “no” is failing loudly, and failing early, when the designer’s attention is on design and diagnostics, instead of (potentially) days/weeks/years down the road when they’ve moved on to other projects and must do a full mental context switch before fixing a problem. Saying “no” can help the designer fall into the pit of quality.

    I’m very excited to see what sort of workflow is enabled by optional typing, but I can’t agree that runtime type strictness has few gains. Programs fail at run time, and no matter how well we test and type-check at design time, it’s almost impossible to account for every failure mode. When (not if) a program fails, diagnostics are much more efficient if the type information is available. In fact, diagnostics are almost always part of my design workflow, so I would argue that tight diagnostic feedback loops are just as important as tight design feedback loops.

  9. Caleb
    Posted April 21, 2012 at 9:58 am | Permalink

    …aaaand it appears my “valleygirl” tag was stripped out. *sigh* My kingdom for an edit button.

  10. Matt
    Posted April 22, 2012 at 5:55 am | Permalink

    WRT Strictness: strictness while developing an “application” is wonderful. I want to know every missed comma, quote, and closing tag when I’m actually writing the HTML. We just need to be able to turn that off when we push the “publish” button.

    This seems to be what Austin’s doing too. He (?) mentions “XHTML 1.1” in the context of _developing_ HTML applications. I don’t think he’s actually publishing anything using an XHTML doctype. This sounds like a sane practice to me.

  11. Posted April 22, 2012 at 10:22 am | Permalink

    Caleb: I edited the comment to add the valleygirl tag back in = )

    Matt: I don’t know of any system that re-works XHTML for HTML today other than the reliable parsing of browsers. I.e., do you have a system that fixes <br/> and turns it into <br>? I’d love to see such beasts and hear how people get on with them.

    My ideal system would be an editor in the web inspector with something like jshint for HTML embedded; that way I’d be able to see the real-time effects while getting feedback based on (customized) rules. Those rules should never produce an exception (the thing won’t load); instead they should be highlights and advice.
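
    The “highlights and advice, never an exception” idea sketches out as rules that collect findings instead of throwing (the rule shown is illustrative):

        function lintHTML(source, rules) {
          const advice = [];
          for (const rule of rules) {
            const finding = rule(source);
            if (finding) advice.push(finding); // surface in the inspector
          }
          return advice; // the document loads either way
        }
        const rules = [
          src => /<br\s*\/>/i.test(src)
            ? 'XHTML-style <br/> found; plain <br> is all HTML needs'
            : null,
        ];
        lintHTML('<p>hi<br/></p>', rules); // → one advisory finding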

  12. Caleb
    Posted April 22, 2012 at 6:37 pm | Permalink

    Thanks, Alex :) The addition of the opening tag (which I originally omitted) seems quite humorously relevant to the discussion, since I was trying to make a point about strictness in a meta sort of way (are opening tags really a *requirement* for HTML to be parsable?)

  13. Posted May 10, 2012 at 10:17 am | Permalink

    I’d like to add a follow-up to my comment quoted above.

    Something my team has to deal with on a daily basis is multi-language, multi-region contexts for presentation, data, and business logic. We also run A/B and multivariate tests continuously and beta test site re-designs and new features regularly on multiple sites.

    This may not be a typical scenario but I’m sure there are others who share our pain points.

    MDV looks good, but how long do we wait?

    In the meantime, we are looking to implement server and client templating using the same model (JSON) and the same markup (Mustache spec), but thinking seriously about whether the server-side app is needed at all (outside of SERP considerations).
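
    A minimal sketch of that shared-template setup with mustache.js, where Mustache.render is the same call on both ends (the selector and data shape are illustrative):

        // One template string shipped to both server (node.js) and browser:
        var template = '<li>{{name}}: {{price}}</li>';
        // Server: res.write(Mustache.render(template, product));
        // Client (assumes mustache.js is loaded):
        document.querySelector('#cart').innerHTML =
            Mustache.render(template, { name: 'Widget', price: '$9' });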

  14. Posted May 10, 2012 at 10:22 am | Permalink

    BTW companies like Strangeloop and Blaze.io make a business out of doing targeted optimizations for markup, CSS and JavaScript using a proxy/cache system that optimizes from the origin and then pushes out to the edge.

One Trackback

  1. By Four short links: 20 April 2012 | Share Blog on April 24, 2012 at 10:25 am

    […] Javascript All The Way Down (Alex Russell) — points out that we’re fixing so much like compatibility, performance, accessibility, all this stuff with Javascript. We’re moving further and further from declarative programming and more and more back to the days of writing heaps of Xlib or Motif toolkit code to implement our UIs and apps. […]