View-Source Is Good? Discuss.

I’ve been invited by Chris Messina and some kindly folks at MSFT to participate in a panel at this year’s SxSW regarding the value and/or necessity of view-source, and so with apologies to my fellow panelists, I want to get the conversation started early.

First, my position: ceteris paribus, view-source was necessary (but not sufficient) to make HTML the dominant application platform of our times. I also hold that it is under attack — not least of all from within — and that losing view-source poses a significant danger to the overall health of the web.

That’s a lot to hang on the shoulders of a relatively innocent design decision, and I don’t mean to imply that any system with a view-source-like feature will become dominant. But I do argue that it helps, particularly when coupled with complementary features like reliable parsing, semantic-ish markup, and plain-text content. Perhaps it’s moving the goalposts a bit, but when I talk about the importance of view-source, I’m more often than not discussing these properties together.

To understand the importance of view-source, consider how people learn. There is evidence that even trained software engineers choose to work from copy-and-pasted example code. Participants in the linked study even expressed guilt over the copy-paste-tweak method of learning, but guilt didn’t change the dynamic: a blank slate and abstract documentation don’t facilitate learning nearly as well as poking at an example and feeling out the edges by doing. View-source is a powerful catalyst for a culture of shared learning and learning-by-doing, which in turn helps learners formulate a mental model of the relationship between input and output faster. Web developers get started by taking some code, pasting it into a file, saving, and loading the result in a browser; from then on they switch between editor and browser for even the most minor changes. The only required equipment is a text editor and a web browser, tools that are free and work together instantly: there’s no waiting between saving the file to disk and viewing the results. It’s just a ctrl-r away. That’s a stark contrast with technologies that impose a compilation step, where seeing what you’ve done requires an intermediate stage. Immediacy of output helps build an understanding of how the system will behave, and ctrl-r becomes a seductive, productive way for developers to accelerate their learning through the copy-paste-tweak loop.
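The entire toolchain fits in one file. A sketch of the sort of starting point a newcomer might paste into a text editor, open in a browser, and begin tweaking (the file name and contents are, of course, illustrative):

```html
<!-- hello.html: save, open in a browser, tweak, hit ctrl-r -->
<!DOCTYPE html>
<html>
  <head>
    <style>
      /* change a value, reload, and see what moves */
      h1 { color: #336699; }
    </style>
  </head>
  <body>
    <h1>Hello, web</h1>
    <script>
      // poke at the page from script, reload, repeat
      document.querySelector("h1").title = "edited by hand";
    </script>
  </body>
</html>
```

No build step, no SDK: the save-and-reload loop above is the whole development cycle.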

With that hyper-productive workflow as the background, view-source helps turn the entire web into a giant learning lab, one that’s remarkably resilient to error and experimentation. See an interesting technique or layout? No one can tell you “no” when you set out to figure out how it was done. Copy some of it, paste it into your document, and you’ll get something out the other side. Because browsers recover from errors gracefully, the result is a welcoming learning environment, free of the sense of inadequacy that a compile failure tends to evoke; more often than not, you can still see what went wrong. The evolutionary advantage of reliable parsing has helped ensure that strict XML content comprises roughly none of the web, a decade after it was recognized as “better” by world+dog. Even the most sophisticated (or broken) content is inspectable at the layout level, and tools like Firebug and the Web Inspector accelerate the copy-paste-tweak cycle by inspecting dynamic content and allowing live changes without reloads, even on pages you don’t “own”. The predicate for these incredibly powerful tools is the textual, interpreted nature of HTML. There’s much more to say about this, but let’s instead turn to the platform’s relative weaknesses as a way of understanding how view-source is easily omitted from competing technologies.
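To make the resilient-parsing point concrete, here is a deliberately broken fragment that mainstream HTML parsers will render anyway, recovering at each error instead of refusing the page (an illustrative sketch, not drawn from any real site):

```html
<!-- Deliberately broken markup: HTML parsers recover and render it anyway. -->
<p>This paragraph is never closed.
<p clas="typo">This one misspells its class attribute.
<b><i>These tags are closed in the wrong order.</b></i>
<p>A strict XML parser would stop dead at the first error; an HTML parser keeps going.
```

A learner who pastes this in still gets a page on screen, and can work backwards from what rendered to what they meant.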

The first, and most obvious, downside of the open-by-default nature of the web is that it encourages multiple renderers. Combined with the ambiguities of reliable parsing and semantics that leave room for interpretation, it’s no wonder that web developers struggle with incompatibilities. In a world where individual users each need to be convinced to upgrade to the newest version of even a single renderer, differences in version alone can wreak havoc in the development process. Things that work in one place may not look exactly the same in another. This is both a strength and a weakness for the platform, but at the level of sophisticated applications, it’s squarely a liability. Next, ambiguities in interpretation and semantics make the project of creating tooling for the platform significantly more complex. If only one viewer is prevalent (for whatever reason), then tools need only consume and generate code for the constraints, quirks, and performance of a single runtime. An alternate form of this simplification is to allow only code (not markup), eliminating parsing ambiguity outright. The code-not-markup approach yields a potentially more flexible platform, and one that can begin executing content more quickly (as Flash does). Taken together, these advantages can create an incredibly productive environment for experts in the tools that generate content: no output ambiguity, better performance, and tools that can deliver true WYSIWYG authoring. Such tools can sidestep the ctrl-r cycle entirely.

But wait, I hear you shout, it’s possible to do code-only, toolable, full-fidelity development in JavaScript! Tools like GWT and Cappuccino generate code that generates UI, ensuring that only those who can write code (or have tools that can) will participate, and removing the potential value of view-source for those apps. But let’s be honest: view-source is nearly never locally beneficial. I can hardly count the number of times I’ve seen the “how do I hide my code?” question from a web n00b who (rightly or wrongly) imagines there’s value in it. For GWT, the fact that the output is an HTML DOM styled with CSS is as much annoyance as benefit. The big upside is that browsers are the dominant platform and you don’t have to convince users to install some new runtime.

Similarly, Flex, Laszlo, GWT’s UI Binder, and Silverlight have discovered the value of markup as a simple, declarative way for developers to understand the hierarchical relationships between components; but because their tags correspond to completely unambiguous definitions of components, they rely on compiled code — not reliably parsed markup — for final delivery of the UI. These tight contracts turn into an evolutionary straitjacket. Great if you’re shipping compiled code down the wire that can meet the contract, but death if those tags and attributes are meant to live for long periods of time or across multiple implementations. You might be able to bolt view-source onto the output, but it’ll always be optional and ad hoc, properties that work against it becoming pervasive. Put another way, the markup versions of these systems are leaky abstractions over the precise, code-centric system that undergirds both the authoring and runtime environments. This code-centric bias is incredibly powerful for toolmakers and “real” developers, but it cuts everyone else out entirely; namely, those who won’t “learn to program” or who want to build tools that inject content into the delivery format.

Whatever the strengths of code-based UI systems, they throw web crawlers for a loop. Today, most search engines deal best with text-based formats, and those search engines help make content more valuable in aggregate than it is on its own. Perhaps it’s inevitable that crawlers and search engines will need to execute code in order to understand the value of content, but I remain unconvinced. As a thought experiment, consider a web constructed entirely of Flash content. Given that Flash bytecode lacks a standard, semantic way to denote a relationship between bits of Flash content, what parts of the web wouldn’t have been built? What bits of your work would you do differently? What would the process be? There’s an alternate path forward which suggests that we can upgrade the coarse semantics of the web to deal with ever-more-sophisticated content requirements. Put another way: use the features of today’s toolkits and code generators as a TODO list for markup-driven features. But the jury is still out on the viability of that approach; the same dynamic that makes multiple renderers possible ensures that getting them to move in a coordinated way is much harder than the unilateral feature roadmap that plugin vendors enjoy. HTML 5 and CSS 3 work is restarting those efforts, but only time will tell if we can put down the code and pick markup back up as a means of expressing ourselves.

I’ve glossed over a lot of details here: I haven’t discussed the server-side implications of a non-text format as our lingua franca, nor have I dug into the evolution metaphor, and many of the arguments are conditional on economic assumptions. There’s lots of discussion yet to have, so if you’ve got links to concrete research in either direction, or an experience that bears on the debate, post in the comments! Hopefully my fellow panelists will respond in blog form, and I’ll update this post when they do.


  1. Posted January 7, 2010 at 9:13 pm | Permalink

    from your post to openweb-group:
    1.) I learned html and css, mostly from view-source
    2.) I use view-source about daily to see how something works, scrape some text cleanly, or check how the CMS/blog engines I use build code.
    3.) it would be nice to see vendors subscribe to W3C-compliance. All the efforts that Web developers make in work-arounds to browser incompatibility is wasteful
    4.) Rapid response time will win my vote in browser wars and Google is listening
    I digress, I like view-source, see you at SxSW. Best, Jim

  2. Posted January 7, 2010 at 10:57 pm | Permalink

    Thanks for that.

    I’ve given talks on the open web a couple of times, and tried to use the phrase ‘lazy text’ to describe what’s good about view source. That is to say it’s not just the ability to view the source, it’s also, as you say, the ability to learn and play in a lazy way that counts.

  3. Joeri
    Posted January 8, 2010 at 5:34 am | Permalink

    You don’t need view source to be beginner-friendly. Visual Basic is one of the most popular beginner languages of all time, and it didn’t have view source. What beginners need is a lot of documentation and example code, and that can be delivered just as easily for platforms that don’t have view source.

    There’s a big difference between HTML view source and DOM inspection. HTML view source is not useful most of the time, while DOM inspection is definitely useful. DOM inspection tells you how they did things, not how the page was initially received by the browser. You don’t need a markup-driven delivery system for that. A DOM constructed from JavaScript is just as inspectable as one constructed from HTML. Even Flash can be inspected at run-time by navigating its display object hierarchy.

  4. Posted January 8, 2010 at 7:10 am | Permalink

    HTML/CSS/JavaScript are still immature technologies to me. Look at how fast they evolve:
    – HTML5 is almost out (geolocation, websockets, etc.)
    – CSS3, especially with animations, is very promising
    – ECMAScript continues to evolve

    There are so many contributors that following one set of guidelines (from the W3C?), the way we learned from Borland with OWL or from Microsoft with MFC, doesn’t apply to the (standard) Web application field! And this is without counting the vast number of wonderful implementations we need to learn from to improve our applications…

    Here is a simple situation:
    – You want to forward a link to a public message you posted on Twitter (a URL like:<message_id>)
    – You look for the id of a specific Twitter Public Message.
    – You can:
    1) Use the Twitter API to search for it
    2) Use “view-source” or a DOM inspector to read the identifier of the <li/> tag enclosing the message (something like: status_7449266450)
    – Personally, I prefer the second one ;)
    – You can now copy the identifier and forward the URL:
    – If needed, I can now write a GreaseMonkey script to generate the link automatically.
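    A minimal sketch of what the core of such a userscript might be (the “status_<id>” format follows the example above; the URL shape and function name are illustrative assumptions, not Twitter’s documented API):

```javascript
// Sketch: turn a DOM id like "status_7449266450" into a status URL.
// The "status_<id>" format follows the example above; the URL shape is an assumption.
function statusUrl(elementId, user) {
  var match = /^status_(\d+)$/.exec(elementId);
  if (!match) {
    return null; // not a status list item
  }
  return "http://twitter.com/" + user + "/status/" + match[1];
}
```

    In a GreaseMonkey script one might loop over the message list items and append an anchor built with statusUrl to each; the function above is just the string-building core.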

    View-source: a must have! ;)
    A+, Dom

  5. Posted January 8, 2010 at 7:48 am | Permalink

    Great post, thanks. It got me to thinking about how I landed as a web dev… for almost 10 years, my work was primarily done in the old Visual Basic (5/6), where the same rapid development cycle (edit/view/update/refresh/lather/rinse/repeat) holds true.

  6. Renate DeRoch
    Posted January 8, 2010 at 9:57 am | Permalink

    Most of what I know now was learned using view source or examples on the web. If you are attempting to do something you haven’t done before, the best way to learn it is by seeing how it’s done. I’m a huge believer in the copy-paste-tweak method. I have never learned anything truly useful from programming books; they never seem to address the problem I’m trying to solve. The web has always been my reference shelf.

  7. Posted January 8, 2010 at 10:36 am | Permalink


    I agree that copious example code *can* be made available on other platforms, but I’m not sure that I’d agree with “as easily”. The web makes it the default. Every other technology makes it a special case. Defaults matter.


  8. Joeri
    Posted January 8, 2010 at 11:20 am | Permalink

    Defaults matter, this is true, but I think you overestimate the value of those defaults. In real-world web apps and websites there’s so much code unrelated to the snippet you need that most of the time it’s easier to find a separate example than to dig into some app with Firebug to see how they did things. Live code is NOT the same thing as example code (and many web SDKs make this mistake).

    I cut my programming teeth on Visual Basic 3, and I doubt I would have learned any faster had VB3 apps offered a view source feature. To be fair, I did learn CSS by looking at existing sites, but then CSS is under no threat from becoming less accessible to inspect, especially with tools like firebug.

    But maybe the problem is that I have an app-centric perspective, building web apps, not web sites. Maybe if all someone wants to do is learn how to build static web sites or non-ajax web apps, looking at the source of existing sites can teach them a thing or two.

  9. Felipe
    Posted January 8, 2010 at 11:45 am | Permalink

    Great post! Because I can see the code, I’ve learned to create HTML and CSS. For me it is an important part of learning.

  10. Yamaban
    Posted January 8, 2010 at 4:38 pm | Permalink

    Maybe View-Source is NOT for everybody and their dog, but as a debugger and hot-spotter it’s a tool I can’t do without.
    Maybe View-Source, along with some other features, should be moved to an “Expert mode” that can be enabled or disabled through an options dialog, but not removed.
    It’s a tool. Like any other tool, it can be abused; the same goes for a hammer or a screwdriver. The tool itself is neither good nor bad, but you’ll miss it when it’s needed and not available.

  11. Posted January 8, 2010 at 6:31 pm | Permalink

    View source is one of the three most important tools in my web-dev belt (the other two are a text editor and a browser).

    I feel that view source is of critical importance to the web both historically and in the future.

    I’ve blogged a more detailed response to your post:

    And I would like to start a movement:

    Save View Source!

  12. Posted January 9, 2010 at 6:47 am | Permalink

    “View source” is a must not only for learning, but for innovation in technology. I think this is the same thing as the discussion of open-source versus closed-source technologies: the more open something is, the more likely it is to grow and take shape.

  13. Posted January 9, 2010 at 8:24 am | Permalink

    View-Source solves “problems” for web developers and designers that they didn’t even know they had. It acts as a sort of instant gratification (of curiosity) as well as a source of inspiration.

    I [heart] View-Source.

  14. Mr. Nguyen
    Posted January 10, 2010 at 8:53 pm | Permalink

    You should take more time to write this article shorter!

  15. Posted January 11, 2010 at 4:17 am | Permalink

    generating SVG with PHP, copy-pasting from other SVG (or CSS, JavaScript), debugging with view-source, ctrl-F-ing the content, searchbot-ting the source. View-Source is Good!

  16. Ray Cromwell
    Posted January 11, 2010 at 4:54 am | Permalink

    GWT does not preclude progressive enhancement/markup; that’s a false dichotomy. Its freedom and limitations are much the same as Closure Compiler’s. You can design a Closure-like, Dojo-like, or jQuery-like library for GWT if you wish. See GwtQuery for an example, but even with the built-in GWT Widget library you can do layout in HTML and overlay/wrap Widget code on top.

    Secondly, I would say that in the long term (the next decade or two), it will become increasingly difficult to force every programmer in the world to write their apps in JavaScript in a text editor, so it is better to accept that people will use other languages and compilers, translators, or code-gen tools, and to design mechanisms around this that preserve “view source”, than to fight tooth and nail for JavaScript. I dunno, whenever this discussion comes up, I feel like there’s an underlying language-war aspect.

    This only gets worse with stuff like NaCL and other extensions various vendors are adding to the Web. I would like to propose another panel, which is to discuss the role of the browser as general purpose execution environment (e.g. Chrome OS, NaCL, et al), how that relates to developer expectations towards languages and tools. With more and more of the world’s information economy and services moving to the web, I think it’s unreasonable to expect mono-culture in tools and methods.

    One can however have agreed upon best practices. Browsers, for example, can still support “view source” even if they’re looking at obfuscated JS for execution. There’s no a priori reason that what is downloaded and executed has to be the same as what is shown in View Source. One could theoretically support “View Source” for C++ code used to generate a native client executable.

  17. Posted January 11, 2010 at 8:23 am | Permalink

    +1 to everything Ray said.

    I feel like this is a legitimate discussion, but it’s somewhat mischaracterized. We all (I believe) agree on the importance of the web as a platform. The tension is between two different methods of site construction — page-like and application-like, for want of better terms.

    Page-like sites tend to be content-oriented. And for these, straightforward markup, potentially with a bit of js to spiff them up, is a sensible approach. This is what we often tend to think of as “the web”, and I believe is what you’re defending here. I don’t disagree.

    Application-like sites tend to be, well, applications. They’re complex interactive UIs that work without page reloads. While this may not be the right approach for most “sites”, it’s absolutely invaluable for things like Gmail, Maps, Calendar, Docs, Spreadsheets, AdWords, Wave, and many other apps. Semantic markup with simple script is entirely inappropriate in these cases.

    Most tools, be they GWT, Dojo, Closure, jQuery, or what have you, can be used either way. The decision has little to do with your tool chain, and everything to do with the kind of site you’re building. To wit — Dojo’s mail demo is constructed in “application style” (i.e., it’s entirely programmatically constructed and contains zero semantic information in its source) — but Dojo certainly doesn’t require that you work that way, any more than other tools do. That’s not a criticism of Dojo, though. Dojo’s design clearly recognizes that both approaches are sensible.

    As to the evolutionary forces enabled by “view source”, I couldn’t agree more. But as applications become more complex, we need more powerful tools to achieve the same effect. And I think the best known solution at present is to aggressively open-source whenever possible. While this remains an uphill battle at many companies, I’m seeing more and more companies that recognize the value of releasing their code. It might require more work, but I believe that it’s the only viable long-term solution to the kind of information sharing that made the web what it is today.

  18. Frymaster
    Posted January 11, 2010 at 11:28 am | Permalink

    As a user, I find cmnd+U useful in another way. Various blog themes have the comment tag rules commented out of the code. If the blog doesn’t choose to post the rules, I view-source.

    In regard to learning, view-source has pretty much been my only teacher though my page-site-only skills are negligible in present company.

    Kind of surprised that you’re kind of surprised about “some evidence” that copy-paste-tweak is faster than from scratch. Them UseIt people learned me that in 2000


  19. Posted January 11, 2010 at 12:47 pm | Permalink

    Ray, Joel:

    Just to be clear, I wasn’t meaning to dump on or impugn GWT in any way. Yes, GWT *can* be used for progressive enhancement. So can the Closure compiler. So can Dojo. My point here is that to varying extents, they all defeat view-source because they *assume* that code will be necessary to deliver behavior. As you note, NaCL is just the most extreme version of this.

    There’s a premise to your response that apps (not pages) will continue to need large gobs of code in order to deliver anything like a usable UI, and it’s that assumption that I’m trying to question. It’s not about progressive enhancement or language, it’s about how much of the page is “invisible” to inspection. I view big Dijit-based apps, GWT apps, and NaCL-driven apps as indistinguishable in this respect.

    Consider an alternate future; one where there’s a way to declare a standard data source, hook it up to a <datagrid> or a <tree> declaratively, and have common-case rules for most editing and UI idioms built into the browser with extension points via the DOM. That’s a spiderable outcome, and one that will bless apps that use it with a massive performance win: you’ll no longer need to ship down the definitions for your grid, tree, or data source to the client. Yes, some code will be necessary for custom behavior or apps so huge that they blow out the limits of even an upgraded platform’s semantics, but I think that it’s possible that many types of apps could avoid either plugins or compilers to JS were the platform to get this capable. The Dijit mail demo could certainly be done without much code were that world available today.
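    Purely as a sketch of that hypothetical future, the declarative wiring might look something like this (none of these tags or attributes exist in shipping browsers; every name here is invented for illustration):

```html
<!-- Hypothetical markup: a grid bound declaratively to a remote data source. -->
<datasource id="inbox" src="/messages.json" type="application/json"></datasource>
<datagrid source="#inbox" sortable editable>
  <column field="from">Sender</column>
  <column field="subject">Subject</column>
</datagrid>
<!-- No grid or data-source implementation ships to the client, and the
     structure stays visible to view-source and to crawlers. -->
```

    Custom behavior would still hang off these elements through the DOM, per the extension points mentioned above.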

  20. LX
    Posted January 11, 2010 at 1:08 pm | Permalink

    View Source? Come on, try Firebug (or Dragonfly, or the Apple/Google/MSIE developer tools, though none of those are really a match).

    Of course, view source can be helpful to see what the server really delivered, but you could easily hide your whole webpage from the eyes of CTRL-u by loading it via JS. You can’t hide it from those developer tools.

    Greetings, LX

  21. Posted January 11, 2010 at 3:03 pm | Permalink

    I also have made large use of view source. I find it personally to be a good feature of browsers.

    If a browser chooses not to implement a view-source option, how is that a bad thing? I personally won’t be using that browser, but depending on its popularity with the masses I would develop for it.

    I do agree that all browsers that would like to be taken seriously need to be standards compliant and have high powered debugging resources such as firebug and the like (this means you IE!).

    Now if the argument was to be over obfuscating source or making browsers function on binary data, I would have many, many arguments with that… I enjoy reading others code and it makes me more conscious of my own coding style knowing it can be viewed by others.


  22. Posted January 12, 2010 at 6:26 am | Permalink


    Thanks for the clarification. I didn’t mean to imply you were impugning our work, but rather wanted to clarify exactly which two world-views are being juxtaposed here, and suggest that these views are orthogonal to the particular tools we use.

    I think I see your point now. To make sure I understand the design you’re sketching out here — we’re not talking about shipping all data “in the static page”, but rather through some sort of standard interface for retrieving data post-load. So we would need standards for the data format, as well as for the “controls” (tables, trees, what-have-you) for rendering said data. And you’d need something like form-posts-on-steroids for gathering data from the user and updating the states (such as queries) of these data-bound controls. This would be a sea-change from the way browsers currently work, but it certainly doesn’t seem intractable.

    Where I become rather more bearish about this scenario is when we start talking about trying to bake these sorts of standards into the browser, presumably in seldom-updated C++ code. It’s hard to prove this, but my intuition tells me that it would be difficult or impossible in practice to agree on sufficiently general designs for these common widgets, especially if they progress at the “speed of browser update”.

    Perhaps we’d be much better off trying to agree on definitions for the data formats we use, the mechanisms for retrieving them, and the semantics of their structures. This seems eminently more tractable, and I believe would have precisely the same effect on crawlability (to be clear, this presents opportunities for spamming if there’s any code involved, but that’s a problem either way).


  23. Posted January 12, 2010 at 11:51 am | Permalink

    Hey Joel,

    I think that we’ve come to expect far too little of browsers, but for good reasons. You and I have both spent some big chunk of the last decade stepping into the breach when browsers collectively suffered a massive market failure and a subsequent failure of imagination that came with lowered expectations. There was a time in the late 90’s when web browsers shipped new versions so fast that web developers couldn’t keep up, when new and seemingly daunting features got multiple implementations in what now feels like no time at all. All of that progress depended on “seldom-updated C++”. I think we need to focus more on what “seldom” means and what turns seldom into frequent.

    It’s pretty clear that at the functioning end of the browser market, we’ve got browsers that are moving MUCH faster than they were even 2 years ago. Nobody, more or less, is still using Firefox 2, Safari 3, or Chrome 2. The same can’t be said for IE 6. So our problem isn’t that it’s C++; it’s that the market is broken for getting certain users to upgrade, which is both in their own interests and in the interests of the folks who would like to develop better sites for them. I have real hope that we can crack that nut with Chrome Frame. Most of the market works; we just need to address the parts that aren’t moving, and do it in ways that aren’t traumatic for people. Dojo and GWT have been good stopgaps in the interim. Even if GCF doesn’t succeed, the dynamics will be the same as they have been: getting to a world where tools like Dojo and GWT can perform better (relatively speaking) is going to require that non-JITing VMs get retired. The only way we get there is through “seldom-updated C++”. The price of progress is the same either way, and I find it odd that folks who are effectively performing the role of browser intercessor are unwilling to acknowledge as much. There will be much gnashing of teeth in the JS toolkit world as progress makes them obsolete. The fear is understandable: ceding control back to a pile of code you don’t control always has risks. I, however, can’t freaking wait.

    As for the features themselves, I’m not asking for anything more than good toolkits already provide. In some cases, it’s stuff that the browsers already have lying around; e.g., XUL already has most of this. Anyway, we don’t really even need “good” solutions; we only need workable ones with enough extension points to let scripters fill in the gaps. The web has proven that much over and over. Just ask a print designer if they could live without real style sheets, a true WYSIWYG design environment, their custom font collection, and precise layout control. They were wrong too. So it goes.


  24. Posted January 12, 2010 at 1:09 pm | Permalink

    I’m not a web developer, so I’ll cast this comment as a question to those who are. Is it my imagination or are some web sites increasingly obscuring the page source? For example, some of Google’s pages are fine for machine-reading but are nearly inscrutable for human-reading. If true, then I’d say there are two compelling reasons to strengthen view source requirements. One is the powerful learning mechanism discussed in this post. But the other is transparency, for want of a better word.

  25. Posted January 13, 2010 at 6:10 pm | Permalink

    Count me among the “learned code by view-source” crowd. Closing off view-source is a gatekeeper mentality, precisely the opposite of what has shaped the Internet into the dynamic place of knowledge exchange that it is.

  26. Posted January 14, 2010 at 6:32 am | Permalink

    View source is great. I’m a graphic designer rather than a web dev, but was recently tasked with creating an HTML email. Clueless about where to start with the coding, I found that view source taught me a lot and gave me plenty of hints.

    It’s invaluable to those of us new to html or just plain curious as to how something we like on the web was made.

  27. Ray Cromwell
    Posted January 16, 2010 at 4:19 pm | Permalink

    Alex, thanks for your response. What do you think of the XForms architecture (minus the irritation of XHTML)? You can actually do something like the mail app example 100% declaratively using it. Although these days, I’d redesign XForms into something based on JSON and/or CSS selectors (maybe “JSON Forms”), so that it can “wire up” HTML elements to data structures via simple data-binding queries. The datagrid/tree example is a good one; what I liked about XForms is that it added a few other things: 1) repeat loops to stamp out templated UI, 2) a switch/case UI element, 3) a terse query language for data-binding, 4) a vocabulary of declarative events (extended DOM events) to set up simple behavior triggering, and 5) ways to assert validation, constraints, and computed fields.

    Maybe for HTML6 they can steal some of these ideas. In any case, I think these can work harmoniously with GWT/Dojo/et al.

  28. Posted January 17, 2010 at 11:40 am | Permalink


    First, I agree that the problem isn’t that browsers are implemented in C++. The problem is that they are updated slowly. So let’s start by ignoring IE for the moment, on the assumption that Chrome Frame can deal with that problem. So for each batch of new features, we need only wait on Chrome, Safari, Firefox, and (perhaps) Opera to be updated. And Safari Mobile. And the Android browser, along with perhaps Opera Mobile and whatever forked WebKit Nokia is shipping these days. So while that’s a relatively frequently-updated group of browsers, I don’t think it’s unreasonable to say that there’s a minimum of one year lag time between specification (or first implementation) and broad penetration. And by broad, I’m talking about ~95% of the market — which of course implies breaking 1 out of 20 customers. These rough numbers are certainly debatable, but I don’t think they’re off by a huge amount.

    So what’s the big deal? Around a year doesn’t seem too bad. But there’s another problem lurking, which gets to the root of my concern: The way I read your suggestion, the “features” we’re talking about are high-level semantics, not low-level building blocks. Let’s say HTML 5.x gets a “data grid” element. Then that hits most browsers a year or so from now. I guarantee that it will be insufficiently generalized for some common use case, because that’s an exceedingly difficult problem. So I can’t really use it yet, or at the least it requires a lot of hacking to work around the things it doesn’t do well. I push for changes, which get adopted at some point, followed by another year or so of waiting on browsers to be updated.

    Even if this particular element eventually, asymptotically, meets my needs, we’re designing a system by adding more and more special cases, which I would argue is a recipe for disaster. What we *should* be doing is building the basic blocks upon which a higher-level system can be sensibly built. Special cases (all the wacky ones like border-image in CSS3 come to mind) that can be specified in terms of lower-level primitives belong in library code, not baked into the browser.
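    To make the "library code, not baked into the browser" point concrete: a data grid can be layered entirely on primitives the platform already has (strings, tables, DOM insertion). The `renderGrid` helper below is a made-up illustration of that layering, not an API from any real toolkit, and it renders to an HTML string rather than live DOM to keep the sketch self-contained.

```javascript
// Illustrative only: a "data grid" built in library code from low-level
// primitives, rather than shipped as a one-size-fits-none browser element.
// columns: [{ field, label }], rows: array of plain objects.
function renderGrid(columns, rows) {
  const header = columns.map((c) => `<th>${c.label}</th>`).join('');
  const body = rows
    .map(
      (r) => `<tr>${columns.map((c) => `<td>${r[c.field]}</td>`).join('')}</tr>`
    )
    .join('');
  return `<table><thead><tr>${header}</tr></thead><tbody>${body}</tbody></table>`;
}

const html = renderGrid(
  [{ field: 'name', label: 'Name' }, { field: 'age', label: 'Age' }],
  [{ name: 'Ada', age: 36 }, { name: 'Alan', age: 41 }]
);
```

    Because the special case lives in a library, an insufficiently general design can be patched or forked today instead of waiting a year for the next round of browser updates.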

    In a nutshell, I suppose what I’m arguing is that you can’t build a complex system by piling on special-cases, and that you can’t escape code for non-trivial applications. I believe that interoperability (be it for crawling, data transfer, or what have you) is best handled by different standards, not by forcing developers to conflate their UI with their data. The web-as-static-pages has worked well thus far, but it’s starting to be strained pretty badly, and I’m unwilling to bet the future on a spiffed-up 3270 terminal :)


  29. Posted January 22, 2010 at 7:27 pm | Permalink

    Hey Joel,

    Thanks for taking the time to respond. Sorry for being so late in replying.

    So I understand and acknowledge your concern, but I'd submit that it's born of constraints and requirements that aren't common. Indeed, you're making a compelling case that the web badly serves uncommon cases, and for what it's worth, I tend to agree. For the special cases, tools should absolutely continue to fill the gap, and I think there's a bright future for the Dojos and the GWTs of the world in that niche. Where I part ways with your perspective is in how it prescribes that future work should be prioritized.

    You’re (maybe implicitly) arguing that browser vendors should spend more time making it possible to handle all edge cases in lieu of building affordances for common cases. This is a code-centric view of the world, and one I strongly disagree with. Folks who can code are already enfranchised. They barely need the help, and when they do, it tends to be a question of taste and effort, not of possibilities. Yes, their opportunities to express their ideals are constrained, but that’s the price of a truly ubiquitous runtime.

    In any case, I argue it’s not worth abandoning the benefits of view-source to get there anyway. Big, knobby, semantic-ish forms of agreement *explicitly* require that we give things up in order to get the benefits of searchability, reliability, and ubiquity that HTML has provided. Put another way, there are a lot of reasons that the era of C++ desktop apps is drawing to a close, but chief among them is that the price of the flexibility was so high that it had the net effect of stunting the pool of participants. That’s a losing hand in a world where we can keep throwing more transistors at the problem, year over year.


12 Trackbacks

  1. […] Russell has pontificated on the notion that View Source is not only good and important, but that it may be under […]

  2. By ~robcee/ – View-Source IS Good. Full-stop. on January 8, 2010 at 6:44 am

    […] saw a tweet this morning from Joe Walker linking to this article asking Is View-Source Good? from Alex Russell of Dojo fame and I had to write about it. It’s something I’ve been […]

  3. […] Russell poses the question that few probably ever think to ask: Is View Source good? Rob Campbell replies "of […]

  4. By View-Source Is Good! – A Frog in the Valley on January 8, 2010 at 12:33 pm

    […] Read it all at View-Source Is Good? Discuss. | Continuing Intermittent Incoherency […]

  5. By View-Source Is Good? Discuss. « LostFocus on January 11, 2010 at 10:01 am

    […] View-Source Is Good? Discuss. […]

  6. By Experiments with audio, part VII on January 11, 2010 at 1:30 pm

    […] a couple of hours speaks to how well this stuff was written in the first place.  There’s a good discussion going right now about View Source, and how important it is.  What we’re doing in these experiments is View Source all the way […]

  7. By links for 2010-01-11 | Grant Watson on January 11, 2010 at 3:01 pm

    […] View Source Is Good? Discuss. View-source provides a powerful catalyst to creating a culture of shared learning and learning-by-doing, which in turn helps formulate a mental model of the relationship between input and output faster. (tags: programming web development education html source article) […]

  8. By B. BARIS » Appropriation, part deux on January 14, 2010 at 6:46 am

    […] at examples of working code, and appropriate modules to use in their own projects (evidenced by: dojotoolkit). It’s probably one of the top reasons why open-source is sparking a wildfire of prolificness […]

  9. By Long live View Source! | Wisdump on January 14, 2010 at 6:48 pm

    […] Source movement has been formed this early. The discussion is sparse, but Alex Russell elegantly explains why View Source matters, also reminding me why I love developing on the […]

  10. By Blayne Sucks » Blog Archive » Thoughts on the iPad on January 31, 2010 at 10:46 am

    […] see the source used to render the page you’re viewing if you’re on Flash-based website. View-Source is a good thing. Closed-source development for non-differentiating infrastructure is a bad […]

  11. By View-Source Follow-Up | Infrequently Noted on March 17, 2010 at 12:23 pm

    […] flow of control. De-obfuscators have no hope in this strange land. I argued in the panel and in the comments of my last post on the topic that when we get to this place with JavaScript the product is functionally indistinguishable from a […]

  12. […] Alex “Infrequently Noted” […]