Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Comments for View-Source Is Good? Discuss.

Hey Joel,

Thanks for taking the time to respond. Sorry for being so late in replying.

So I understand and acknowledge your concern, but I'd submit that it's born of constraints and requirements that aren't common. Indeed, you're making a compelling case that the web badly serves uncommon cases, and for what it's worth, I tend to agree. For the special cases, tools should absolutely continue to fill the gap, and I think there's a bright future for the Dojos and GWTs of the world in that niche. Where I part ways with your perspective is in how it prescribes that future work should be prioritized.

You're (maybe implicitly) arguing that browser vendors should spend more time making it possible to handle all edge cases in lieu of building affordances for common cases. This is a code-centric view of the world, and one I strongly disagree with. Folks who can code are already enfranchised. They barely need the help, and when they do, it tends to be a question of taste and effort, not of possibilities. Yes, their opportunities to express their ideals are constrained, but that's the price of a truly ubiquitous runtime.

In any case, I argue it's not worth abandoning the benefits of view-source to get there anyway. Big, knobby, semantic-ish forms of agreement *explicitly* require that we give things up in order to get the benefits of searchability, reliability, and ubiquity that HTML has provided. Put another way, there are a lot of reasons that the era of C++ desktop apps is drawing to a close, but chief among them is that the price of the flexibility was so high that it had the net effect of stunting the pool of participants. That's a losing hand in a world where we can keep throwing more transistors at the problem, year over year.


by alex at
I'm not a web developer, so I'll cast this comment as a question to those who are. Is it my imagination or are some web sites increasingly obscuring the page source? For example, some of Google's pages are fine for machine-reading but are nearly inscrutable for human-reading. If true, then I'd say there are two compelling reasons to strengthen view source requirements. One is the powerful learning mechanism discussed in this post. But the other is transparency, for want of a better word.
View source is great. I'm a graphic designer rather than a web dev, but was recently tasked with creating an HTML email. I was clueless about where to start with the coding; view source taught me a lot and gave me plenty of hints.

It's invaluable to those of us new to HTML, or just plain curious as to how something we like on the web was made.

by Fiona at
Hey Joel,

I think that we've come to expect far too little of browsers, but for good reasons. You and I have both spent some big chunk of the last decade stepping into the breach when browsers collectively suffered a massive market failure and a subsequent failure of imagination that came with lowered expectations. There was a time in the late '90s when web browsers shipped new versions so fast that web developers couldn't keep up, when new and seemingly daunting features got multiple implementations in what now feels like no time at all. All of that progress depended on "seldom-updated C++". I think we need to focus more on what "seldom" means and what turns seldom into frequent.

It's pretty clear that at the functioning end of the browser market, we've got browsers that are moving MUCH faster than they were even 2 years ago. Nobody, more or less, is still using Firefox 2, Safari 3, or Chrome 2. The same can't be said for IE 6. So our problem isn't that it's C++; it's that the market is broken for getting certain users to upgrade, which is both in their own interests and in the interests of the folks who would like to develop better sites for them. I have real hope that we can crack that nut with Chrome Frame. Most of the market works; we just need to address the parts that aren't moving, and do it in ways that aren't traumatic for people. Dojo and GWT have been good intermediate steps in the interim. Even if GCF doesn't succeed, the dynamics will be the same as they have been: getting to a world where tools like Dojo and GWT can perform better (relatively speaking) is going to require that non-JITing VMs get retired. The only way we get there is through "seldom-updated C++". The price of progress is the same either way, and I find it odd that folks who are effectively performing the role of browser intercessor are unwilling to acknowledge as much. There will be much gnashing of teeth in the JS toolkit world as progress makes them obsolete. The fear is understandable: ceding control back to a pile of code you don't control always has risks. I, however, can't freaking wait.

As for the features themselves, I'm not asking for anything more than good toolkits already provide. In some cases, it's stuff that the browsers have lying around; e.g., XUL already has most of this. Anyway, we don't really even need "good" solutions, we only need workable ones with enough extension points to let scripters fill in the gaps. The web has proven that much over and over. Just ask a print designer if they could live without real style sheets, a true WYSIWYG design environment, their custom font collection, and precise layout control. They were wrong too. So it goes.


by alex at

Thanks for the clarification. I didn't mean to imply you were impugning our work, but rather wanted to clarify exactly which two world-views are being juxtaposed here, and suggest that these views are orthogonal to the particular tools we use.

I think I see your point now. To make sure I understand the design you're sketching out here -- we're not talking about shipping all data "in the static page", but rather through some sort of standard interface for retrieving data post-load. So we would need standards for the data format, as well as for the "controls" (tables, trees, what-have-you) for rendering said data. And you'd need something like form-posts-on-steroids for gathering data from the user and updating the states (such as queries) of these data-bound controls. This would be a sea-change from the way browsers currently work, but it certainly doesn't seem intractable.

Where I become rather more bearish about this scenario is when we start talking about trying to bake these sorts of standards into the browser, presumably in seldom-updated C++ code. It's hard to prove this, but my intuition tells me that it would be difficult or impossible in practice to agree on sufficiently general designs for these common widgets, especially if they progress at the "speed of browser update".

Perhaps we'd be much better off trying to agree on definitions for the data formats we use, the mechanisms for retrieving them, and the semantics of their structures. This seems eminently more tractable, and I believe would have precisely the same effect on crawlability (to be clear, this presents opportunities for spamming if there's any code involved, but that's a problem either way).

Cheers, joel.

View Source? Come on, try Firebug (or Dragonfly, or the Apple/Google/MSIE developer tools, though none of those is really a match).

Of course, view source can be helpful to see what the server really delivered, but you could easily hide your whole webpage from the eyes of Ctrl-U by loading it via JS. You can't hide it from those developer tools.
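A minimal sketch of the effect LX describes (the page and the `app.js` file name are invented for illustration):

```javascript
// What the server actually delivers -- and all that Ctrl-U can show:
// a stub document plus a script tag.
const servedSource = [
  '<!doctype html>',
  '<html><body><div id="app"></div>',
  '<script src="app.js"></script></body></html>'
].join('\n');

// Markup that app.js would generate at runtime. A DOM inspector
// like Firebug sees this in the live tree; plain view-source never does.
const generatedMarkup = '<h1>Hello from JavaScript</h1>';

console.log(servedSource.includes(generatedMarkup)); // false
```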

Greetings, LX

by LX at
I've also made extensive use of view source. Personally, I find it to be a good feature of browsers.

If a browser chooses not to implement a view-source option, how is that a bad thing? I personally won't be using that browser, but depending on its popularity with the masses I would develop for it.

I do agree that all browsers that would like to be taken seriously need to be standards compliant and have high-powered debugging resources such as Firebug and the like (this means you, IE!).

Now if the argument were over obfuscating source or making browsers function on binary data, I would have many, many arguments with that... I enjoy reading others' code, and it makes me more conscious of my own coding style, knowing it can be viewed by others.


Ray, Joel:

Just to be clear, I wasn't meaning to dump on or impugn GWT in any way. Yes, GWT can be used for progressive enhancement. So can the Closure compiler. So can Dojo. My point here is that, to varying extents, they all defeat view-source because they assume that code will be necessary to deliver behavior. As you note, NaCl is just the most extreme version of this.

There's a premise to your response that apps (not pages) will continue to need large gobs of code in order to deliver anything like a usable UI, and it's that assumption that I'm trying to question. It's not about progressive enhancement or language, it's about how much of the page is "invisible" to inspection. I view big Dijit-based apps, GWT apps, and NaCl-driven apps as indistinguishable in this respect.

Consider an alternate future; one where there's a way to declare a standard data source, hook it up to a <datagrid> or a <tree> declaratively, and have common-case rules for most editing and UI idioms built into the browser with extension points via the DOM. That's a spiderable outcome, and one that will bless apps that use it with a massive performance win: you'll no longer need to ship down the definitions for your grid, tree, or data source to the client. Yes, some code will be necessary for custom behavior or apps so huge that they blow out the limits of even an upgraded platform's semantics, but I think that it's possible that many types of apps could avoid either plugins or compilers to JS were the platform to get this capable. The Dijit mail demo could certainly be done without much code were that world available today.
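A sketch of what that declarative wiring might look like. Every element and attribute below is hypothetical -- nothing like this markup existed in browsers at the time; it only illustrates the shape of the proposal:

```html
<!-- Hypothetical: a standard data source declared once... -->
<datasource id="inbox" src="/mail/inbox.json" format="application/json">
</datasource>

<!-- ...bound declaratively to a built-in grid. No grid or data-source
     code ships to the client; behavior hangs off DOM extension points. -->
<datagrid source="#inbox" selection="multiple">
  <column field="from" label="Sender"></column>
  <column field="subject" label="Subject" sortable="sortable"></column>
</datagrid>
```

Because the structure and the binding are all in markup, a crawler (or a curious developer hitting view-source) sees everything there is to see.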

by alex at
As a user, I find Cmd+U useful in another way. Various blog themes have the comment tag rules commented out of the code. If the blog doesn't choose to post the rules, I view-source.

In regard to learning, view-source has pretty much been my only teacher though my page-site-only skills are negligible in present company.

Kind of surprised that you're kind of surprised about "some evidence" that copy-paste-tweak is faster than from scratch. Them UseIt people learned me that in 2000.


by Frymaster at
You should take more time to write this article shorter!
by Mr. Nguyen at
generating SVG with PHP, copy-pasting from other SVG (or CSS, JavaScript), debugging with view-source, ctrl-F-ing the content, searchbot-ting the source. View-Source is Good!
by stelt at
GWT does not preclude progressive enhancement/markup; that's a false dichotomy. Its freedom and limitations are much the same as the Closure Compiler's. You can design a Closure-like, Dojo-like, or jQuery-like library for GWT if you wish. See GwtQuery for example, but even with the built-in GWT Widget library you can do layout in HTML and overlay/wrap Widget code on top.

Secondly, I would say that in the long term (the next decade or two), it will become increasingly difficult to force every programmer in the world to write their apps in JavaScript in a text editor, so it is better to accept the fact that people will use other languages and compilers, translators, or code-gen tools, and design mechanisms around this to preserve "view source" than to fight tooth and nail for JavaScript. I dunno, whenever this discussion comes up, I feel like there's an underlying language-war aspect.

This only gets worse with stuff like NaCl and other extensions various vendors are adding to the Web. I would like to propose another panel to discuss the role of the browser as a general-purpose execution environment (e.g. Chrome OS, NaCl, et al.) and how that relates to developer expectations around languages and tools. With more and more of the world's information economy and services moving to the web, I think it's unreasonable to expect a mono-culture in tools and methods.

One can however have agreed upon best practices. Browsers, for example, can still support "view source" even if they're looking at obfuscated JS for execution. There's no a priori reason that what is downloaded and executed has to be the same as what is shown in View Source. One could theoretically support "View Source" for C++ code used to generate a native client executable.

by Ray Cromwell at
+1 to everything Ray said.

I feel like this is a legitimate discussion, but it's somewhat mischaracterized. We all (I believe) agree on the importance of the web as a platform. The tension is between two different methods of site construction -- page-like and application-like, for want of better terms.

Page-like sites tend to be content-oriented. And for these, straightforward markup, potentially with a bit of js to spiff them up, is a sensible approach. This is what we often tend to think of as "the web", and I believe is what you're defending here. I don't disagree.

Application-like sites tend to be, well, applications. They're complex interactive UIs that work without page reloads. While this may not be the right approach for most "sites", it's absolutely invaluable for things like Gmail, Maps, Calendar, Docs, Spreadsheets, AdWords, Wave, and many other apps. Semantic markup with simple script is entirely inappropriate in these cases.

Most tools, be they GWT, Dojo, Closure, jQuery, or what have you, can be used either way. The decision has little to do with your tool chain, and everything to do with the kind of site you're building. To wit -- Dojo's mail demo is constructed in "application style" (i.e., it's entirely programmatically constructed and contains zero semantic information in its source) -- but Dojo certainly doesn't require that you work that way, any more than other tools do. That's not a criticism of Dojo, though. Dojo's design clearly recognizes that both approaches are sensible.

As to the evolutionary forces enabled by "view source", I couldn't agree more. But as applications become more complex, we need more powerful tools to achieve the same effect. And I think the best known solution at present is to aggressively open-source whenever possible. While this remains an uphill battle at many companies, I'm seeing more and more companies that recognize the value of releasing their code. It might require more work, but I believe that it's the only viable long-term solution to the kind of information sharing that made the web what it is today.

Count me in on the "learned to code via view-source" crowd. Closing out view-source is a gatekeeper mentality, precisely the opposite of what has shaped the Internet into the dynamic place of knowledge exchange that it is.

First, I agree that the problem isn't that browsers are implemented in C++. The problem is that they are updated slowly. So let's start by ignoring IE for the moment, on the assumption that Chrome Frame can deal with that problem. So for each batch of new features, we need only wait on Chrome, Safari, Firefox, and (perhaps) Opera to be updated. And Safari Mobile. And the Android browser, along with perhaps Opera Mobile and whatever forked WebKit Nokia is shipping these days. So while that's a relatively frequently-updated group of browsers, I don't think it's unreasonable to say that there's a minimum of one year lag time between specification (or first implementation) and broad penetration. And by broad, I'm talking about ~95% of the market -- which of course implies breaking 1 out of 20 customers. These rough numbers are certainly debatable, but I don't think they're off by a huge amount.

So what's the big deal? Around a year doesn't seem too bad. But there's another problem lurking, which gets to the root of my concern: The way I read your suggestion, the "features" we're talking about are high-level semantics, not low-level building blocks. Let's say HTML 5.x gets a "data grid" element. Then that hits most browsers a year or so from now. I guarantee that it will be insufficiently generalized for some common use case, because that's an exceedingly difficult problem. So I can't really use it yet, or at the least it requires a lot of hacking to work around the things it doesn't do well. I push for changes, which get adopted at some point, followed by another year or so of waiting on browsers to be updated.

Even if this particular element eventually, asymptotically, meets my needs, we're designing a system by adding more and more special cases, which I would argue is a recipe for disaster. What we should be doing is building the basic blocks upon which a higher-level system can be sensibly built. Special cases (all the wacky ones like border-image in CSS3 come to mind) that can be specified in terms of lower-level primitives belong in library code, not baked into the browser.

In a nutshell, I suppose what I'm arguing is that you can't build a complex system by piling on special-cases, and that you can't escape code for non-trivial applications. I believe that interoperability (be it for crawling, data transfer, or what have you) is best handled by different standards, not by forcing developers to conflate their UI with their data. The web-as-static-pages has worked well thus far, but it's starting to be strained pretty badly, and I'm unwilling to bet the future on a spiffed-up 3270 terminal :)

Cheers, joel.

From your post to openweb-group:

1. I learned HTML and CSS mostly from view-source.
2. I use view-source about daily to see how something works, scrape some text cleanly, or check how the CMS/blog engines I use build code.
3. It would be nice to see vendors subscribe to W3C compliance. All the effort that web developers put into working around browser incompatibilities is wasteful.
4. Rapid response time will win my vote in the browser wars, and Google is listening.

I digress. I like view-source; see you at SxSW. Best, Jim
You don't need view source to be beginner-friendly. Visual Basic is one of the most popular beginner languages of all time, and it didn't have view source. What you need for beginners is a lot of documentation and example code, and that can be delivered just as easily for platforms that don't have view source.

There's a big difference between HTML view source and DOM inspection. HTML view source is not useful most of the time, while DOM inspection is definitely useful. DOM inspection tells you how they did things, not how the page was initially received by the browser. You don't need a markup-driven delivery system for that. A DOM constructed from JavaScript is just as inspectable as one constructed from HTML. Even Flash can be inspected at run-time by navigating its display object hierarchy.

by Joeri at
Alex, thanks for your response. What do you think of the XForms architecture (minus the irritation of XHTML)? You can actually do something like the Mail app example 100% declaratively using it. Although these days, I'd redesign XForms into something based on JSON and/or CSS selectors, maybe "JSON Forms", so that it can wire up HTML elements to data structures via simple data-binding queries. The datagrid/tree example is a good one; what I liked about XForms is that it added a few other things: 1) repeat loops to stamp out templated UI, 2) a switch/case UI element, 3) a terse query language for data-binding, 4) a vocabulary of declarative events (extended DOM events) to set up simple behavior triggering, and 5) ways to assert validation, constraints, and computed fields.

Maybe for HTML6 they can steal some of these ideas. In any case, I think these can work harmoniously with GWT/Dojo/et al.
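A toy version of the JSON-plus-selectors binding Ray sketches, stripped of any DOM so the mechanism is visible. All names here ("JSON Forms", `bindQuery`, the selectors) are invented for illustration:

```javascript
// A terse query language: dot-separated keys, e.g. "user.name",
// resolved against a plain JSON data model.
function bindQuery(data, path) {
  return path.split('.').reduce((node, key) => node[key], data);
}

const model = { user: { name: 'Ada', inbox: { unread: 3 } } };

// Declarative bindings a hypothetical "JSON Forms" might hold:
// CSS selector on the left, data-binding query on the right.
const bindings = {
  '#name-field': 'user.name',
  '#unread-badge': 'user.inbox.unread'
};

// Resolve every binding against the model.
const resolved = {};
for (const [selector, path] of Object.entries(bindings)) {
  resolved[selector] = bindQuery(model, path);
}

console.log(resolved); // { '#name-field': 'Ada', '#unread-badge': 3 }
```

The appeal of the declarative form is that the bindings themselves are data: inspectable, crawlable, and independent of any widget code.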

by Ray Cromwell at

I agree that copious example code can be made available on other platforms, but I'm not sure that I'd agree with "as easily". The web makes it the default. Every other technology makes it a special case. Defaults matter.


by alex at
Great post, thanks. It got me to thinking about how I landed as a web dev... for almost 10 years, my work was primarily done in the old Visual Basic (5/6), where the same rapid development cycle (edit/view/update/refresh/lather/rinse/repeat) holds true.
@alex: Defaults matter, this is true, but I think you overestimate the value of those defaults. In real-world web apps and websites there's so much code that isn't related to the snippet you need that, most of the time, it's easier to find a separate example than to dig into some app with Firebug to see how they did things. Live code is NOT the same thing as example code (and many web SDKs make this mistake).

I cut my programming teeth on Visual Basic 3, and I doubt I would have learned any faster had VB3 apps offered a view-source feature. To be fair, I did learn CSS by looking at existing sites, but then CSS is under no threat of becoming less accessible to inspect, especially with tools like Firebug.

But maybe the problem is that I have an app-centric perspective, building web apps, not web sites. Maybe if all someone wants to do is learn how to build static web sites or non-AJAX web apps, looking at the source can teach them a thing or two.

by Joeri at
Thanks for that.

I've given talks on the open web a couple of times, and tried to use the phrase "lazy text" to describe what's good about view source. That is to say, it's not just the ability to view the source; it's also, as you say, the ability to learn and play in a lazy way that counts.

"view source"is a must for not only learning,but for innovation in technology. I think this is the same thing as talking of opensource and closed source technologies. The more its open the more likely it will grow and shape.
by saumya at
Most of what I know now was learned using view source or examples on the web. If you are attempting to do something you haven't done before, the best way to learn it is by seeing how it's done. I'm a huge believer in the copy-paste-tweak method. I have never learned anything truly useful from programming books. They never seem to address the problem I'm trying to answer. The web has always been my reference shelf.
by Renate DeRoch at
HTML/CSS/JavaScript are still immature technologies to me. Look at how fast they evolve:

  • HTML5 is almost out (geolocation, WebSockets, etc.)
  • CSS3, especially with animations, is very promising
  • ECMAScript continues to evolve

There are so many contributors that following one set of guidelines (from the W3C?), the way we once learned from Borland with OWL or from Microsoft with MFC, does not apply to the (standard) Web application field! And this is without counting the vast number of wonderful implementations we need to learn from to improve our applications...

Here is a simple situation:

  • You want to forward a link on a public message you posted on Twitter (a URL like:<message_id>)
  • You look for the id of a specific Twitter Public Message.
  • You can:
    1. Use the Twitter API to search for it
    2. Use "view-source" or a DOM inspector to read the identifier of the <li/> tag enclosing the message (something like: status_7449266450)
  • Personally, I prefer the second one ;)
  • You can now copy the identifier and forward the URL:
  • If needed, I can now write a GreaseMonkey script to generate the link automatically.
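The extraction step in Dom's GreaseMonkey idea is simple enough to sketch. The `status_` id format is the one Dom reads out of view-source; the function name is invented:

```javascript
// Pull the numeric message id out of an element id like the one
// Dom finds on the <li> enclosing a tweet, e.g. "status_7449266450".
function statusIdFrom(elementId) {
  const match = /^status_(\d+)$/.exec(elementId);
  return match ? match[1] : null;
}

console.log(statusIdFrom('status_7449266450')); // "7449266450"
console.log(statusIdFrom('sidebar'));           // null (not a status <li>)
```

In a real userscript this would run over the ids exposed in the page's markup -- which is exactly the kind of thing that stops working when the source is opaque.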

View-source: a must have! ;) A+, Dom

Maybe View-Source is NOT for everybody and their dog, but as a debugger and hot-spotter it's a tool I can't do without. Maybe View-Source, along with some other features, should be moved to an "expert mode" that can be enabled or disabled through an options dialog, but not removed. It's a tool. Like any other tool, it can be abused. Same as a hammer or a screwdriver. The tool itself is neither good nor bad, but you'll miss it when it's needed and not available.
by Yamaban at
View source is one of the three most important tools in my web-dev belt (the other two are a text editor and a browser).

I feel that view source is of critical importance to the web both historically and in the future.

I've blogged a more detailed response to your post:

And I would like to start a movement:

Save View Source!

View-Source solves "problems" for web developers and designers that they didn't even know they had. It acts as a sort of instant gratification (of curiosity) as well as a source of inspiration.

I [heart] View-Source.

by KMB at
Great post. Because I can see the code, I've learned to create HTML and CSS. For me it is an important part of learning.