Comments for View-Source Is Good? Discuss.
It's invaluable to those of us new to html or just plain curious as to how something we like on the web was made.
I think that we've come to expect far too little of browsers, but for good reasons. You and I have both spent some big chunk of the last decade stepping into the breach when browsers collectively suffered a massive market failure and a subsequent failure of imagination that came with lowered expectations. There was a time in the late 90's when web browsers shipped new versions so fast that web developers couldn't keep up, when new and seemingly daunting features got multiple implementations in what now feels like no time at all. All of that progress depended on "seldom-updated C++". I think we need to focus more on what "seldom" means and what turns seldom into frequent.
It's pretty clear that at the functioning end of the browser market, we've got browsers that are moving MUCH faster than they were even 2 years ago. Nobody, more or less, is still using Firefox 2, Safari 3, or Chrome 2. The same can't be said for IE 6. So our problem isn't that it's C++, it's that the market is broken for getting certain users to upgrade, which is both in their own interests and in the interests of the folks who would like to develop better sites for them. I have real hope that we can crack that nut with Chrome Frame. Most of the market works; we just need to address the parts that aren't moving and do it in ways that aren't traumatic for people. Dojo and GWT have been good intermediate steps. Even if GCF doesn't succeed, the dynamics will be the same as they have been: getting to a world where tools like Dojo and GWT can perform better (relatively speaking) is going to require that non-JITing VMs get retired. The only way we get there is through "seldom-updated C++". The price of progress is the same either way, and I find it odd that folks who are effectively performing the role of browser intercessor are unwilling to acknowledge as much. There will be much gnashing of teeth in the JS toolkit world as progress makes them obsolete. The fear is understandable: ceding control back to a pile of code you don't control always has risks. I, however, can't freaking wait.
As for the features themselves, I'm not asking for anything more than good toolkits already provide. In some cases, it's stuff that the browsers have lying around; e.g., XUL already has most of this. Anyway, we don't really even need "good" solutions, we only need workable ones with enough extension points to let scripters fill in the gaps. The web has proven that much over and over. Just ask a print designer if they could live without real style sheets, a true WYSIWYG design environment, their custom font collection, and precise layout control. They were wrong too. So it goes.
Regards
Thanks for the clarification. I didn't mean to imply you were impugning our work, but rather wanted to clarify exactly which two world-views are being juxtaposed here, and suggest that these views are orthogonal to the particular tools we use.
I think I see your point now. To make sure I understand the design you're sketching out here -- we're not talking about shipping all data "in the static page", but rather through some sort of standard interface for retrieving data post-load. So we would need standards for the data format, as well as for the "controls" (tables, trees, what-have-you) for rendering said data. And you'd need something like form-posts-on-steroids for gathering data from the user and updating the states (such as queries) of these data-bound controls. This would be a sea-change from the way browsers currently work, but it certainly doesn't seem intractable.
Where I become rather more bearish about this scenario is when we start talking about trying to bake these sorts of standards into the browser, presumably in seldom-updated C++ code. It's hard to prove this, but my intuition tells me that it would be difficult or impossible in practice to agree on sufficiently general designs for these common widgets, especially if they progress at the "speed of browser update".
Perhaps we'd be much better off trying to agree on definitions for the data formats we use, the mechanisms for retrieving them, and the semantics of their structures. This seems eminently more tractable, and I believe would have precisely the same effect on crawlability (to be clear, this presents opportunities for spamming if there's any code involved, but that's a problem either way).
Cheers, joel.
Of course, view source can be helpful to see what the server really delivered, but you could easily hide your whole webpage from the eyes of CTRL-U by loading it via JS. You can't hide it from those developer tools, though.
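A minimal sketch of that hiding trick, modeled here as plain strings so the contrast is explicit (the file name and content are made up):

```javascript
// What CTRL-U shows: an almost-empty loader shell. The real markup
// never appears in the delivered source because app.js builds it.
const deliveredSource =
  '<html><body><script src="app.js"></scr' + 'ipt></body></html>';

// What the script constructs at runtime: visible only to a DOM
// inspector, never to view-source.
function render() {
  return '<h1>Inbox</h1><p>Built entirely by app.js</p>';
}

console.log(deliveredSource.includes('Inbox')); // false: hidden from view-source
console.log(render().includes('Inbox'));        // true: visible in the inspector
```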
Greetings, LX
If a browser chooses not to implement a view-source option, how is that a bad thing? I personally won't be using that browser, but depending on its popularity with the masses I would develop for it.
I do agree that all browsers that would like to be taken seriously need to be standards compliant and have high-powered debugging resources such as Firebug and the like (this means you, IE!).
Now if the argument were over obfuscating source or making browsers function on binary data, I would have many, many arguments with that... I enjoy reading others' code, and it makes me more conscious of my own coding style knowing it can be viewed by others.
-Elliott
Just to be clear, I wasn't meaning to dump on or impugn GWT in any way. Yes, GWT can be used for progressive enhancement. So can the Closure compiler. So can Dojo. My point here is that to varying extents, they all defeat view-source because they assume that code will be necessary to deliver behavior. As you note, NaCL is just the most extreme version of this.
There's a premise to your response that apps (not pages) will continue to need large gobs of code in order to deliver anything like a usable UI, and it's that assumption that I'm trying to question. It's not about progressive enhancement or language, it's about how much of the page is "invisible" to inspection. I view big Dijit-based apps, GWT apps, and NaCL-driven apps as indistinguishable in this respect.
Consider an alternate future; one where there's a way to declare a standard data source, hook it up to a <datagrid> or a <tree> declaratively, and have common-case rules for most editing and UI idioms built into the browser with extension points via the DOM. That's a spiderable outcome, and one that will bless apps that use it with a massive performance win: you'll no longer need to ship down the definitions for your grid, tree, or data source to the client. Yes, some code will be necessary for custom behavior or apps so huge that they blow out the limits of even an upgraded platform's semantics, but I think that it's possible that many types of apps could avoid either plugins or compilers to JS were the platform to get this capable. The Dijit mail demo could certainly be done without much code were that world available today.
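For illustration only, that alternate future might read something like the sketch below. None of these elements or attributes exist in any browser; they're purely hypothetical stand-ins for the kind of declarative data binding described above.

```
<!-- all hypothetical: <datasource>, <datagrid>, and every attribute here -->
<datasource id="inbox" src="/mail/inbox.json"></datasource>
<datagrid source="#inbox" columns="from, subject, date" sortable>
  <!-- extension point: script could override per-cell rendering via the DOM -->
</datagrid>
```

The payoff is that a crawler can see both the data source and the grid's semantics without executing a line of script.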
In regard to learning, view-source has pretty much been my only teacher though my page-site-only skills are negligible in present company.
Kind of surprised that you're kind of surprised about "some evidence" that copy-paste-tweak is faster than from scratch. Them UseIt people learned me that in 2000.
Cheers.
Secondly, I would say that in the long term (next decade or two), it will become increasingly difficult to force every programmer in the world to write their apps in Javascript in a text editor, so it is better to accept the fact that people will use other languages and compilers, translators, or code-gen tools, and design mechanisms around this to preserve "view source" than to fight tooth and nail for Javascript. I dunno, whenever this discussion comes up, I feel like there's an underlying language war aspect.
This only gets worse with stuff like NaCL and other extensions various vendors are adding to the Web. I would like to propose another panel, which is to discuss the role of the browser as general purpose execution environment (e.g. Chrome OS, NaCL, et al), how that relates to developer expectations towards languages and tools. With more and more of the world's information economy and services moving to the web, I think it's unreasonable to expect mono-culture in tools and methods.
One can however have agreed upon best practices. Browsers, for example, can still support "view source" even if they're looking at obfuscated JS for execution. There's no a priori reason that what is downloaded and executed has to be the same as what is shown in View Source. One could theoretically support "View Source" for C++ code used to generate a native client executable.
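As a toy illustration of that split between what executes and what View Source shows, consider shipping obfuscated code alongside a machine-readable pointer back to the readable original. The comment convention and URLs below are entirely invented for this sketch:

```javascript
// Append a pointer a browser could surface in a "View Source" UI,
// while still executing the obfuscated payload. The "//@" convention
// here is made up for illustration.
function annotateWithOriginal(minifiedJs, originalUrl) {
  return minifiedJs + '\n//@ originalSource=' + originalUrl;
}

const shipped = annotateWithOriginal(
  'var a=function(b){return b*2};',
  'http://example.com/src/double.js'
);
console.log(shipped.split('\n')[1]); // the pointer line
```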
I feel like this is a legitimate discussion, but it's somewhat mischaracterized. We all (I believe) agree on the importance of the web as a platform. The tension is between two different methods of site construction -- page-like and application-like, for want of better terms.
Page-like sites tend to be content-oriented. And for these, straightforward markup, potentially with a bit of js to spiff them up, is a sensible approach. This is what we often tend to think of as "the web", and I believe is what you're defending here. I don't disagree.
Application-like sites tend to be, well, applications. They're complex interactive UIs that work without page reloads. While this may not be the right approach for most "sites", it's absolutely invaluable for things like Gmail, Maps, Calendar, Docs, Spreadsheets, AdWords, Wave, and many other apps. Semantic markup with simple script is entirely inappropriate in these cases.
Most tools, be they GWT, Dojo, Closure, jQuery, or what have you, can be used either way. The decision has little to do with your tool chain, and everything to do with the kind of site you're building. To wit -- Dojo's mail demo (http://demos.dojotoolkit.org/demos/mail/) is constructed in "application style" (i.e., it's entirely programmatically constructed and contains zero semantic information in its source) -- but Dojo certainly doesn't require that you work that way, any more than other tools do. That's not a criticism of Dojo, though. Dojo's design clearly recognizes that both approaches are sensible.
As to the evolutionary forces enabled by "view source", I couldn't agree more. But as applications become more complex, we need more powerful tools to achieve the same effect. And I think the best known solution at present is to aggressively open-source whenever possible. While this remains an uphill battle at many companies, I'm seeing more and more companies that recognize the value of releasing their code. It might require more work, but I believe that it's the only viable long-term solution to the kind of information sharing that made the web what it is today.
First, I agree that the problem isn't that browsers are implemented in C++. The problem is that they are updated slowly. So let's start by ignoring IE for the moment, on the assumption that Chrome Frame can deal with that problem. So for each batch of new features, we need only wait on Chrome, Safari, Firefox, and (perhaps) Opera to be updated. And Safari Mobile. And the Android browser, along with perhaps Opera Mobile and whatever forked WebKit Nokia is shipping these days. So while that's a relatively frequently-updated group of browsers, I don't think it's unreasonable to say that there's a minimum of one year lag time between specification (or first implementation) and broad penetration. And by broad, I'm talking about ~95% of the market -- which of course implies breaking 1 out of 20 customers. These rough numbers are certainly debatable, but I don't think they're off by a huge amount.
So what's the big deal? Around a year doesn't seem too bad. But there's another problem lurking, which gets to the root of my concern: The way I read your suggestion, the "features" we're talking about are high-level semantics, not low-level building blocks. Let's say HTML 5.x gets a "data grid" element. Then that hits most browsers a year or so from now. I guarantee that it will be insufficiently generalized for some common use case, because that's an exceedingly difficult problem. So I can't really use it yet, or at the least it requires a lot of hacking to work around the things it doesn't do well. I push for changes, which get adopted at some point, followed by another year or so of waiting on browsers to be updated.
Even if this particular element eventually, asymptotically, meets my needs, we're designing a system by adding more and more special cases, which I would argue is a recipe for disaster. What we should be doing is building the basic blocks upon which a higher-level system can be sensibly built. Special cases (all the wacky ones like border-image in CSS3 come to mind) that can be specified in terms of lower-level primitives belong in library code, not baked into the browser.
In a nutshell, I suppose what I'm arguing is that you can't build a complex system by piling on special-cases, and that you can't escape code for non-trivial applications. I believe that interoperability (be it for crawling, data transfer, or what have you) is best handled by different standards, not by forcing developers to conflate their UI with their data. The web-as-static-pages has worked well thus far, but it's starting to be strained pretty badly, and I'm unwilling to bet the future on a spiffed-up 3270 terminal :)
Cheers, joel.
There's a big difference between HTML view source and DOM inspection. HTML view source is not useful most of the time, while DOM inspection is definitely useful. DOM inspection tells you how they did things, not how the page was initially received by the browser. You don't need a markup-driven delivery system for that. A DOM constructed from javascript is just as inspectable as one constructed from HTML. Even Flash can be inspected at run-time by navigating its display object hierarchy.
Maybe for HTML6 they can steal some of these ideas. In any case, I think these can work harmoniously with GWT/Dojo/et al.
I agree that copious example code can be made available on other platforms, but I'm not sure that I'd agree with "as easily". The web makes it the default. Every other technology makes it a special case. Defaults matter.
Regards
I cut my programming teeth on Visual Basic 3, and I doubt I would have learned any faster had VB3 apps offered a view source feature. To be fair, I did learn CSS by looking at existing sites, but then CSS is under no threat from becoming less accessible to inspect, especially with tools like firebug.
But maybe the problem is that I have an app-centric perspective, building web apps, not web sites. Maybe if all someone wants to do is learn how to build static web sites or non-ajax web apps, looking at the source of amazon.com can teach them a thing or two.
I've given talks on the open web a couple of times, and tried to use the phrase 'lazy text' to describe what's good about view source. That is to say it's not just the ability to view the source, it's also, as you say, the ability to learn and play in a lazy way that counts.
There are so many contributors that following one set of guidelines (from the W3C?), the way we once learned from Borland with OWL or from Microsoft with MFC, just doesn't apply to the (standard) Web application field! And this is without counting the vast number of wonderful implementations we need to learn from to improve our applications...
Here is a simple situation:
- You want to forward a link to a public message you posted on Twitter (a URL like: http://twitter.com/domderrien/statuses/<message_id>)
- You look for the id of a specific Twitter Public Message.
- You can:
- Use the Twitter API to search for it
- Use "view-source" or a DOM inspector to read the identifier of the <li/> tag enclosing the message (something like: status_7449266450)
- Personally, I prefer the second one ;)
- You can now copy the identifier and forward the URL: http://twitter.com/domderrien/statuses/7449266450.
- If needed, I can now write a GreaseMonkey script to generate the link automatically.
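The view-source half of that recipe can be sketched as a few lines of script. The status_ prefix comes from the id format mentioned above; the function name and error handling are mine:

```javascript
// Build the permalink from the id attribute of the <li> that
// encloses a tweet, e.g. "status_7449266450".
function statusUrl(user, liId) {
  const match = /^status_(\d+)$/.exec(liId);
  if (!match) throw new Error('unexpected id format: ' + liId);
  return 'http://twitter.com/' + user + '/statuses/' + match[1];
}

console.log(statusUrl('domderrien', 'status_7449266450'));
// → http://twitter.com/domderrien/statuses/7449266450
```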
View-source: a must have! ;) A+, Dom
I feel that view source is of critical importance to the web both historically and in the future.
I've blogged a more detailed response to your post: http://thomasjbradley.ca/blog/save-view-source
And I would like to start a movement:
Save View Source! http://saveviewsource.org
I [heart] View-Source.
Thanks for taking the time to respond. Sorry for being so late in replying.
So I understand and acknowledge your concern, but I'd submit that it's borne of constraints and requirements that aren't common. Indeed, you're making a compelling case that the web badly serves uncommon cases, and for what it's worth, I tend to agree. For the special cases, tools should absolutely continue to fill the gap, and I think there's a bright future for the Dojos and GWTs of the world in that niche. Where I part ways with your perspective is how it prescribes how future work should be prioritized.
You're (maybe implicitly) arguing that browser vendors should spend more time making it possible to handle all edge cases in lieu of building affordances for common cases. This is a code-centric view of the world, and one I strongly disagree with. Folks who can code are already enfranchised. They barely need the help, and when they do, it tends to be a question of taste and effort, not of possibilities. Yes, their opportunities to express their ideal is constrained, but that's the price of a truly ubiquitous runtime.
In any case, I argue it's not worth abandoning the benefits of view-source to get there anyway. Big, knobby, semantic-ish forms of agreement *explicitly* require that we give things up in order to get the benefits of searchability, reliability, and ubiquity that HTML has provided. Put another way, there are a lot of reasons that the era of C++ desktop apps is drawing to a close, but chief among them is that the price of the flexibility was so high that it had the net effect of stunting the pool of participants. That's a losing hand in a world where we can keep throwing more transistors at the problem, year over year.
Regards