Comments for View-Source Follow-Up
Providing the uncompressed, unoptimized JavaScript alongside the "production-environment" one is unfortunately only a partial solution. It also doesn't easily allow folks to step through the code, watch events, etc. on the live site (unless you provide two versions of the app as well).
Perhaps this isn't a contradiction.
Perhaps as time goes on, I will be able to deliver my plain, commented JavaScript over HTTP and the browser will do the compiling into byte code directly. This might even be another competition point between browsers.
From the declarative side, I would expect XBL, XUL, and that type of thing to one day take hold on the web, meaning that the 'lower-level' building blocks of HTML and SVG kind of fade away.
I talked about this a couple of years ago here: http://www.codedread.com/blog/archives/2005/12/15/putting-the-pieces-together/ Food for thought, anyway.
The reason I ask you to elaborate on your comment regarding browser caching and batching of layout operations is that what I have implemented does include layout caching in the browser. So it is not the case at all that scripting precludes caching of layout operations.
You can download the POC from http://downloads.eforceglobal.com/CSS/CSSScriptLayoutR21525.zip and check out the performance yourself. You can resize the browser and note the resize performance of the container and nested containers in the examples provided. The executable should work on any Windows box.
The web is special, in that much of the JavaScript source code from everyone has been available to everyone generally unmolested. Almost all other languages people program in get compiled, and you never see the source code unless it's specifically Open Source.
I'd imagine that even with the advent of more JavaScript code obfuscators (since that's really what they seem to be, adding no functionality, and removing legibility), people will make conscious choices about how "Open" their JavaScript gets to be.
This is not necessarily a bad thing. People have managed to learn C/C++/Java/etc. without all the code out there being available for anyone to see at their whim.
The main problem I see is code developers mistaking obfuscation for security.
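To illustrate why obfuscation shouldn't be mistaken for security, here is a hedged sketch with a hypothetical function: a readable version and an "obfuscated" one that compute exactly the same thing. Since the obfuscated form still ships the full logic to every visitor, anyone with a beautifier can recover the behavior in seconds.

```javascript
// Readable version: clamp a value to a [min, max] range.
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

// "Obfuscated" version: same logic, names stripped and whitespace removed.
// Nothing is hidden; only legibility is lost.
function a(b,c,d){return Math.min(Math.max(b,c),d)}

console.log(clamp(15, 0, 10)); // 10
console.log(a(15, 0, 10));     // 10
```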
And all I do is write complex JavaScript code. When I'm doing something that hasn't been done before, the browsers or the W3C have good enough docs for me to figure it out.
I think view source fostered the right culture that will be carried on by great frameworks.
-
Browser-delivered apps are more complex. There's no way to fix this. But many of us are about to develop in a thin layer over mature libraries, so that problem is somewhat mitigated.
-
Obfuscation and compression. Do you think the world could use a standard attribute in HTML, to link to the uncompressed source? To somehow give the recipe for the page?
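To make the proposal concrete, here is a sketch of what such a link might look like. The `source` attribute in the markup below is entirely hypothetical (no such standard attribute exists as far as I know), and the toy extractor just pulls it out of a tag string:

```javascript
// Hypothetical markup: a script tag that points at its own
// uncompressed, commented source. The "source" attribute is invented
// here purely for illustration.
const markup = '<script src="app.min.js" source="app.js"></script>';

// Toy extractor: recover the uncompressed-source URL from the tag.
function uncompressedSource(tag) {
  const match = tag.match(/\bsource="([^"]+)"/);
  return match ? match[1] : null;
}

console.log(uncompressedSource(markup)); // "app.js"
```

A tool or curious reader could then fetch `app.js` to study the real recipe for the page, while browsers keep downloading the compressed build.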
Our colleagues who work on native OS open source applications have always had this problem, and have devised many cultural workarounds, including the GPL. Even then, setting up an editing and building environment for a new open source project is still a tremendous pain.
Saying that "ever-heavier use of JS is shutting many...out of the process" is like saying that Chuck Yeager shut everybody out of flying on airplanes. The heavy use of JS is trying to solve a different (and complementary) problem.
On the other hand, I'm not convinced that ever-more-complicated HTML markup is the next step either. I'm really excited about some of the cool fx that webkit people are doing with CSS. Or maybe it will be a new markup language entirely.
Not that he’s average, but I believe that’s a big part of how Bill Gates learned to program too — going along to his local university’s computer science department, and reading the print-outs of what they were writing there.
I totally agree that the right solution needs to be the tiered process you recommend, with the steps roughly outlined as:
1.) build in script where possible
2.) wait and see what gets popular
3.) translate to native API and ship in browsers
4.) duke out "official" shape of native API at standards body
I don't mean to suggest that there's not any natural place for script, only that if we think "the plan" should continue to require ever-larger piles of script, it'll mean that steps 3 and 4 just aren't happening at an acceptable rate.
Put another way, we need compilers and minifiers only when we need so much script that we can't just write what we mean in a couple hundred lines. When the platform does most of what we need without huge piles of script, the pressure to use tools that defeat view-source (either plugins or huge piles of script) goes down markedly. Maybe that means we need new extension points in the platform a la HTCs, maybe it means we need to be able to write a lot of behavioral code in fewer characters (the DOM sucks today), and maybe it just means we need new APIs to do common-case stuff.
In any case, I'm not arguing that minifiers or compilers are bad. They keep the web competitive until such time as the cavalry shows up with new features in the platform. But they absolutely should have a sell-by date.
Regards
I completely agree with you that view-source is important. It's how I learnt most of my web stuff. However, obfuscation is not necessarily what site owners want, right? Performance and latency are of very high importance, and I wouldn't care two hoots about obfuscation, yet I'd compress. If Closure Compiler can help me improve experience, I'd go with it. If Packer does a better job, that's the way to go.
I don't know how to solve this problem. Even if jQuery, dojo, etc. were bundled with browsers, I'd still compress my code. Maybe the only way to solve this problem is to invent a new mechanism of distributing code without it being dependent on latency of any form. Compression then would have no value, and people would start shipping source, comments and all, without thinking too much about it. I'm not smart enough to see how this could happen, though.
I already run a site where I'm distributing JS without compression, since there is so little JS that the value of compressing it is negligible. Gzip does the trick well enough. But I can see that it doesn't apply to everyone.
Um, no. By way of background, I helped set up the Dojo Foundation because licensing and property rights matter, even for things you can read. The remedies are legal, not technical, though. In compiled languages decompilers can help folks understand what's going on, so if that's your threat, then compiling isn't gonna solve anything.
Now that we're past you reading too deeply into my perspective, let's talk about folks caught in the middle: people trying to just Make It Work (TM). For them, learning from snippets they find on the web in order to pick up a technique is invaluable. Poll any average group of web developers and they'll tell you straight-up that that's how they learned. Not by stealing, but by dissecting and inspecting and tweaking and then re-implementing a technique in a new and unique situation. What we're contemplating with ever-heavier use of JS is shutting many of the least experienced out of this process entirely. That might be OK in the short run, but the long-run effects aren't positive.
Rowan:
I totally agree that this isn't one-or-the-other, but I see two risks: the first is that the web stops being competitive for any given amount of developer effort. Everything that requires JS today is by default harder to do than the same thing would be if it were provided by CSS or markup. The second risk is that more of the content will go the way of gmail code out of necessity as tools and compilers become part of the assumed toolchain in response to increased expectations about interactivity and UX. I think we've got some years before we're totally hosed here, but we need to act today to make sure things end up in a good place, encouraging browser vendors (like me) to add new native capabilities and to ask for forgiveness instead of permission.
Regards
Regarding re-democratizing the web, new CSS properties, JS-to-native-API adoption, etc.
Layout is one of the things "that requires JS today [that] is by default harder to do than... if it were provided by CSS."
Several past rants by various folks bear this out, although CSS3 helps to some extent.
I have developed a layout spec in CSS that incorporates JS. I know that Alex commented on it last July, expressing his reservations. However, my description and treatment of it at the time was too superficial for an adequate hearing. I have since updated it to a large extent at http://blogs.eforceglobal.com/dkarisch/archive/2009/07/16/536.aspx
I think it not only addresses the continued weakness of layout in CSS, but also fits in with the general tenor of the desired qualities that I referred to at the beginning of this post.
I would have preferred that this reply be under the CSS 3: Progress! post from August, but I'm too late for that.
Nevertheless, this post is at least somewhat apropos.
Alex's comment back in July (as the slightlyoff character at http://ajaxian.com/archives/css-scripting-layout) states that "it [layout script] can’t interact with the browser’s (potential) caching and batching of layout operations."
Alex, can you please explain what you mean by this statement?