In my "Standards Heresy" talk I noted pretty bluntly that CSS 3 is a joke. A sad, sick joke being perpetrated by people who clearly don't build actual web apps. If the preponderance of the working group did, we'd already have useful things like behavioral CSS being turned into recommendations and not turds like CSS namespaces and CSS Print Profile. And I'm not even sure if the "Advanced" Layouts cluster-fsck should be mentioned for the fear that more people might actually look at it. You'd expect an "advanced layouts" module to give us hbox and vbox behaviors or a grid layout model or stretching...but no, the "answer" apparently is ascii art. No, I'm not making this up. It's sad commentary that you can propose this kind of dreck at the W3C and get taken seriously.
Beyond what's obviously wrong with the avenues being (inexplicably) pursued, there's a lot to read into what's not being worked on. Namely the serious and myriad problems with the basics of how CSS rules are written and applied.
Let's start from the bottom: CSS 2 should have included inheritance and variable replacement. It didn't. None of the proposed bits of CSS 3 have their wits about them enough to propose a fix of any kind.
In particular, there's no way for a new, uniquely "named" rule to "mix in" previous rules and tweak them with overrides to fit a particular scenario. It's also not possible to build a "composite class" out of two existing classes. You can either split behaviors into a zillion little classes and then try to ensure that they all get applied to the right elements at the right time, or use fewer (more "semantic"), selector-oriented classes and rely on selective redefinition to specialize them appropriately. This basic inability to factor out common rules which others may then mix in is, in a word, insane.
There's absolutely no reason why, in 2007, we shouldn't be able to write CSS like this:
.highlight {
  color: red;
  text-decoration: underline;
}
.updated {
  @importRule '.highlight';
  background-color: yellow;
}
.updatedByOthers {
  @importRule '.updated';
  color: #3f5070; /* a nice dark blue */
}
As for variable replacement, the above example starts to outline why we need it. Writing themes for any non-trivial system via CSS today is a PITA. Here's how it should look:
@define hlColor red;
@define hlBgColor yellow;
@define oUpdateColor #3f5070;

.highlight {
  color: {hlColor};
  text-decoration: underline;
}
.updated {
  @importRule '.highlight';
  background-color: {hlBgColor};
}
.updatedByOthers {
  @importRule '.updated';
  color: {oUpdateColor}; /* a nice dark blue */
}
Now, updating this for a new "theme" is as simple as moving the color definitions to an external file and changing the location of an @import URL. All our styles remain intact, and beyond the initial replacements (and any event-driven CSS-OM replacements), the performance impact is slight. That the browser vendors have variously taken it upon themselves to expose parameterized values for things like system colors should have told them something important, but alas, no.
The recent growth of CSS frameworks is starting to highlight some of the massive failures of CSS and its implementations, but it's not clear how the web development community is going to shake itself awake and start asking for more than proper ":hover" support from IE. And given the pitiful state of CSS 3, it's not clear that we should even ask the W3C for improvements anyway.
This, I think, is an open and important problem: now that the failure of the W3C is all but complete, what organization (WHAT-WG? something new?) could take on the challenge of, um, challenging the browser vendors to build the stuff we need to keep the Open Web viable? And since we can't reasonably expect IE to support things in a timely fashion, do we owe it to ourselves to start building apps for browsers that will give us what we want?
Update: I'm late to this party (as usual). Andy Budd's thoughts on a "CSS 2.2" are worth a read as is Hixie's dissection of the CSS WG's uselessness. David Baron also responds with constructive suggestions.
Thanks to the folks at Ajaxian for linking to my last post on the topic of what we need from IE. As I've been responding to the comments, it occurred to me that it's not quite fair to poke IE in the eye when there are issues (like WYSIWYG) where we need the help of all the browser vendors to get something useful done. With that in mind, here's my generic Browser.Next list of 10 issues that would give Ajax libraries a break and let app authors worry less.
It should be noted, first, that these issues are designed to be small, (relatively) easily implemented point solutions to acute problems. They are intentionally not on the scale of "add a new tag for..." some feature, or "standardize and implement XBL", or even "make renderers pluggable". While those would all be good, the current pace of browser progress makes me think they're beyond the pale.
This list also tries to avoid vendor-specific issues (of which there are a pile, and many of them may be much more pressing on a per-browser basis than the below issues). Lastly, I'm also not asking for standardization of these things in short order (although it's clearly preferable). We DHTML hackers are hearty folk. We'll take whatever they give us, but we could deliver much, much better developer and end-user experiences if only the browser vendors would all give us:
- Event Opacity: we need a way to tell browsers that, for some nodes in the z-index stack, events should be passed through to whatever sits beneath them. For instance, when implementing drag-and-drop, Ajax libraries face a stark choice: attempt to calculate the locations of all drop targets to enable dragging of the source with offset (expensive hitbox calculations, ahoy!), or move the drag shadow with an offset from the cursor (efficient, and what we do in 0.9, but visually unsatisfying). Yes, yes, there are WHATWG specs for drag-and-drop and various browser vendors have implemented them to some extent for some time, but what I'm asking for here is much more constrained: a single API extension that can help us deliver better experiences until all the browsers get DnD right. There are also lots of other use cases where visual decorations should be able to defer their events to underlying elements (think drop shadows, etc.)
- Long-Lived Connections: this was on my IE7 list, but it's still a problem nearly everywhere. The basic issue is that we can't implement Comet reliably because if two tabs each keep a long-lived connection to the same server, no other connections can be opened to that server, meaning that normal Ajax (and even style changes) will be blocked. Very often this means that an app will appear to be locked up. One solution is to provide a way to specify, via an HTTP header, that a connection will be long-lived, or to give pages a way to request that more concurrent connections be made available to a particular server (on a per-tab basis, and with a hard limit, of course). If feasible, it might suffice to simply make the per-server connection limits per-tab instead of global. Whatever the solution, we need multiple tabs to each be able to create Comet connections without worrying.
- Expose [DontEnum] To Library Authors: The contortions that Ajax toolkit vendors go through to keep iteration over JavaScript objects and primitives coherent are, quite simply, insane. Much of Dojo, in particular, is designed around this problem (a sketch of it follows this list). Giving us a way to say that a property is [DontEnum], before ES4 lands, would go a long way toward alleviating this back pressure.
- Fast LiveCollection -> Array Transforms: That many DOM APIs return live collections is a bug, but it need not be fatal. Browser vendors could start by providing a simple toArray() method on these live collections as a way to "fix" them in place (see the sketch after this list).
- Provide A Blessed Cache For Ajax Libraries: Long story short, Ajax libraries are going to be here for a while. The idea that a browser is going to be so good that it will remove the need for a JS library simply doesn't hold water any more, and even if one browser were that good, the other browsers wouldn't be, and none of them would be pervasive enough given current upgrade trends. We need to be able to better support our own patches to the browsing platform, and we need the browser vendors to get on board and realize that Ajax toolkits aren't a threat. We can't be...we don't have enough leverage. With all of that being true, it's high time for the browser vendors to give Ajax toolkit vendors a way to specify a canonical URL or hash scheme which would bypass the network entirely, cross-site. This is something of an extension of the CDN concept for Ajax toolkits, but it would go a long way toward fundamentally changing the way Ajax toolkits evolve. Instead of fretting about how much 10K on the wire is going to degrade the user experience, we could focus on delivering better and better tool sets, even behind the firewall or offline (where CDN usage isn't feasible).
Obviously, this one is going to require some vendor coordination, but it's the kind of thing where if one vendor does it (well), everyone else should follow quickly without much risk. The Open Ajax Alliance could even function as a body to provide a list of toolkits and hashes to browser vendors should they demur from the task themselves. Lastly, before the flames start rolling in on this topic, I should note that I'm proposing this with some hesitation. Who are we (the Ajax toolkits) to suggest that our content deserves a more "blessed" cache position than site content? I've been wrestling with this for a long time, but now believe that we don't have much of a choice. This solution is good for everyone, and while it has the potential to create an uneven playing field, I think that can be handled at an organizational level.
- Mutation Events: The browsers already know when a new item is added to the DOM, so why can't they tell us, the poor toolkit/framework authors? I'm not going to suggest in this list that browser vendors should fully figure out HTML tag subclassing, since it will generally require architecture changes for the least capable browsers (*cough* IE *cough*). Instead, point solutions like mutation events everywhere (sketch after this list) will go a long way toward letting us further band-aid their brokenness and more seamlessly upgrade content while we wait for the new-tag cavalry to arrive.
- onLayoutComplete: onDomReady doesn't cut it. Toolkits that want to avoid FOUC when applying behaviors and progressive enhancement to pages are currently attempting to get into the page rendering stream as early as possible. The problem is that anything which needs to manage the layout of widgets on the page needs to know their dimensions, which means that CSS must already have been applied and any initial flow computations completed. Obviously, there are issues with progressive rendering of a page, but generally speaking I believe toolkits are looking to browser vendors for a semantic that is roughly equivalent to "after onDomReady, but potentially before all images have finished loading, inform us when the layout and geometry have stabilized."
- HttpOnly cookies: There's a lot wrong with WebAppSec these days, and the traditional trust boundaries are constantly under attack. Worse, none of the browser vendors seem to feel it's their responsibility to figure out cross-domain or JS sandboxing. This infuriating state of affairs leads directly to my next item, but the minimum any browser vendor should be required to do is to implement HttpOnly cookies (example after this list). It's no silver bullet, but it's another tool in the toolbox and one we need badly.
- Bundle Gears: Until its primary APIs are put through the standardization process and introduced in browsers natively, any browser that includes Flash support should, out of good-faith respect for the Open Source and Open Web nature of its intent, bundle Google Gears as soon as it's stable. A commitment to do so in the future will suffice until that time.
- Standardize on the Firebug APIs: Firebug provides great debugging and performance profiling APIs (a sketch follows this list). These need to be built in so that we can stop shipping Firebug Lite around the net ad infinitum (as we do in Dojo 0.9). Having built-in timing tic/tock APIs and a UI to view the results would be hugely useful. This falls far short of other proposals that have been floated for unified debugging APIs and protocols, but again represents the least vendors can do to alleviate the pain.
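To make the [DontEnum] point concrete, here's a minimal sketch of the problem toolkits code around today; the property and object names are illustrative, not from any particular library:

// Some script on the page extends Object.prototype...
Object.prototype.extra = function(){};

var item = { id: 42, label: "updated" };
for(var key in item){
  // ...and without a guard, "extra" shows up alongside "id" and "label".
  if(item.hasOwnProperty(key)){
    // Only now is it safe to treat `key` as real data.
  }
}
// A [DontEnum] flag on "extra" would make the guard unnecessary.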
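For the live collection item, this is roughly what we do today to "fix" a collection in place, and what a native method could replace; toArray() is the proposed addition, not an existing API:

// Today: copy a live collection by hand so it won't mutate underneath us.
var live = document.getElementsByTagName("div");
var frozen = [];
for(var i = 0; i < live.length; i++){
  frozen.push(live[i]);
}
// Proposed: a fast, native equivalent.
// var frozen = document.getElementsByTagName("div").toArray();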
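For the mutation events item, the DOM Level 2 events already work in some engines (Gecko, for instance); the ask is simply that they work everywhere. maybeApplyBehaviors is a hypothetical toolkit helper:

// Where DOMNodeInserted is supported, a toolkit can upgrade new content as it arrives.
document.addEventListener("DOMNodeInserted", function(evt){
  maybeApplyBehaviors(evt.target); // hypothetical: decorate newly inserted markup
}, false);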
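For HttpOnly cookies, IE already implements the flag; it's set by the server, not by script. The cookie name here is made up:

// Server response header for a hypothetical session cookie:
//   Set-Cookie: SESSIONID=abc123; path=/; HttpOnly
// In a browser that honors the flag, script injected via XSS can't read it:
document.cookie; // does not include SESSIONID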
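Finally, the Firebug calls I'd like to see everywhere already look like this; renderWidgets stands in for whatever expensive operation you're profiling:

console.time("render");
renderWidgets();            // hypothetical expensive operation
console.timeEnd("render");  // logs the elapsed time to the console
console.log("widgets rendered"); // basic logging without alert() hackery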
So that's my list...what's on yours? What am I forgetting? And how should we organize to ask the vendors for these in a way that will really stick?
It's somewhat inexcusable of me to not have blogged about the release of Open Komodo.
Very few of the Web IDE vendors seem to really "get" the web, and along with the folks at Aptana and Panic, the ActiveState bunch have impressed the hell out of me with their constant support of Open Source, deep understanding of why webdev sucks, and what they can do to fix it.
It's exciting to see Komodo, one of the few editors that has ever tempted me away from vim (even if for short spells), open up and make real steps to being "The Open Web's Eclipse".
Above is the PDF from today's talk. I had a good (but unfortunately truncated) discussion with Aaron Gustafson afterward, and it appears that there are those on the standards advocacy front who understand that those of us who "just make it work" for a living aren't evil and want exactly the same things. Hopefully this will open up a broader discussion (although I suppose that posting something on a blog hardly counts as "discussion").
The state of in-browser WYSIWYG is somewhere between pitiful and mind-numbingly painful. Opera and Safari have pulled themselves up by the bootstraps and soon all the major browsers will be at the same level of awful, more or less. This area of DHTML doesn't get much love from browser vendors in part because only the heartiest souls ever venture into these deep, shark-infested waters so there aren't many people clamoring for fixes to the underlying capabilities. Everyone sane just picks up the Dojo Editor, TinyMCE, or one of the other good editing packages that are available.
Since recently delving back into the Dojo editor for the 0.9 release I've been chewing on the problem some more, and I think the solution is fairly simple in terms of the APIs which toolkit authors should expect of browser vendors. The goals of editing generally boil down to:
- Allow users to apply formatting to stuff they have
- Let users add new stuff
- Allow users to undo stuff they did
- Serialize the stuff users have done and, optionally, the undo stack
The current contentEditable/designMode systems fail in the undo case because (particularly on IE) it's not possible to denote what is and isn't an "action" that the user is taking, nor can the browser inform you when it pickles off a new state to the undo stack. This means that the undo stack captures things which aren't changes to your editing area, and it may appear "broken" by UI feedback that you provide to users in other ways.
Further, the existing system's dependence on pseudo-magical "commands" makes nearly zero sense. Every editing component worth its salt today has to build its own ways of executing DOM manipulation and then rolling back change sets. Browsers half-coddle editing system authors when it would be better if they just got out of the way and gave us APIs suited to the "build the entire UI in JavaScript" path which everyone already takes anyway.
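For reference, the command layer in question looks roughly like this today; the calls are real, but which commands exist and how they behave varies per browser:

// Toggle bold on the current selection in an editable region.
document.execCommand("bold", false, null);
// Wrap the selection in a link.
document.execCommand("createLink", false, "http://example.com/");
// Editors wrap these, then end up re-implementing half of them with
// direct DOM manipulation anyway.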
Since it's not really reasonable to expect that browsers will remove contentEditable, here are my proposed API additions to it, which would allow any sane editing UI to ditch the entire command structure; the commands themselves can then slowly fade into the background over time.
editableElement.openUndoTransaction(callbackHandler)
: starts an undo transaction, implicitly closing any previously opened transactions. All subsequent DOM manipulation to elements which are children of this element will be considered part of the transaction and normal browser-delimited undo transaction creation is suspended until the transaction is closed. The optional callback handler is fired when the user cycles back this far in the undo stack from some future state.
editableElement.closeUndoTransaction()
: ends a manual undo transaction. Implicitly called by openUndoTransaction. Closing the transaction has the effect of pushing the current DOM state (or whatever differential representation the browser may use internally) onto the browser's undo stack for this editable element. When an undo transaction is closed, browsers may resume automated generation of undo states on the stack intermingled with the manually added states.
- Support for non-standard DOM positioning properties of range objects as outlined in MSDN
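Here's a quick sketch of how an editor might use the proposed transaction calls; keep in mind that openUndoTransaction/closeUndoTransaction don't exist yet, and insertSmiley, the toolbar helper, and currentSelectionRange are hypothetical editor code:

var editable = document.getElementById("editor"); // an element with contentEditable set

function insertSmiley(){ // hypothetical toolbar action
  editable.openUndoTransaction(function(){
    // fires if the user later undoes back past this point
    toolbar.resetSmileyState(); // hypothetical UI cleanup
  });
  // Plain DOM manipulation; all of it lands in a single undo step.
  var img = document.createElement("img");
  img.src = "smiley.png";
  currentSelectionRange(editable).insertNode(img); // hypothetical selection helper
  editable.closeUndoTransaction();
}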
These APIs, added to elements with contentEditable set, will allow us to use regular-old DOM methods for manipulating user selections and adding complex content from user input without fighting for control of the undo stack or inventing our own (which has so many problems that I don't want to begin to address them). Additionally, this method of manipulation will allow toolkit authors to deliver editors which operate on the semantics of the markup more easily.
Note that I assume the current uneven level of Range and DOM APIs will persist, though some things may get easier over time in conjunction with these APIs as those problems are slowly alleviated. Additionally, interaction with the browser's global undo stack is as-yet unspecified. I'm inclined to suggest that undo should not affect an editable element unless it has focus, but my unfamiliarity with how browsers implement their global undo stacks may nix that and require a broader solution. There may also need to be methods for ignoring a particular set of DOM operations (say, from event handlers) to prevent browsers from taking snapshots at bad times, but I think we can ignore that for now.
Lastly, there is probably room for an API to register interest in any undo operation and to push things onto the browser's undo stack for non-editing elements, but this API solves the problem where it is most acute today.