I've been having a several-day mail, IRC, and Twitter discussion with various folks about performance and the feature-detection religion, particularly on mobile, where CPU ain't free. So what's the debate? I say you shouldn't be running tests in UAs where you can dependably know the answer a priori.
Wait, what? Why does Alex Russell hate feature testing, kittens, and cute fuzzy ducklings?
I don't. Paul warned me that my approach isn't going to be popular at first glance, but hear me out. My assumptions are as follows:
- Working is better than busted
- Fast is better than slow
- No browser vendor changes the web-facing features of a given version after it ships. Evar. Does not happen.
If you buy those, then I think we can all get some satisfaction by retracing our steps and asking, seriously, what is the point of feature testing?
Ok, I'll go first: feature testing is motivated by a desire not to be busted, particularly in the face of new versions of UAs which will (hopefully) improve standards support and reduce the need for hacks in the first place. Sensible enough. Why should users wait for a new version of your library just 'cause a new browser was released, or because you didn't test on some version of something?
Extra bonus: if you don't mind running them every time, you can write just the feature test and your work is done now and in the future! Awesome! Except some of us do mind. Yes, things are now resilient in the face of new UAs and new versions of old ones, but only on the back of testing for everything you need every time you load a library on a page. Slowly. Veeerrrrry slooowly.
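For concreteness, here's the shape of the kind of test that gets re-run on every load. This is a hypothetical sketch; the function name and property list are mine, not lifted from any particular library:

```javascript
// Hypothetical feature test: does this browser support CSS transforms?
// Cheap-looking on its own, but libraries run dozens of these per load.
function hasCssTransforms(doc) {
  var el = doc.createElement('div');
  var props = ['transform', 'WebkitTransform', 'MozTransform',
               'msTransform', 'OTransform'];
  for (var i = 0; i < props.length; i++) {
    // A supported property shows up on the style object (as '').
    if (el.style[props[i]] !== undefined) return true;
  }
  return false;
}
```

Multiply this by every feature a library cares about, on every page load, and the cost stops being theoretical.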
Paul suggested that some library could use something like Local Storage to cache the results of these tests locally, but this hardly seems like an answer. First, what if the user upgrades their browser? Guess you have to cache and check against the UA string anyway. And what about the cost of going to storage? Paul reported that these tests can be wicked expensive to run at all, on the order of 30ms for the full suite (which you hopefully won't hit...but sheesh). Reported worst-case for has.js is even worse. But apparently going to Local Storage is also expensive. And we're still running all these tests in the fast path the first time anyway. If we think that they're so expensive that we want to cache the results, why don't we think they're so expensive that we don't want to run them in the common case?
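The caching idea Paul floated would have to look something like the sketch below, assuming a Local Storage-style key/value store. Note the UA-string check: it's unavoidable, because a cache built by a previous browser version is worthless after an upgrade. All names here are invented for illustration:

```javascript
// Sketch of client-side caching of feature-test results, keyed on the
// full UA string so a browser upgrade invalidates the cache.
function cachedTests(storage, uaString, runAllTests) {
  var raw = storage.getItem('featureCache');
  if (raw) {
    var cached = JSON.parse(raw);
    // Same UA string => same browser version => results still valid.
    if (cached.ua === uaString) return cached.results;
  }
  // First visit (or post-upgrade): pay the full testing cost anyway...
  var results = runAllTests();
  storage.setItem('featureCache',
                  JSON.stringify({ ua: uaString, results: results }));
  return results;
}
```

Which makes the problem plain: the slow path still runs, in full, for every first-time visitor.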
Now for a modest proposal: feature tests should only ever be run when you don't know what UA you're running in.
Feature-testing libraries should contain pre-built caches -- the kind that come with the library, not the kind that get built on the client -- but they should only be consulted for UA versions that you know you know. If we assume that behavior for a given UA/version combination never changes, we've got ourselves a get-out-of-jail-free card. Libraries can have O(1) behavior in the common case, and in the situations where feature testing would keep you from being busted, you're still not busted.
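The proposal above can be sketched in a few lines. The cache keys and test names below are made up for illustration, not from any shipping library:

```javascript
// Pre-built cache shipped with the library: answers for UA versions we
// know we know. (Real libraries would generate this at build time.)
var KNOWN_RESULTS = {
  'chrome/11': { cssTransforms: true,  localStorage: true  },
  'ie/6':      { cssTransforms: false, localStorage: false }
};

function detectFeatures(uaKey, liveTests) {
  // Known UA/version: O(1) lookup, zero tests run.
  if (KNOWN_RESULTS.hasOwnProperty(uaKey)) {
    return KNOWN_RESULTS[uaKey];
  }
  // Unknown UA: fall back to running the real feature tests,
  // so nothing breaks in browsers we've never seen.
  var results = {};
  for (var name in liveTests) {
    results[name] = liveTests[name]();
  }
  return results;
}
```

Known browsers hit the fast path; everything else degrades to exactly the behavior feature-testing libraries have today.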
So what's the cost of this? Frankly, given the size of some of the feature tests I've seen, shipping pre-built caches is going to add pretty minimal bloat vs. the tests themselves. Performance work is always a tradeoff, but if your library thinks it's important not to break and to be fast, then I don't see many alternatives. New versions of libraries can continue to update the caches and tests as necessary, keeping the majority of users fast while still keeping things working in hostile or unknown environments.
Anyhow, if you're a library author or maintainer, please please please consider the costs of feature tests, particularly the sort that mangle the DOM and/or read back computed layout values. Going slow hurts users, hurts the web, and hurts the culture of performance that's so critical to keeping the platform a viable contender for the next generation of apps. We owe it to users to go faster.
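If you're wondering what "mangle the DOM and read back computed layout" looks like in practice, here's a hypothetical worst-offender, invented for illustration:

```javascript
// Hypothetical test: does the browser honor box-sizing: border-box?
// Appending a node and then reading offsetWidth forces a synchronous
// reflow -- exactly the kind of cost you don't want on every load.
function hasBorderBoxSizing(doc) {
  var el = doc.createElement('div');
  el.style.cssText = 'box-sizing:border-box;width:100px;padding:10px;';
  doc.body.appendChild(el);     // mangle the DOM...
  var width = el.offsetWidth;   // ...then read back computed layout (reflow!)
  doc.body.removeChild(el);
  return width === 100;         // border-box keeps the padding inside 100px
}
```

Tests like this are precisely the ones whose answers never change within a browser version -- and so are precisely the ones a pre-built cache should be answering.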