Cutting The Interrogation Short

I’ve been having a several-day mail, IRC, and Twitter discussion with various folks about performance and the religion of feature detection, particularly on mobile where CPU ain’t free. So what’s the debate? I say you shouldn’t be running tests in UA’s where you can dependably know the answer a priori.

Wait, what? Why does Alex Russell hate feature testing, kittens, and cute fuzzy ducklings?

I don’t. Paul warned me that my approach isn’t going to be popular at first glance, but hear me out. My assumptions are as follows:

  • Working is better than busted
  • Fast is better than slow
  • No browser vendor changes the web-facing features in a given version. Evar. Does not happen.

If you buy those, then I think we can all get some satisfaction by retracing our steps and asking, seriously, what is the point of feature testing?

Ok, I’ll go first: feature testing is motivated by a desire not to be busted, particularly in the face of new versions of UA’s which will (hopefully) improve standards support and reduce the need for hacks in the first place. Sensible enough. Why should users wait for a new version of your library just ’cause a new browser was released, or because you didn’t test on some version of something?

Extra bonus: if you don’t mind running them every time, you can write just the feature tests and your work is done, now and in the future! Awesome! Except some of us do mind. Yes, things are now resilient in the face of new UA’s and new versions of old ones, but only on the back of testing for everything you need every time you load a library on a page. Slowly. Veeerrrrry slooowly.

Paul suggested that some library could use something like Local Storage to cache the results of these tests locally, but this hardly seems like an answer. First, what if the user upgrades their browser? Guess you have to cache and check against the UA string anyway. And what about the cost of going to storage? Paul reported that these tests can be wicked expensive to run at all, on the order of 30ms for the full suite (which you hopefully won’t hit…but sheesh). Reported worst-case for has.js is even worse. But apparently going to Local Storage is also expensive. And we’re still running all these tests in the fast path the first time anyway. If we think that they’re so expensive that we want to cache the results, why don’t we think they’re so expensive that we don’t want to run them in the common case?
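To make the caching idea concrete, here’s a minimal sketch of what Paul’s suggestion might look like (all names here are illustrative, not from any real library). Keying the cache on the full UA string answers the upgrade question: a new browser version gets a new key and re-runs the suite.

```javascript
// Sketch of the client-side caching idea. The storage object is
// injectable so it can be anything with getItem/setItem; in a browser
// you'd pass window.localStorage and navigator.userAgent.
function cachedFeatures(storage, uaString, runAllTests) {
  var key = "featureTests:" + uaString;
  var cached = storage.getItem(key);
  if (cached) {
    // Cache hit: skip the expensive test suite entirely.
    return JSON.parse(cached);
  }
  // First visit (or new UA string): pay the full cost once.
  var results = runAllTests();
  storage.setItem(key, JSON.stringify(results));
  return results;
}

// In a browser, roughly:
//   var features = cachedFeatures(window.localStorage,
//                                 navigator.userAgent, runSuite);
```

Note that this is exactly the structure being criticized above: the first page load still runs every test in the fast path, and every subsequent load still pays for the storage read.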

Now for a modest proposal: feature tests should only ever be run when you don’t know what UA you’re running in.

Feature testing libraries should contain pre-built caches — the kind that come with the library, not the kind that get built on the client — but they should only be consulted for UA versions that you know you know. If we assume that the behavior of a given UA/version combination never changes, we’ve got ourselves a get-out-of-jail-free card. Libraries can have O(1) behavior in the common case, and in the situations where feature testing would keep you from being busted, you’re still not busted.
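A sketch of what that lookup might look like (the cache contents, feature names, and function names here are hypothetical, not from any shipping library):

```javascript
// Pre-built caches shipped with the library, keyed on exact
// UA + version. Entries are illustrative, not authoritative.
var knownCaches = {
  "Firefox 3.6": { "json": true, "dom-addeventlistener": true  },
  "IE 8":        { "json": true, "dom-addeventlistener": false }
  // ...refreshed in new library releases as browsers ship
};

function detectFeatures(uaName, uaVersion, runTests) {
  var key = uaName + " " + uaVersion;
  if (Object.prototype.hasOwnProperty.call(knownCaches, key)) {
    // Exact UA + version match: answers are known a priori. O(1).
    return knownCaches[key];
  }
  // Unknown (or future) UA: fall back to running the real
  // feature tests, so you're still not busted.
  return runTests();
}
```

The important property is the fallback: an unrecognized UA never gets guessed-at answers, it just pays the feature-testing cost that everyone pays today.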

So what’s the cost to this? Frankly, given the size of some of the feature tests I’ve seen, it’s going to be pretty minimal vs. the bloat the feature tests add. All performance work is always a tradeoff, but if your library thinks it’s important not to break and to be fast, then I don’t see many alternatives. New versions of libraries can continue to update the caches and tests as necessary, keeping the majority of users fast, while at the same time keeping things working in hostile or unknown environments.

Anyhow, if you’re a library author or maintainer, please please please consider the costs of feature tests, particularly the sort that mangle the DOM and/or read back computed layout values. Going slow hurts users, hurts the web, and hurts the culture of performance that’s so critical to keeping the platform a viable contender for the next generation of apps. We owe it to users to go faster.

A quick aside: I hesitated writing this for the same reasons that Paul cautioned me about how unpopular this was going to be: there’s a whole lot of know-nothing advocacy that’s still happening in the JS/webdev/design world these days, and it annoys me to no end. I’m not sure how our community got so religious and fact-disoriented, but it has got to stop. If you read this and your takeaway was “Alex Russell is against feature testing”, then you’re part of the problem. Think of it like a feature test for bogosity. Did you pass? If so, congrats, and thanks for being part of the bits of the JavaScript universe that I like.

46 Comments

  1. Posted January 30, 2011 at 6:02 pm | Permalink

    Don’t forget there can be a large (maybe larger in many situations) overhead for actually downloading these tests. If a test isn’t used, it shouldn’t be delivered to the browser, much less executed.

    FWIW, I think the right way to go about feature detection is to use feature-based branching in source code/modules, and create user agent based layers at build time (that have a set of known features that can be used to eliminate tests and unused branches) that we can branch to for the built applications. This allows module code to survive well into the future, while applications can easily rerun a build with the latest user agent information. I believe we are getting all the infrastructure in place for this well in Dojo/RequireJS/has.js. We just need to make sure we put it together properly.
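The module-level pattern looks roughly like this (a sketch only; this has() is a toy stand-in for has.js’s real API, and the feature name is illustrative). At build time, for a UA with a known feature set, the has("...") calls become constants and a minifier strips the dead branch:

```javascript
// Toy has() registry standing in for has.js.
var hasCache = {};
function has(name) { return hasCache[name]; }
has.add = function (name, test) { hasCache[name] = !!test(); };

// A feature test, registered once...
has.add("dom-addeventlistener", function () {
  return typeof document !== "undefined" && !!document.addEventListener;
});

// ...and consumed by module code with no UA strings in sight.
function addListener(node, type, fn) {
  if (has("dom-addeventlistener")) {
    node.addEventListener(type, fn, false);
  } else {
    // Old-IE path: eliminable entirely from builds targeting
    // browsers known to support addEventListener.
    node.attachEvent("on" + type, fn);
  }
}
```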

  2. Nathan Toone
    Posted January 30, 2011 at 6:36 pm | Permalink

    Amen!

  3. Posted January 30, 2011 at 6:40 pm | Permalink

    I KNEW IT! I KNEW IT! I KNEW you hated babies and apple pie!!!

    -C

  4. Posted January 30, 2011 at 6:49 pm | Permalink

    > I say you shouldn’t be running tests in UA’s where you can dependably know the answer a-priori.

    I wholeheartedly agree here, in certain circumstances. If I’m writing JS for an iTunes LP, I know I’m in some flavor of Safari/WebKit and as such, I can use webkit-prefixed CSS gradients and 3D transitions, etc. If I’m writing an extension for Opera, I know I’m in at least version 11, so I can get away with using the document.head DOM accessor or DOM3 Custom Events, etc.

    That’s all fine and good, no need to detect any feature. Seeing people use libraries like jQuery in these same situations makes me cringe a bit knowing all the jQuery.support.* tests are running and all the unused code paths are just sitting there.

    Putting these “special” contexts aside for a moment, and shifting to just your server and my UA playing HTTP ping-pong, how else might we know a priori the capabilities of my UA? It’s obvious by now that (server-side or client-side) UA sniffing won’t do: there are countless examples of people making too many assumptions and screwing over the end user, e.g. sites sniffing for ‘opera’ and sending WAP content to Opera desktop, or Grooveshark thinking Chrome 10 is Chrome 1 and not letting me into the fancy HTML5 site.

  5. Posted January 30, 2011 at 7:57 pm | Permalink

    Sounds reasonable to me. Don’t let the haters get you down, Alex.

  6. Posted January 30, 2011 at 8:40 pm | Permalink

    I guess I’m part of the fervent masses who went on the defensive for feature detection. ;)

    I actually consider myself pretty open, and my biggest problem from the Day of JS on Mobile panel was just wanting to see more reasonable discussion on this topic. I’m glad you clarified your position here. You essentially seem to be offering two options: 1) no feature detection (you know the UA and its available features in advance) or 2) cached feature detection. This seems quite reasonable and sensible.

    Here’s how I understand these options:

    1. no feature detection – this option seems to me to be available only to those building for one browser. There’s no question about the features available, so feature detection is unnecessary. This is reasonable, so long as you’re developing for just one UA. The second you’re developing for two dissimilar UA’s whose features don’t map onto each other, this option stops making sense, because it involves forking the code based on either features or user agent strings.

    2. cached feature detection – available features aren’t tested each time, but are cached in some way. They can be cached by UA sniffing on the server (as Kris suggests above), which delivers only the necessary JavaScript to the client. Or the UA sniffing might be hardcoded into a JS library itself. This seems to be the solution that would make sense for most folks.

    If there’s a significant performance gain to be made, I’m not sure how anyone could be against feature caching on the server or the client. In other words, I think most people would agree with you.

    Let me know if I’m off base here on anything. I think most of the debate might be because there’s so much misunderstanding and lack of discussion and talking past each other. Everyone is a fan of making things go faster. There just needs to be more conversation!

  7. James Downs
    Posted January 30, 2011 at 9:41 pm | Permalink

    The interesting thing about what you are suggesting is that by around 2000, the mobile world was doing this. It was absolutely critical then, because different phones had different markup languages, different mappings for buttons, and wildly different capabilities. This also let us target asset types like 1-bit, B&W, color, JPEG/GIF, and Flash, depending on the device.

    Mike’s comment that UA sniffing won’t do it is just an indication that people are doing it *incorrectly*. It was being done correctly 10 years ago.

    In any case, I think this is precisely the way to do it. A database of capabilities.

  8. Posted January 30, 2011 at 11:45 pm | Permalink

    Chris: you should hear my views on motherhood sometime. I’m bad people.

  9. Posted January 31, 2011 at 1:37 am | Permalink

    I’m gonna echo Mike’s sentiments.

    It’s a nice line of thought, and I’m all for performance, but there is simply no way to reliably detect the current user agent. It can easily be spoofed — and that’s a good thing since sometimes you just have to pretend to use another browser to get a website to work properly (cfr. the cases Mike outlined). It shouldn’t be like that, but it is because people like to write broken UA sniffers.

    To get back to your proposal:

    Now for a modest proposal: feature tests should only ever be run when you don’t know what UA you’re running in.

    So when do you know what UA you’re running in? By just sniffing the UA string, you can never be sure.

  10. Posted January 31, 2011 at 2:43 am | Permalink

    Having UA-optimized libraries and/or feature-test caches presupposes that the UA and its capability profile are reliably knowable. Isn’t that how we got here in the first place? That matrix gets so complex that while you can probably draw up a short list of known environments to target, if you are building for anything more than the big three, four, or five, it quickly becomes impractical.

    I guess if you have a standard feature-test suite and can shortcut some or all of those tests for environments that you can truly positively identify, but fall back to letting the tests load and run, you have something like the best of both worlds. It’s the “truly positively identify” bit, and the maintenance that involves, that gives me the willies.

  11. Posted January 31, 2011 at 4:45 am | Permalink

    > “Mike’s comment that UA sniffing won’t do it is just an indication that people are doing it *incorrectly*. It was being done correctly 10 years ago”

    Really? Ever heard of IE6? UserAgent sniffing is the single most critical reason we had to deal with IE6 for seven-plus years. UserAgent sniffing is the entire reason why so many corporate intranets forbid people to upgrade their browsers, or even their OS. UserAgent sniffing has been THE biggest detriment to the progress of the Web as a technology platform.

    In other words, UA sniffing makes you the John McCain of progress.

    > “No browser vendor changes the web-facing features in a given version. Evar. Does not happen.”

    Except, it sorta does. Ever since Chrome came out, we’ve seen this be the case. And we have examples going back much further than that, especially on mobile, but I digress.

    These days, almost every decent smartphone ships with a mobile browser based on Webkit. But they’re all different, and while Apple usually leads the pack with mobileSafari in terms of supported technologies, all the other vendors typically ship a version of Webkit that has some of its features (improperly) stripped.

    Good overview of this situation here:
    http://www.quirksmode.org/webkit.html

    > “If you buy those …”

    Except I won’t buy those [claims], because having made Modernizr I know for a fact that those claims are incorrect.

    > “feature tests should only ever be run when you don’t know what UA you’re running in.”

    You never know what UA you’re running in. The whole problem with differences between browsers has been multiplied and compounded by the fact that people started doing UA sniffing as a means to combat that. But UAs have historically been a completely unreliable measure, for ANYTHING:

    http://webaim.org/blog/user-agent-string-history/

    You want to know what UA you’re running in? Fine, even if you exclude the legitimate practice of UA spoofing, you MUST include the entirety of this table in your library if you want to know what UA you’re in:

    http://www.zytrax.com/tech/web/browser_ids.htm

    Warning: this is their comment at the top of the page:
    “This page was getting big – we’re talking big. So we split the mobile things onto a separate page.”

    Whoops, you also have to include the entirety of *this* page:

    http://www.zytrax.com/tech/web/mobile_ids.html

    And if you want to make the argument that, “no you don’t have to include the ENTIRE table”, I will point out to you that it is virtually impossible to do 100% proper, accurate UA sniffing by using regex patterns that also produce accurate feature-capability results, ergo, you will almost certainly be testing for a SUBSET of the great range of UA strings on the Internet, ergo, you’re doing UA sniffing wrong. Ergo, your entire premise comes to a screeching halt.

    You say feature-detection hurts the web? It sure as hell doesn’t hurt the web anywhere near as badly as UA sniffing has hurt the web for FOURTEEN YEARS.

    All UA sniffing is poor UA sniffing, and poor UA sniffing has hurt the web deeply since nearly the dawn of browsers. Advocating that practice to continue is far, far more damaging to progress on the web than a couple of milliseconds are.

  12. Posted January 31, 2011 at 6:48 am | Permalink

    I’ve been working on a large project following an extreme feature-based approach (as embedjs follows) and it has been more than relieving to program using this paradigm. The codebase was targeting both desktop browsers and different mobile devices (touch).

    The main benefit I see directly impacting development quality is the cleanliness and structure of the code. There is no branching based on what a platform supports, only features and proper abstraction. Finding and fixing bugs has been so much easier since there was no deep diving into branches and incomprehensible conditionals.

    There clearly is a challenging issue, the one of what do you do if a browser comes along which you didn’t keep in mind (as Sam stated) – though I think it will benefit us more if we try to solve that issue and have a clean codebase rather than the other way around.

    @sam, I agree the identification part is tricky, though again speaking of this particular project (it is by no means the standard though) my experience was that browsers such as ff, s, c, o were pretty much on a level where feature-based abstraction had to happen on a very limited set of apis (we did not target ie :) ), so we did not have insane lists of user-agents to go through – moving over to mobile, same story – the bigger challenge was to handle different interaction paradigms, having clean apis for context menu vs. tap+hold – and in this scenario it was a blessing.

    Maybe we need to look at the whole issue even more from a UI perspective than code-size and performance (even though this is extremely important) – different devices will require different UIs and will have different input/interaction paradigms – with the so often used in-code feature detection we will build massively bloated and unmaintainable apps trying to target different devices..

    Mobile phones are just the beginning…

  13. Sebastian Werner
    Posted January 31, 2011 at 6:53 am | Permalink

    I like the idea. In Jasy (http://github.com/unify/jasy) – my new tooling/build system for JavaScript projects – I already compile in only the tests the app developer asks for. Something like has.js, but ultra-modular and integrated deeply with the dependency system. One thing which might be cool would be to add browser-specific data to every class, which could lead to a fast-path lookup instead of testing. That’s not worth it for simple tests, but as you have written, might make sense for DOM-related tests etc. Especially on mobile.

  14. Posted January 31, 2011 at 8:08 am | Permalink

    Faruk: I’m afraid you’re simply wrong about UA+version breaks in Chrome. New versions of WebKit get shipped with the same Chrome versions only on beta + dev, never on stable.

    If you want to make the case that the development version of a browser should be stable…well…hrm.

    As for calling other “the John McCain of progress”, please know that your comments survive here only so long as you remain civil.

  15. Posted January 31, 2011 at 8:44 am | Permalink

    Alex,

    Fair point, I apologize for that comment, it was out of line.

    As for the UA+Chrome thing, though, I’m not wrong: when Chrome first came onto the scene, no one’s UA sniffing scripts would have included it because it was a brand new browser. So, the only way they would have detected it properly was if they detected for WebKit, which is not unreasonable. However, if they then—again, not unreasonably—assumed that because it was Webkit, the browser supported RGBA and opacity, they would have been gravely mistaken, because Chrome 1 did not support anything with alpha levels.

    We’re seeing this same scenario play out over and over and over again on mobile *right now*. Every time an Android device gets made, it comes with a different Webkit browser, often with a new UA string, and sadly-too-often with yet another _different_ set of features actually supported and implemented.

    So no, I’m not talking about the “development version” of browsers. I’m talking about the actual, shipping-to-consumer versions that I’ve been testing and researching for over ten years now. And across all of those eleven years, I have seen the fallacy of UA sniffing and the inherently flawed assumptions that practice entails. As Jon Snook said, “[UA sniffing] is akin to asking but accepting the answer as truth and not testing the validity of the answer.”

    At least with a proper feature detection library, we’re striving constantly at verifying the validity of the claims made by browsers, because _we know_ that browsers lie. They lie _all the time_, that’s why feature-detection is so important: to minimize the assumptions we need to make in order to make future-proof, not-web-breaking compromises and decisions when designing and building websites. UA sniffing scoffs at doing things right, and relies on _nothing but_ assumptions.

  16. Posted January 31, 2011 at 9:20 am | Permalink

    I’m split. Faruk makes points that I agree with. UA sniffing is catastrophic for the open Web because people don’t bother to do it in a clever way (budgets, timelines, etc.). A browser has a market share of 1%? Let’s not bother identifying its version at all, and let’s exclude it. I understand that you are proposing feature detection as a fallback mechanism. In the desktop world, it *might be* ok; in the mobile world, I have the feeling we are entering very wild territories.

    But before going down that road, what is the right level of granularity for version detection? Basically, what do you call *versions*? :)

    Check also
    http://my.opera.com/karlcow/blog/index.dml/tag/opentheweb

  17. Posted January 31, 2011 at 9:28 am | Permalink

    Faruk:

    You’re inadvertently making my point for me, albeit through a leap of logic that I’m trying to avoid. You make it and then impute it to me, which really isn’t very sporting, as arguments go. You (not I) have conflated renderer versions (webkit revisions, whatever) with UA versions — or you’ve assumed that by “UA version” I meant something other than “UA version”. I assure you I did not = )

    Browser vendors ship stable UA’s with particular versions. The features in them don’t change. I’m suggesting we hook our low-water-mark detection on ONLY UA + VERSION, not presence of “webkit”, renderer version, or any other sort of signifier. It keeps us out of jail in the “I think this is like that, but I’m not sure” case that you’re so worried about.

    Please re-read my post with that distinction in mind. I think you’ll find we share the same goals.

  18. Posted January 31, 2011 at 10:14 am | Permalink

    Alex,

    Please provide an example of what you think is a safe “ONLY UA + VERSION” string that will accurately represent a set of features. Because I can’t come up with one, without omitting the vast majority of the 637 different known UA strings on the Internet today.

  19. Posted January 31, 2011 at 10:35 am | Permalink

    At some point it boils down to – are there valid reasons to be able to detect the UA? I don’t believe there’s a single JS library that doesn’t have some bit of UA detection. On browserscope.org and jsperf for instance we really want to get it right, and good gawd it’s nigh impossible today. Should we not be able to have sites like this?

    I think this is one of those simple, elegant ideas that no one’s had the nards to support in light of “best practice” and “past failures” rhetoric. Sure there are reasons why UA detection can suck (browser wars/competition, etc..) – but an overwhelming one was that it was done poorly because string matching is easy to get wrong.

    With “compatibility” modes and things like that, I wonder if it would need to split that way (i.e., would IE8 in “compatibility” mode be a different guid than IE8?), so that’s my only concern with a guid.

    Bad code is bad code. Device and UA matter to developers. If we can make detection easy, I believe we’d avoid some of the causes of pitfalls in the past. We could still allow UAs to send bogus guids, but if by default they sent something easy to look up, oh man, that would save us all from some of the harebrained code we’ve written to detect UA.

  20. Yeroc
    Posted January 31, 2011 at 11:53 am | Permalink

    Faruk,

    Just use the complete UA string. As you point out, anything less is problematic. There may well be 600+ unique ones around but if the detecting and caching is done server-side that’s hardly a big number.

    Corey

  21. Posted January 31, 2011 at 1:39 pm | Permalink

    Faruk:

    It doesn’t matter if you omit most of the UA’s, it only matters if the majority of *users* are getting the fast path. The set of UAs you need to have caches for is quite small, and therefore so is the list of UA + versions.

    This isn’t just about being right, it’s about performance and *not* being wrong.

  22. Posted January 31, 2011 at 2:30 pm | Permalink

    I guess this is my more complete response (I agree with you Alex):
    http://mail.dojotoolkit.org/pipermail/dojo-contributors/2011-January/023487.html
    and
    https://github.com/kriszyp/ua-optimized

  23. Bill Keese
    Posted January 31, 2011 at 3:02 pm | Permalink

    So, for example a library could have pre-built caches for IE 8.0.6001.18702, FF 3.6.13/win, Chrome 8.0.552.237/win, and iPhone safari 5.0.2? Seems like that will work great for a few months until the next maintenance releases for the browsers, but then the cache needs to be refreshed or it’s just dead weight.

    Alternately, did you mean to have a cache for IE8, FF 3.6, and iPhone OS4_2, assuming that maintenance releases won’t remove features or add bugs? That could be used for much longer, although eventually it too would grow stale.

  24. Posted January 31, 2011 at 3:10 pm | Permalink

    Hey Bill,

    Yeah, the second thing. Maintenance releases don’t change features. Point and sub-point releases do, though. In any case, I think we could even be relatively more specific if there’s a problem with broad sub-point versioning in practice (although, as I said, I doubt there will be).

  25. Posted January 31, 2011 at 3:46 pm | Permalink

    Alex,

    You still haven’t shown me how you can be right without making assumptions that I can guarantee you will be wrong at some point, whether they’ve been wrong in the past (as I have witnessed) or will be wrong in the future (as history repeats itself).

    The error ratio on UA sniffing is unbelievably higher than it is with feature detection; the latter’s speed impact is negligible compared to the damage caused to the Web by UA sniffing.

  26. Posted January 31, 2011 at 4:32 pm | Permalink

    @Corey Actually there are far more, especially if you consider mobile user agents. At Yahoo we have a database full of around 10,000 mobile devices. Because user agent strings vary even on one device (because of locale, vendor, versioning, etc.), this has resulted in well over half a MILLION user agents. It’s become pretty crazy to maintain, but is necessary because there’s really no alternative for all these feature phones, which can’t even run JavaScript.

  27. Posted January 31, 2011 at 4:34 pm | Permalink

    Lindsey,

    At some point it boils down to – are there valid reasons to be able to detect the UA? I don’t believe there’s a single JS library that doesn’t have some bit of UA detection. On browserscope.org and jsperf for instance we really want to get it right, and good gawd it’s nigh impossible today. Should we not be able to have sites like this?

    These are cases where UA sniffing is inevitable. Browserscope attempts to detect the user agent and its version — and there’s only one way to do that (if you want it to work cross-browser, anyway). I definitely agree with you that UA sniffing (when done right) does have its place in projects like these.

    Alex, on the other hand, is arguing that UA sniffing could (should?) replace feature detection.

    I believe the discussion is about UA sniffing vs. feature-detection in the case where you write code conditionally based on (the lack of) feature support.

    Browserscope does not fall under this category, and is one of the few examples where the use of UA sniffing is actually justified IMHO.

  28. Posted January 31, 2011 at 4:37 pm | Permalink

    Alex,

    As someone who tends to write posts of this nature from time to time, I’m glad that you did. The last thing the web development community needs is for smart people to keep their mouths shut out of fear of comments and misunderstanding from other developers. I know it takes a lot of guts to do it, and it also sucks when people misinterpret or misrepresent what you’ve stated pretty clearly.

    Please keep these coming, if for no other reason than it distracts people from my “controversial” posts. :)

  29. Posted January 31, 2011 at 4:49 pm | Permalink

    The mobile web has a long history of dealing with this sort of thing from well before the days of Modernizr, media queries, feature detection – or even JavaScript on the browser, period.

    User-agent sniffing *on the server* is more or less unavoidable for mobile (and certainly to distinguish mobile from desktop clients in the first place).

    And in the mobile web world, there’s never been this stigma about it either. a) you have no alternative for most of the world’s devices, b) how the hell would mobile users alter their UAs anyway?

    The problems in the mobile environment are different ones: the sheer volume of diverse browsers puts the desktop challenge to pathetic shame, and carrier networks enjoy completely messing with HTTP headers and bodies of both requests and responses if you don’t defend against them adequately.

    For newcomers to the wild & wonderful world of the mobile web, you might like to take a look at:

    http://deviceatlas.com (server-side capabilities DB)
    http://wurfl.sourceforge.net (an alternative)

    Neither of these have quite the level of detail (yet) that a contemporary HTML5 web designer might expect, but I think they’re working on it…

    Also, transcoder chaos & mitigation:

    http://wurfl.sourceforge.net/manifesto/index.htm
    http://www.w3.org/TR/ct-guidelines/

    Something to add to the mix, anyway; albeit rather far down your comments thread ;-)

  30. Posted January 31, 2011 at 5:33 pm | Permalink

    *sigh*

    This went where I knew it would. Faruk totally lost the plot, Mathias decided to go aloof and mis-represent me, and James…well…gosh James, that has pretty much zero to do with the topic at hand.

    Considering closing this thread now. We’ve clearly dug past the bottom of this barrel.

  31. Posted January 31, 2011 at 7:35 pm | Permalink

    Delete my comment, please. I was only trying to be helpful.

  32. Posted January 31, 2011 at 10:11 pm | Permalink

    Alex,

    I’m honestly a bit confused, and after reading through the comments, I went back and re-read your post.

    You propose that “feature tests should only ever be run when you don’t know what UA you’re running in.” I think that sounds fair.

    However, how would one reliably determine what UA he/she is running in? For the aforementioned reasons, I would not consider UA string sniffing a reliable method.

    Just trying to understand how else you can *know* the UA. Am I missing something painfully obvious?

  33. Adrian Schmidt
    Posted February 1, 2011 at 1:43 am | Permalink

    Great post Alex!
    As said before, don’t let the haters get to you! :)

    My knowledge of UA strings is rather limited, so I can’t tell whether Faruk’s opinion that it’s impossible to get UA sniffing right is well founded or not.

    However, I would like to protest against the way Faruk and others use guilt by association. Just because it’s possible to misuse a technique or technology doesn’t mean it’s impossible to use it for good.

    I mean, just look at JavaScript.

    Another thought: Looking at how Modernizr (which I use and love) puts feature-detect info on the html tag, could we develop a standard where browsers do this natively?

    It would of course have to be opt-in so it doesn’t break existing sites, but that could be managed with a header or meta tag (just like Chrome Frame).

    Of course, browsers could lie, but would there be a reason for them to?

    Not revealing a feature that exists would be plain stupid, and “revealing” a feature one doesn’t have would be useless and possibly break the user’s experience.

  34. Posted February 1, 2011 at 5:35 am | Permalink

    @Alex: UA sniffing has always been based on the assumption that the dev knows best. As a user of a minority browser (Opera), I can assure you that they are generally incorrect. 90% of the time, if something doesn’t work on a page in Opera, the solution is to ID as Firefox/IE. That’s the trouble with UA sniffing — most of it is done poorly and incorrectly. It has been, and continues to be. I fully accept that I’m more leery than most about a solution that sounds like “cached UA sniffing”, due to my Opera roots.

    I think it would be meaningful if you showed us a good number of the feature tests that you’d like to pre-cache and bind to a UA. That could potentially put my mind at ease. I would make sure your examples are bullet-proof. I’m personally tired of not getting functionality X or being blocked from service Y because some developer doesn’t know his stuff.

    A counter-example (a case for feature detection):

    Github’s slick new repository navigation doesn’t work in Opera. This is solely because Opera doesn’t support `history.pushState`. Based on their feature-detecting implementation, if Opera released a dev version of Opera today, it would simply start working with no changes or updates. How would your proposed solution respond in a similar situation?
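
    For illustration, the kind of capability check GitHub’s navigation relies on is a one-liner. This is a hedged sketch, not their actual code; `win` stands in for the global `window` object so the check can be exercised anywhere:

    ```javascript
    // Feature-detect pushState support before enabling AJAX-style navigation.
    // A browser that gains the API later passes this check with no code changes.
    function supportsPushState(win) {
      return !!(win.history && typeof win.history.pushState === "function");
    }
    ```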

  35. Posted February 1, 2011 at 6:17 am | Permalink

    Alex,

    […] Mathias decided to go aloof and mis-represent me […]

    I take it you’re referring to this comment of mine:

    Alex, on the other hand, is arguing that UA sniffing could (should?) replace feature detection.

    While I agree I was being a bit dramatic and could’ve thrown in a “partly”, I’m not sure what else to make of your article. After all, this is what you’re suggesting:

    [F]eature tests should only ever be run when you don’t know what UA you’re running in.

    To which I responded that’s the thing — you can never be sure.

    P.S. Faruk wrote a lengthy reply I wholeheartedly agree with: http://farukat.es/journal/2011/02/499-lest-we-forget-or-how-i-learned-whats-so-bad-about-browser-sniffing

  36. Posted February 1, 2011 at 9:10 am | Permalink

    Hey fearphage: I leave building that as an exercise to the reader. Seriously, not hard. I think Kris Zyp might have already done it, in fact. He’s badass = )

    Mathias: you’ve steadfastly asserted that you can never be sure…sans evidence. Remember, the point is to fast path what you have exact matches for. When you don’t have an exact match, fall back. It’s a much higher percentage proposition than traditional UA sniffing. Also, it’s not like feature tests are bullet-proof. Lots of browsers *appear* to have APIs that are then broken. If a new browser shows up with an existing-but-busted feature, you then have to add a test for it if you want to work around it. Both processes involve error. Pretending they don’t is deeply unconstructive.
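
    The fast-path-with-fallback shape being argued for here might look something like the following sketch. The cache key and its contents are invented for illustration, not a real browser profile:

    ```javascript
    // Precomputed feature-test results keyed on exact UA strings.
    var featureCache = {
      "known-exact-ua-string": { json: true, pushState: true }
    };

    // Exact match: return cached results with zero tests run.
    // Anything unrecognized: fall back to the full feature-test suite.
    function getFeatures(uaString, runTests) {
      if (Object.prototype.hasOwnProperty.call(featureCache, uaString)) {
        return featureCache[uaString];
      }
      return runTests();
    }
    ```

    The point of the exact match (rather than a fuzzy "contains Firefox" test) is that an unknown browser, a new version, or a spoofed string all miss the cache and land on the feature tests.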

    Speaking of deeply unconstructive, Faruk’s rant is head-scratching ’cause he’s arguing sideways, attempting to make it look as though I’m advocating something I’m not, and either generally mis-representing what I’m saying or misunderstanding it. The charming history lesson is particularly insulting, given that his entire point seems to be “don’t worry your pretty little head about performance”.

    Simply amazing. Guess I’ll have to respond by dissecting actual, real-world UA strings and how you can test them in a way that’s useful for knowing if you can use a feature-test cache. *Sigh*

  37. Posted February 1, 2011 at 9:51 am | Permalink

    ISTM a performance and compliance benchmarking suite might be useful here. One could compare the performance, comprehensiveness, and accuracy of Modernizr & jQuery’s FD against other methods, using a realistic cross-section of common and uncommon browsers and configurations. Might encourage some healthy competition.

  38. Posted February 1, 2011 at 3:00 pm | Permalink

    @Craig: I’ll ping Mike Samuel and see if he’s done any analysis on caja compiled output based on JSKB results (http://googlecode.blogspot.com/2010/06/reversing-code-bloat-with-javascript.html)

  39. Posted February 1, 2011 at 3:12 pm | Permalink

    Alex,

    You’re arguing that we should use UA sniffing instead of feature detection when we don’t know what UA we’re in, and your primary motivation (as driven home in your original post) for this is performance. You complain about a 30ms delay as if it’s the end of the world for that website; what if those 30ms result in different resources loaded and fewer HTTP requests made, and are thus possibly gained back by more efficient code? Then what?

    Your whole focus point is performance. I don’t think it’s a bad thing at all to be extremely aware of performance and spend lots of time trying to improve on that front, but I’m not going to ignore the vast number of problems caused by UA sniffing which, factually, has an extremely high failure rate. Most people doing UA sniffing simply do it wrong, or not well enough. If you want to educate them to do better, that’s fine—I would support you in that, even—but it’s folly to think that you’ll suddenly transform the (tens of) thousands of web developers doing it wrong by talking about the performance hits incurred from feature detection.

    The thing is, UA sniffing has such a horrible track record that I simply don’t trust it in the hands of the world at large, period. I’ve had this discussion again on Twitter today, with a grand total of four people. Those four represent half of all the people I know online whom I trust to write good UA sniffing code. Everyone else? I’d rather they use techniques that are far less harmful.

    Lastly, another reason I’m so pro-feature detection is that it has a clear and powerful roadmap laid out (whereas UA sniffing does not, and cannot). With feature detection, we’re getting better and better at making our tests produce really accurate results across the board, and it forms the foundation of doing very powerful and scalable dynamic resource loading—stuff that cannot be done with UA sniffing alone anyway. With this dynamic resource loading, we can optimize performance dramatically, offsetting the cost of feature detection easily. That’s why it’s the future of the web, and UA sniffing is the past.

  40. Posted February 1, 2011 at 3:14 pm | Permalink

    Oh and lest I forget: seriously, 30ms?! So much bigger fish to fry, man.

  41. Sasha Sklar
    Posted February 1, 2011 at 11:48 pm | Permalink

    If you’re trying to save $500 a month, every $20 counts.

    These questions have been asked:

    – What happens if the browser or user spoofs their UA?
    – What happens when a new browser comes out?

    My questions are:

    – What happens when you send a desktop site to a feature phone?
    – What happens when you try and run Modernizr on a feature phone or an old Blackberry and Modernizr is the straw that breaks the camel’s back?
    – What happens when the combination of media queries and clever style sheets doesn’t scale beyond a blog POC and never will?

  42. Posted February 2, 2011 at 3:39 am | Permalink

    Thanks Alex. Right to the point. Awesome.

  43. Posted February 2, 2011 at 2:41 pm | Permalink

    …But apparently going to Local Storage is also expensive.

    It’d be great to have some testing done on this assertion since a LS-based solution would solve a lot of the problem here, though not all.
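
    For what it’s worth, the LS-based solution under discussion could be sketched roughly like this: key the cached results on the full UA string so a browser upgrade invalidates the cache automatically. Here `storage` stands in for `window.localStorage` and `ua` for `navigator.userAgent`; names and the cache key are assumptions:

    ```javascript
    // Return cached feature-test results when the UA string matches what was
    // stored; otherwise run the tests once and persist the results.
    function cachedFeatures(storage, ua, runTests) {
      var raw = storage.getItem("feature-cache");
      if (raw) {
        var cached = JSON.parse(raw);
        if (cached.ua === ua) return cached.results; // cache hit: no tests run
      }
      var results = runTests(); // first visit or upgraded browser: pay once
      storage.setItem("feature-cache", JSON.stringify({ ua: ua, results: results }));
      return results;
    }
    ```

    This still runs the full suite on the first visit, which is the criticism raised in the post: if the tests are expensive enough to cache, they’re expensive enough to avoid in the common case.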

    I do agree client-side FT creates a lot of wasted cycles in aggregate, but I think this is an area where any wins from trying to solve this problem would be quickly overshadowed by maintenance headaches and unforeseen complications. As a long-time sufferer at the hands of UA sniffers (read: “Opera user”), the move towards FT was glorious. I’ll gladly pay 50ms/page for scripts that actually work.

  44. Posted February 3, 2011 at 1:59 pm | Permalink

    Alex,

    [Y]ou’ve steadfastly asserted that you can never be sure…sans evidence.

    What ‘evidence’ do you need other than the fact that UA strings can easily be spoofed?

    I know you think UA spoofing deserves to break sites, but I just have to disagree.

    Remember, the point is to fast path what you have exact matches for.

    What good is an exact match if there’s no way to tell if the UA string is correct or not?

  45. Posted February 3, 2011 at 3:39 pm | Permalink

    hey Mathias,

    Most browsers don’t provide a user-accessible way of spoofing UA strings. I know Chrome, FF, and IE don’t. In browsers that do, it tends to be a user-initiated response to bad UA detection in the first place. If you’re going to argue that we shouldn’t do any caching to help nearly everyone because we might disadvantage users who choose to *explicitly* shoot themselves in the foot…well…I just think that’s a bad engineering decision and we’ll disagree about it.

    Everything has failure scenarios; every one of these tests, be it UA or feature testing, can break in the face of a UA doing something you don’t expect, and it’s all a question of playing the odds. With very tight UA tests (as I present in my next post and as I alluded to here), I like the odds for building caches without breaking folks.

    Regards

  46. Posted February 8, 2011 at 9:17 am | Permalink

    And what about lazy feature testing?
    http://longtermlaziness.wordpress.com/2011/02/03/lazy-feature-testing/
    (code: https://github.com/DavidBruant/LazyFeatureDetection)
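
    The lazy approach in the linked post might be sketched as follows (a simplified illustration, not the linked code): each test is exposed through a getter and only runs the first time something reads it, after which the getter replaces itself with the computed value.

    ```javascript
    // Register a feature test that runs at most once, on first access.
    var has = {};
    function addLazyTest(obj, name, test) {
      Object.defineProperty(obj, name, {
        configurable: true,
        get: function () {
          var result = test();
          // Replace the getter with the plain cached value.
          Object.defineProperty(obj, name, { value: result });
          return result;
        }
      });
    }
    ```

    Tests that no code path ever consults cost nothing, which shifts the cost from page load to first use.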

5 Trackbacks

  1. [...] Reduce time spent on feature detection Posted on February 2, 2011 by Roberto Ok, I’ll go first: feature testing is motivated by a desire not to be busted, particularly in the face of new versions of UA’s which will (hopefully) improve standards support and reduce the need for hacks in the first place. Sensible enough. Why should users wait for a new version of your library just ’cause a new browser was released or because you didn’t test on some version of something. Cutting The Interrogation Short | Infrequently Noted [...]

  2. [...] Alex’s first post prompted much discussion and requests for clarification (is he saying user agent sniffing is more preferable?), which resulted in a second post with numbers to back up his position, along with more clarification, where he seems to be advocating a form of user agent detection on the server-side. Which is actually what many in the mobile world have been doing for quite some time now (i.e. WURFL), simply because there was no other alternative at the time (and the devices couldn’t run JavaScript). [...]

  3. [...] this series of screencasts I present my response to Alex Russell’s recent blog posts over the cost of feature testing. (mp4 available [...]

  4. By Lazy feature testing « Long-term laziness on February 8, 2011 at 9:28 am

    [...] Two very interesting articles ([1] [2]) made me realize that only old browsers do not have JavaScript getters and [...]

  5. By Assumptive Development | White Fire Creative Design on February 13, 2011 at 5:40 am

    [...] may lean towards feature detection, let it not be the only recourse. Alex Russell, for example, speaks of using UA detection as a first line of offence. Use it to determine capabilities among a known subset of browsers and [...]