Comments for Performance Innumeracy & False Positives
Would be nice if there were a Feature API/spec, so that instead of has.js or UA sniffing you could just call (for example) Browser.has('html5:video').
These are easy flags for browsers to enable or disable as they release new features, and they're easy for JS developers to check. Again, the cost is precomputed prior to loading.
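Something like this, say (the API is invented, of course; that's the point of the suggestion):

// Hypothetical Feature API: Browser.has() doesn't exist anywhere yet.
// The browser itself would precompute these flags at release time.
if (window.Browser && Browser.has('html5:video')) {
  showNativeVideo();      // hypothetical app code: use <video> directly
} else {
  loadFallbackPlayer();   // hypothetical app code: e.g. a Flash shim
}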
Fascinating post. UA is not the answer; what you really need is real-time DEV CAP (device capability). We're getting ready to release an Android browser that allows you to not only interact with the device but also gather real-time HTTP traffic performance stats from inside the browser. Here's a sample of what the data will include: http://www.5o9mm.com/har/viewer/v.pl?path=accounts/5o9/android-02-04-2011-18-27-55-GMT-infrequently-org-2011-02-on-performance-innumeracy-false-positives.har
This is your blog post accessed from an Android device - you can see some of the dev cap info, carrier and also real time geo location in addition to the page elements.
The full release will include more detailed information and also support a JavaScript Mobile Performance library that will allow for automated performance testing.
Cheers,
Peter Cranstone, co-inventor of mod_gzip
Thanks for this article, you're giving a well-researched voice to concerns I've had for a long time with feature detection.
The worst thing that I've seen so far, and that has haunted us in Prototype.js for a while, is that if you test for Java on IE when Java is not installed, a dialog box pops up asking if you want to install Java. That's not even measurable in performance terms, as it's a complete disruption.
While I hope that browser vendors refrain from this in the future, you never know when feature detection might cause similar behavior, trigger a bug, etc. It's extra code that has to be executed, and by definition it's sometimes really "tricky" code, because it's testing something that's not there. Plus, you run into the issue of false positives, etc., etc.
I really like your approach of pre-cached results with feature tests as a fallback. Awesome idea.
I'm all for things that'll let you test/cache faster! Hope this post didn't come across as a "you must do it this way" sort of thing. It was meant as a generic case for caching and for doing less work when you know you don't have to do it. Excited to see how your browser does!
Regards
Didn't that number used to be 100ms? Maybe that was eons ago. Have humans learned to expect smaller latency since then? I could believe it - after all, 16ms is an eternity when it comes to audio latency, our devices keep getting faster, we're consuming more and more caffeine every year, etc.
One question. Where did the 16ms number come from? I don't see a reference.
My only apprehension about this method is that the cost may simply be shifted to a new spot (albeit then cached).
When you build your table of pre-cached browser UA strings, you'll likely not want to spend too much by way of raw bytes to do so (since these have to be downloaded quickly, so the correct polyfill or interface can be loaded based on the feature support early in the site load).
So the obvious solution (and one you alluded to) is to only put the most popular browsers in your cache: IE6-9, a few FFs, a few Safaris, a couple of the latest iPhone and Android UAs. That way the most people get the shortcut.
The only problem I can see with that is that the slowest browsers are not the most popular ones. So we may be taking a shortcut the majority of the time in a place where it didn't actually matter to begin with, and then end up running the slow feature tests on the old BlackBerry device where the shortcut really would have come in handy.
Which is OK, because maybe we could just switch our shortcuts around to ignore Chrome and new IEs and Safaris and really target old browsers instead (since they're the ones that are probably going to end up needing extra treatment anyway). I think this list changes depending on your use case, though. I just wanted to point out that 90% coverage of browsers might not do you the most good.
Your code (just the part that matches UA strings at the end, which admittedly leaves out IE7 and 8), wrapped in an immediately invoked function expression, compiles to 293 bytes gzipped.
That's a latency-free 293 bytes, since it's part of something you already downloaded, but it's still not free. I'd think you'd have to factor the cost of those 293 bytes (likely more, since there should probably be quite a few more browsers in the pre-cache) and how long they take to download into your equation first.
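For reference, the shape of what we're weighing is roughly this (a sketch only; the regex and the cached values are illustrative, not exhaustive):

// Sketch: strict UA pre-cache with feature tests as the read-through fallback.
var cache = {};
var ua = navigator.userAgent;

// Strict match: anything unrecognized simply falls through to real tests.
if (/^Mozilla\/5\.0 \([^)]*Windows NT 6\.1[^)]*\) AppleWebKit\/534\.\d+ \(KHTML, like Gecko\) Chrome\/9\./.test(ua)) {
  cache = { canvas: true, video: true, localstorage: true };
}

function has(feature, test) {
  if (!(feature in cache)) {
    cache[feature] = test();  // read-through: run the test once, keep the answer
  }
  return cache[feature];
}

// Usage: pays the test cost only on UAs we didn't pre-cache.
if (has('canvas', function () {
  var el = document.createElement('canvas');
  return !!(el.getContext && el.getContext('2d'));
})) {
  // draw things
}

Anything the strict match doesn't recognize just pays the normal feature-test cost, so a false negative only costs speed, not correctness.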
TL;DR:
I think some combination of both techniques is ideal, much like you are saying, but I think that depending on your use-case, it changes each time and may be entirely unpredictable. Hooray.
Performance work is always a tradeoff. That you have to make hard choices and give something up is no shock = )
I don't really understand Faruk's comments at all. This article is showing techniques and reasons to care about specific performance issues that combine to either produce the best or worst user experiences.
The general point about tailoring your application to the environment is valuable and valid. I think in a lot of cases we're just happy to see that something works; as developers we focus purely on the "Aha" moment and rarely use our own products as a fresh user would (meaning different browsers, not a quad-core Mac Pro, etc.).
Sorry, yeah, I implicitly meant visual latency. For other use cases, 16ms is faaar too long, and even visually it's a far upper bound. Lower is always better, but if your screen only blits at 60Hz, you get nearly 16ms of execution time to play with.
Sorry I wasn't clearer.
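For the record, the number is just the per-frame budget of a 60Hz display:

// A 60Hz screen repaints every 1000/60 ms; stay under that per frame.
var frameBudgetMs = 1000 / 60;  // ≈ 16.7ms of work before you drop a frame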
DESKTOP Opera/9.80 ($OS; U; $LANGUAGE) Presto/$PRESTO_VERSION Version/$VERSION
MOBILE Opera/9.80 ($OS; Opera Mobi/$BUILD_NUMBER; U; $LANGUAGE) Presto/$PRESTO_VERSION Version/$VERSION
MINI Opera/9.80 (J2ME/MIDP; Opera Mini/$CLIENT_VERSION/$SERVER_VERSION; U; $LANGUAGE) Presto/$PRESTO_VERSION
But I'd agree with Alex here that the combo of regular expressions and support-lookup hashes introduces enough extra code that its cost on the wire would probably outweigh the runtime performance benefit.
However, if this happens on the server side, then we're mostly in the clear on that front.
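For concreteness, strict patterns for the templates above might look something like this (a sketch; the capture groups just mirror the template fields, and these are not production-hardened):

// Sketch: strict patterns for the documented Opera UA templates above.
var DESKTOP = /^Opera\/9\.80 \(([^;]+); U; ([^)]+)\) Presto\/([\d.]+) Version\/([\d.]+)$/;
var MOBILE  = /^Opera\/9\.80 \(([^;]+); Opera Mobi\/([^;]+); U; ([^)]+)\) Presto\/([\d.]+) Version\/([\d.]+)$/;
var MINI    = /^Opera\/9\.80 \(J2ME\/MIDP; Opera Mini\/([^\/]+)\/([^;]+); U; ([^)]+)\) Presto\/([\d.]+)$/;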
Worth noting: all of this is very closely aligned with the JSKB idea: http://google-caja.googlecode.com/svn/trunk/doc/html/jskb.html
But thus far the problem with that is that everyone has their own UA-parsing logic. As long as people parse UAs differently, it's better for them to be using reliable feature-detection code. But I think we can solve the UA-parsing inaccuracies as well.
Since this conversation began, I've talked to Lindsey Simon about this. He wrote the UA parser that's in use on Browserscope and some other properties: http://code.google.com/p/ua-parser/ The end goal is basically a port of the regexes and parse code to all primary web languages, plus a free PubSubHubbub-style subscription service (kinda like virus signatures) that keeps you up to date with any emerging browsers. I think having a strong set of regexes with community approval is the only way to execute on this plan.
In general, the idea seems to be: take the good work WURFL has done, expand it to be much better at UA detection, and then expand the capability data to capture the interesting client-side stuff we're curious about.
Anyway, Lindsey and I are quite enthused about this idea, and think it can combo well with clientside feature detection. If anyone else is game, let me know.
Hmm... the amount of code needed here for the caches is relatively small vs. the amount you need for the tests themselves. You can also provide tests for only the most popular browsers and cache results for only the most frequently hit tests (or the ones most likely to do expensive operations). As long as the cache is read-through, you have complete flexibility.
In any case, if you're doing this server side, you already have better options as you can afford hash-based UA lookup and a much more complete UA dictionary, allowing you to skip sending the feature test code in nearly every case anyway.
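A server-side sketch of that idea, in Node-flavoured JavaScript (the UA key and feature profile below are placeholders for a table you'd generate offline and keep current):

// Sketch: server-side exact-match UA dictionary.
var fs = require('fs');

var profiles = {
  'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)':
    { canvas: true, video: true, postmessage: true }
};

function featureScript(userAgent) {
  var known = profiles[userAgent];   // O(1) exact-match hash lookup, no regexes
  if (known) {
    // Known UA: ship the precomputed answers, skip the test code entirely.
    return 'var has = ' + JSON.stringify(known) + ';';
  }
  // Unknown UA: fall back to shipping the real feature-test bundle.
  return fs.readFileSync('feature-tests.js', 'utf8');
}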
Regards
I think there are already projects out there that do something close to what you are suggesting. (Caja Web Tools, embed.js, even MooTools)
I noticed several of your UA sniffs (IE6, IE9, FF 3.6, Chrome 8, ...) fail on UA strings I've tested against. UA strings are tricky, and getting a correct result is a pain.
On a side note, has.js tests aren't necessarily meant to be executed all in one shot. They are designed to allow for lazy testing. This can reduce the initial perf hit by allowing devs to check them when needed and not all up front.
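Roughly like so (the feature name here is just for illustration):

// Register a test; it's stored, not executed, until first asked for.
has.add('video', function (global, document) {
  return !!document.createElement('video').canPlayType;
});

// Later, only on the code path that needs it; the test runs (and its
// result is cached) on this first call:
if (has('video')) {
  // use <video>
}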
I dig profiling environments for features, and conditional builds, but I don't think using the UA alone is the best approach. With all the talk of milliseconds and nanoseconds, I am reminded of something you wrote in '09: "Fast enough is fast enough, and the bottlenecks are elsewhere in toolkits today."
I'm a big fan of what embed.js is doing.
I think you're defining "failure" wrong, or at least differently. The difference between my regexes and what folks normally do is that not matching a particular UA is OK. It's only failure in this world if you're missing the majority of your traffic (too many false negatives) or if you falsely match a UA that you shouldn't. Remember, the inversion I'm pulling here is moving the emphasis away from UA testing that needs to fingerprint every UA to testing that needs to get the right answer most of the time with zero false positives.
If you've got a case where I'm generating a false-positive, would love to know, though.
Regards
That's what I meant. For example your UA sniff for IE9 won't match IE9 because its UA contains "Mozilla/5" not "Mozilla/4".
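Something along these lines would actually match it (sketch only):

// IE9 reports e.g. "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
// so a strict match has to start from Mozilla/5.0:
var IE9 = /^Mozilla\/5\.0 \(compatible; MSIE 9\.0; Windows NT \d\.\d[^)]*\)/;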
> If you've got a case where I'm generating a false-positive, would love to know, though.
I linked to a general example of a false positive in my previous comment. For a false positive with one of your UA sniffs you can look at this. Other browsers like that can be troublesome because their UA strings are so similar.
Thanks for the IE 9 tip. Fixed in the article body. Somewhat humorously, the fact that my test was busted sort of proves the point that strictly-written tests fail closed and therefore fall back to feature testing, which means that things aren't actually broken, just slower.
As for SkyFire, Sleipnir, and CometBird: AFAICT, their rendering and JS execution environments are the stated versions of Firefox or IE, respectively. Also, the CometBird UA doesn't pass the regex I posted.
Looks to me like we're in good shape: low-to-no false positives (vs. the deployed bulk of browsers) and fast paths most places.
Thanks again for the fix on the IE9 UA.
Regards
Thanks for this blog post, it really clears up what you meant by "I say you shouldn’t be running tests in UA’s where you can dependably know the answer a-priori." I think the method of doing the kind of strict UA matching is a lot saner than 99.999% of what's on the web right now.
Initially, I suspected your suggestion was more along the lines of: https://gist.github.com/810674 (not a strawman, actual production code from http://slides.html5rocks.com). It makes some assumptions about what browsers can do, caches the "test" result...and locks out IE9, for example. A fine example of short-sighted code.
I can see that's not what you're advocating. That's a good thing.
My immediate concern is how this will affect the browser I use on a daily basis, Opera (disclosure: I am also employed by them). Opera is in an interesting position in that it's got, say, 2% global market share on desktop, yet locally and regionally much, much higher (ignoring mobile for the moment, where some countries are as high as 95% Opera), e.g. Russia and the rest of the former Soviet bloc at about 30% (http://gs.statcounter.com/#browser-RU-monthly-201001-201101).
Since it's got such a small market share here in the US, many large sites won't (and can't) justify the QA costs, and outright block the site or serve a crippled version (based on UA sniffing, of course). For example, take Netflix. Despite serving their streaming video via Silverlight (which works with Opera), they outright block the UA. Lucky for me, I can easily tell Opera to "Mask as Firefox" and suddenly my UA is "Mozilla/5.0 (Windows NT 6.1; U; en; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6". Yay, I get to stream Dirty Jobs now.
In these situations, I'm now Firefox and a UA matcher like the one described here is going to tell me that I have access to the File API (or whatever...), except I don't. Not very awesome.
So let's consider the things you're scared about: web developers are doing the wrong thing, and you're employing a hack to get around it. Fair enough. But the technique I'm describing, and the location in the ecosystem where I'm advocating its use, is way upstream from the problem you're hitting. Hopefully, by doing things the way I'm describing, we can keep apps from turning browsers away in the first place, since the libraries and tools they depend on "Just Work (TM)" in browsers they don't understand.

The question of what a bit of code should do in the spoof-to-get-around-bad-UA-detection case isn't something that's even up for consideration here. Nobody's advocating that sites should block unknown UAs, and for that matter, nobody's seriously advocating UA spoofing. What libraries should do, then, is pretty straightforward, and our advice to web developers doesn't change: just Do The Right Thing and rely on feature tests. The only addendum is that, when you can, you should also fast-path the common cases. If we advocate for that, then you never hit the problems you're describing in the first place.
Regards
We still don't have a collision that indicates any test that should fail would fail. I'm totally willing to concede that there's some risk here -- as there is in feature testing -- and that we should mitigate as far as possible. I didn't outline other possible approaches as they're not as easy to read in code or as terse, but using hashes of the UA string is one possible alternative for even tighter checks.
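Roughly, e.g. (djb2 here is just a stand-in hash, and the table contents are placeholders for something generated offline):

// Sketch: exact-UA matching via a string hash instead of regexes.
function djb2(str) {
  var h = 5381;
  for (var i = 0; i < str.length; i++) {
    h = ((h * 33) + str.charCodeAt(i)) >>> 0;  // keep it a 32-bit uint
  }
  return h;
}

var knownUAs = {};  // hash -> precomputed feature map
knownUAs[djb2('Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)')] =
  { canvas: true, video: true };

var precached = knownUAs[djb2(navigator.userAgent)];  // miss -> feature-test as usual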
As for IE compat modes, we should find out! Data is the best baseline for any of these conversations.
I want to hope so, but since this series of posts was based on incorrect measurements taken from word of mouth instead of your own tests, compounded with improper usage of has.js and rushed/incorrect regexps, it's not looking so great.
If used correctly there is no performance concern and no reason to inject UA strings, and the added uncertainty they bring, into the mix. However, if you find a better/faster way to perform a specific feature test please submit a patch.
I think the fact that you can get false positives shows how fragile this is. I am sure there are more examples of false positives than the few I mentioned, and I wouldn't assume that just because a UA slips through, it has the same feature set. As for CometBird, I didn't say whether it passed or not, only that browsers like it can be troublesome.
> Looks to me like we’re in good shape: low-to-no false positives (vs. the deployed bulk of browsers) and fast paths most places.
I'm skeptical; as you expand/fix your handful of sniffs to cover more browsers and versions, I could totally see things like IE compatibility modes and mobile browsers causing headaches.
Excellent points in this post. Thanks.
Isn't there much more to it? Shouldn't we aim to write clean code, free of branching that isn't really needed? Shouldn't we be looking for ways to distribute code onto different types of devices? We're talking mobile, but really, look at the numbers: tons of tablets expected this year with different resolutions and input paradigms, TVs with embedded browsers, even cars running a browser dashboard!
I want to write stuff for these environments, and for me, shipping everything down the wire feels completely wrong (I know I am targeting a TV, and I even know which one). We should be careful about being stuck with old models and automatically applying them to new contexts. JavaScript is in much wider use now; we shouldn't blindly follow old patterns. Then again, it completely depends on context: if you target traditional websites and high-end phones, maybe feature testing is exactly the right thing :) Just don't jump into it too fast!
I've been struggling for some time with the whole UA detection/feature detection/object inference question...
For instance, GWT (Google Web Toolkit, for those not in the know) uses a combination of UA detection and object inference for its deferred-binding mechanism:
if (ua.indexOf("opera") != -1) {
  return "opera";
} else if (ua.indexOf("webkit") != -1) {
  return "safari";
} else if (ua.indexOf("msie") != -1) {
  if (document.documentMode >= 8) {
    return "ie8";
  } else {
    // ...
I asked why they chose to parse the UA string, which can lie, e.g.:
- Certain add-ons to IE alter/break the UA string (I can't recall which), but I've seen server logs with "...MSIE 6; MSIE 7;..." in the UA string
- Opera (historically) allows the user to switch the UA string
- Sometimes browsers report the wrong string, e.g. Maxthon installed over IE6 reported an IE7 UA string
- Currently no browser adheres to the standard for UA strings (product/version; see RFC 2616); they all claim to be Mozilla/4 (or 5)
I got no answer (from Google).
I realise that using object inference to determine the browser/version can sometimes be broken by client-side code, but personally I've found it to be more robust than using the UA string.
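e.g. the kind of checks I mean (not exhaustive, and certainly not bulletproof):

// Object inference: sniff objects distinctive to a browser rather than
// its UA string. Page script can still clobber these, but they're harder
// to spoof accidentally than navigator.userAgent.
var isOpera = !!window.opera;                             // Presto-era Opera
var isIE8Up = !!document.documentMode;                    // IE >= 8
var isOldIE = !!(document.all && !document.documentMode); // IE < 8 (roughly)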
Just my 2p. :o)
Cheers, Dave
Seems like a lot of people had an easier time understanding your intentions this time around too :)
I think Faruk has let himself down by being a crybaby. No offense, but I had to say it. And concerning his post "Lest we forget": I just assume that stuff like this will be made available through libraries, just like feature tests are, so people like me, who are not experts on these subjects (at least not yet ;) ), will be able to use it just as safely. So why go on crying that us 'regular' people aren't competent enough to use these techniques?
Sorry for the rant. Great post, again :)
/Ad
PS: Your Chrome and Safari if-statements check for Firefox: if (ua.search(FF36) == 0)
Problem solved. HTML5's tagline should be: Using yesterday's technologies, today. (I can't tell you how often I've thanked Microsoft for VML, given its similarities to SVG. Sure it'd be nice not to use it, but I'm glad it's in IE6 even so...)
Have the feature-detection library become a few characters of bootstrapping JS plus an iframe that loads a container page with the actual detection library linked in. Then cache the results of the detection in localStorage.
While this can only be implemented with high performance on modern browsers (postMessage being most important), luckily most mobile browsers belong to that category.
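In sketch form (the detector URL and message shape here are made up):

// Bootstrap sketch: the detector page and its message format are invented.
var KEY = 'feature-cache';
var cached = localStorage.getItem(KEY);

if (cached) {
  window.features = JSON.parse(cached);  // cache hit: no detection code loads at all
} else {
  var frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = '/feature-detector.html';  // container page with the real test library
  window.addEventListener('message', function (e) {
    // real code would check e.origin before trusting the payload
    localStorage.setItem(KEY, e.data);
    window.features = JSON.parse(e.data);
  }, false);
  document.body.appendChild(frame);
}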
That said, I agree that UA-based caching makes sense here. I discussed has.js with Peter Higgins last September, and it was my impression that his plan was to actually build such a mechanism in (then again, this was in Amsterdam, so :)
Cheers, Malte