
Disabling JavaScript: Asking the wrong question

Saturday, October 16, 2010

On the Yahoo developer blog, Nicholas Zakas has attempted to calculate how many users have JavaScript disabled, and on the strength of his published findings he claims that developers can use the full range of JavaScript functionality.

This 'takeaway' is unfounded and misleading, and, considering that a large number of JavaScript-obsessed developers lack, or prefer not to exercise, critical thinking, it is downright dangerous and astonishingly uninformed.

Zakas calculates that the number of people arriving on the Yahoo homepage with JavaScript disabled is 2% of the measured audience. Unfortunately, he then makes an incongruous leap of logic to conclude that "the overwhelming majority of users has JavaScript-enabled browsers and can therefore take advantage of all the enhanced functionality and dynamic interfaces developers and designers like to create".

Already developers are taking that to heart and using it to cement their approach of building JavaScript-dependent websites, because it's "just two percent" that they can afford to lose.

The non-binary state of affairs

However, Zakas' claim isn't substantiated by the data he provides. He has made two fundamental errors: the first is acknowledged in the post's comments, but not expanded on in any meaningful detail; the second is not mentioned at all.

A browser cannot execute JavaScript it has not received.

Reliable indicators of JavaScript execution

The availability of JavaScript in a browser is not a reliable indicator that published JavaScript code will run in that browser. (It is baffling that a developer of Zakas' visibility fails to grok this.)

There is one substantial reason (other than a JavaScript error) why a fully capable browser with JavaScript enabled still fails to run JavaScript: the browser engine doesn't receive the JavaScript.

This isn't just about network outages.

Let's try a thought experiment. What actually happens when a browser encounters a script tag that references an external JavaScript resource?

Obviously, with JavaScript enabled, the browser will try to request it. But between the browser noticing that there's a reference to a JavaScript resource and getting back the actual resource lie several pitfalls, all out of the web developer's hands (see the sketch after this list):

  1. The domain serving the JavaScript file must be reachable. This tends to catch out sites that host all or part of their JavaScript outside their main domain. Remember, even Yahoo themselves have site outages. Think of Content Distribution Networks, particularly those known to be blocked in certain countries. No domain name resolution, no JavaScript from that domain for the browser to run.

  2. The resource must be retrievable at the specified URL. This is within the hands of the web developer, so they are able to get that part right.

  3. There must be a path, or a series of alternate paths, from the server back to the browser. For a network that was built to survive a nuclear attack, the Web today is still brittle, and outages in the hops between server and browser are still a regular event. No path, or even one irretrievable packet, and the browser has no JavaScript to run.

  4. The network hops along the route could end up zero-lengthing the JavaScript. Certain ISPs have been known to sniff or fingerprint resources, looking for malicious code and stripping any matches. And what counts as malicious is whatever the filtering software has been configured to treat as malicious.

  5. The network provider could filter out JavaScript code that trips up its malicious-code sniffer, preventing the browser from receiving the file by replacing it with an HTTP error response. Certain conservative organisations regularly drop JavaScript resources that inject Flash into the page, regardless of whether that code path is ever actually executed. (Flash is used in some libraries to emulate local storage, or to fudge multiple file uploads.)

  6. Browser extensions could decide to strip out JavaScript, either removing it directly from the DOM or chopping the contents of a JavaScript block down to zero bytes. This behaviour probably cannot even be detected by beaconing from a noscript element, since JavaScript remains nominally enabled.
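
To make that concrete, here is a minimal sketch, and only a sketch, of how a page might check whether an external script actually arrived and fall back to a copy on its own domain. The file names and the ENHANCEMENTS global are hypothetical, not anything from Zakas' post:

    <script src="http://cdn.example.com/enhancements.js"></script>
    <script>
      // enhancements.js is expected to define window.ENHANCEMENTS.
      // If any of the failures above occurred, that global never appears,
      // even though JavaScript itself is enabled and running fine.
      if (!window.ENHANCEMENTS) {
        document.write('<script src="/js/enhancements.js"><\/script>');
      }
    </script>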

It doesn't take a genius to figure out that the above scenarios are all additive. And as the Web continues to embrace diversity and flexibility, not only in the choice and configuration of user-agents but also in the networking mechanisms being used, progressive enhancement remains very much alive as a cornerstone of web development best practice.

Sure, the iPhone has a particularly capable and very functional mobile version of Safari, but the experience of a web site on the device also depends on the stability and speed of the network connecting the browser to the web server.

Just earlier this week I had my MacBook at home connected to the Internet over 3G, and I noted how unusable the (old) Twitter web client was: the network brittleness was severely compounded by Twitter's JavaScript dependency. And that was still slightly better than a month earlier, when I was connecting the MacBook through a different 3G provider on a train heading home.

Object detection

Apart from the browser not receiving the JavaScript to execute, the JavaScript code itself may use object detection and determine that the browser is not suitable to run the core code. This technique, along with unobtrusive JavaScript, is what has relegated the noscript element to the tag scrapheap.

Modern web development best practice encourages the use of object detection as a means for establishing whether a browser is capable of running the supplied JavaScript.
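
A minimal sketch of what object detection looks like in practice (the wrapper function is my own illustrative choice, not from any particular library):

    // Test for the capability itself, never for the browser brand.
    function addClickListener(element, handler) {
      if (element.addEventListener) {
        // W3C DOM event model
        element.addEventListener('click', handler, false);
      } else if (element.attachEvent) {
        // older versions of Internet Explorer
        element.attachEvent('onclick', handler);
      }
      // If neither object exists, do nothing: the baseline,
      // non-JavaScript version of the page keeps on working.
    }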

The need for hand-rolled object detection has been mitigated, in large part, by the use of well-supported JavaScript libraries. But I forecast a strong re-emergence of object detection to deal with browser incompatibilities around HTML5, at least until browser support becomes mostly consistent.
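
For instance, here is the kind of hedged feature test I would expect to re-emerge; the specific features tested (localStorage and canvas) are merely examples:

    // Detect HTML5 features before relying on them.
    var hasLocalStorage = (function () {
      try {
        return 'localStorage' in window && window.localStorage !== null;
      } catch (e) {
        return false; // some browsers throw when storage is disabled
      }
    }());

    var hasCanvas = (function () {
      var el = document.createElement('canvas');
      return !!(el.getContext && el.getContext('2d'));
    }());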

Pre-selected sample

The numbers Zakas provides are skewed in several ways.

The sample itself is pre-selected: it can only count visitors who successfully received the Yahoo homepage and whose browsers received and executed whatever measurement code was used. And the omission of the above scenarios means that the end totals are likely to be under-reported. From the detail Zakas has provided, there is no practical way of gauging whether his numbers are off by a few percentage points or by something more substantial.
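
Zakas has not published his measurement code, so the following is only an assumption about the usual technique: a beacon requested from JavaScript, paired with a fallback image inside a noscript element (the beacon URLs are hypothetical):

    <!-- In the page: -->
    <script src="/measure.js"></script>
    <noscript>
      <!-- Requested only by browsers with JavaScript switched off. -->
      <img src="/beacon?js=0" alt="">
    </noscript>

    /* In measure.js -- counted only if this file arrives and executes: */
    new Image().src = '/beacon?js=1';

A browser with JavaScript enabled that never receives measure.js requests neither beacon, and so never shows up in either column of the count.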

Pseudo-science

Despite the appearance of a disciplined, scientific approach to collating the data, the post fails to follow the typical scientific pattern of clearly documenting the methodology and the assumptions made along the way.

This makes it difficult to understand the actual bias inherent in the collected data and, more importantly, deprives the web community of any way of independently verifying the results, or of producing comparable data to isolate discrepancies.

So what we have is just pseudo-science, and it should not be relied on when making any decision about JavaScript dependency on the World Wide Web.

Asking the right question

I believe developers asking "How many people have JavaScript disabled?" are asking the wrong question. Sure, that can be measured, but the result does not adequately answer "How many people will not be able to use your JavaScript-dependent website?" That is the far more critical question.

The choice of whether or not to depend on JavaScript still comes down to the demographic of your intended audience and the use-cases your site intends to fulfil. Consider the mobile needs of your audience; their network quality and speed; their location and culture. Consider how all of that relates to the context of your website.

Sure, if your website offers a service that is only of use to people with modern desktop or laptop browsers, connecting over solidly stable, non-filtering network connections fairly adjacent to your application servers, then you may have a reasonable justification for making it JavaScript-dependent.

But if any of the scenarios mentioned above apply, then the modern web development best practice of progressive enhancement is definitely a more applicable approach.
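
What that looks like in practice, as a rough sketch (the URLs and element ids are illustrative): start with HTML that works on its own, then layer the JavaScript on top only when the objects it needs actually exist:

    <!-- Baseline: works with no JavaScript at all. -->
    <div id="headlines">(server-rendered headlines)</div>
    <a id="refresh" href="/headlines">Refresh headlines</a>

    <script>
      // Enhancement: hijack the link only if everything we need exists.
      var link = document.getElementById && document.getElementById('refresh');
      if (link && window.XMLHttpRequest) {
        link.onclick = function () {
          var xhr = new XMLHttpRequest();
          xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
              document.getElementById('headlines').innerHTML = xhr.responseText;
            }
          };
          xhr.open('GET', '/headlines?fragment=1', true);
          xhr.send(null);
          return false; // cancel the full-page navigation
        };
      }
    </script>

If the script never arrives, or XMLHttpRequest doesn't exist, the link still takes the user to /headlines the old-fashioned way.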

Progressive enhancement: it's the safety net when things outside your control interfere with your website.

