Weblogs: Web Accessibility

Questioning SiteMorse accessibility coverage

Sunday, May 14, 2006

There are two statements from SiteMorse I find particularly concerning. The first is the standard disclaimer they use about the limits of automated accessibility checking, and the second is the time-saving calculation. Both are so short on detail that they risk confusing clients.

Limits of automated accessibility checking

From the recent SiteMorse / MediaSurface thinly disguised Content Management System sales pitch fiasco:

Note: The range of tests (Web Accessibility Index [sic] WAI) that can be conducted automatically are limited and 100% compliance with automated tests does not mean 100% compliance with the requirements.

It's like those little lies kids exasperatingly tell their parents: we didn't eat all the cookies. I've not seen any public statement from SiteMorse that nails down an accurate figure for how much of the Web Content Accessibility Guidelines the tool claims to test. All we know is that it's not 100%.

That of course raises the question: if it's not 100%, what is it? How much of WCAG 1.0 remains untested or untestable by SiteMorse? That is the accessibility testing still needed above and beyond the mere use of SiteMorse.
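To make the gap concrete, here's a rough sketch (my own illustration, nothing to do with SiteMorse's actual implementation) of the kind of rule an automated checker can decide by machine, and the judgement it can't make:

    # Rough sketch of an automated WCAG 1.0 checkpoint 1.1 check (my own
    # illustration, not SiteMorse's code). A machine can confirm an img
    # element carries an alt attribute; it cannot judge whether the alt
    # text is a meaningful description - that part is always manual work.
    from html.parser import HTMLParser

    class AltChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.findings = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attrs = dict(attrs)
                if "alt" not in attrs:
                    self.findings.append("FAIL: img is missing an alt attribute")
                else:
                    self.findings.append(
                        "MANUAL CHECK: is alt=%r a meaningful description?" % attrs["alt"])

    checker = AltChecker()
    checker.feed('<p><img src="chart.png" alt="chart.png"></p>')
    print("\n".join(checker.findings))

The automated half is satisfied as soon as the attribute exists; whether alt="chart.png" actually helps a screen reader user is precisely the part left over for a human.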

Surely an organisation that has SiteMorse as a supplier needs to know how much testing is outstanding. It's possible to produce an inaccessible page that SiteMorse considers perfect, so the figure is somewhere between 0 and 100%.

My own investigations into the reporting tool itself uncover a number of holes where manual checks are not raised. It also flags manual checks for features that don't exist (for example, flagging scripting warnings - checkpoint 6.3 - when a page contains just an image). Getting these sorts of results from a cursory test is a concern.
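As a guess at how that kind of false positive arises (an assumption on my part - I don't know SiteMorse's internal logic, and the function names here are my own), a tool that raises the checkpoint 6.3 warning unconditionally will flag it even on a page containing nothing but an image:

    # Hypothetical sketch of the false positive described above; this is my
    # assumption about the behaviour, not SiteMorse's actual code.
    def manual_checks_naive(page_html):
        # Raises WCAG 1.0 checkpoint 6.3 on every page, relevant or not.
        return ["6.3: ensure the page still works with scripting turned off"]

    def manual_checks_targeted(page_html):
        # Only raises 6.3 when the page actually contains script.
        if "<script" in page_html.lower():
            return ["6.3: ensure the page still works with scripting turned off"]
        return []

    image_only_page = '<html><body><img src="map.png" alt="Site map"></body></html>'
    print(manual_checks_naive(image_only_page))     # spurious 6.3 warning
    print(manual_checks_targeted(image_only_page))  # [] - nothing to flag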

Local governments must be asking how much of WCAG 1.0 SiteMorse actually tests, and which checkpoints are left for them to check manually (I'd certainly like to hear responses from Local Government sites using SiteMorse).

Only with a complete answer to those questions can Local Government perform adequate manual checks to ensure the accessibility of their websites - so this information must be common knowledge somewhere. Right?

Time saving

A recent press release from SiteMorse, involving the Office of Public Sector Information (OPSI), talks about the time saved by using the SiteMorse tool.

Calculating savings delivered by automated testing: The time needed to manually test 125 pages, as thoroughly as SiteMorse software does, is at least 90 man hours.

So how much accessibility testing is left after the SiteMorse check? Strangely, for a company supposedly so interested in accessible websites, there's hardly a mention. I find it difficult to believe this is just an oversight.

I find the claim that SiteMorse saves 90 man hours in testing 125 pages dubious, especially for 125 pages on the same website. A breakdown of how they arrived at this number would be useful - and it would of course start to highlight what the SiteMorse tool checks, and more importantly what it doesn't check.
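For a sense of scale, the claimed figure works out at roughly 43 minutes of manual testing per page, across 125 pages that almost certainly share the same templates and navigation:

    # Back-of-the-envelope check of the press release figure.
    pages = 125
    claimed_man_hours = 90
    minutes_per_page = claimed_man_hours * 60 / pages
    print("%.1f minutes per page" % minutes_per_page)  # 43.2 minutes per page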

A question to OPSI: how much testing did SiteMorse say was needed after their automated checking was done? Also, what is the list of checkpoints that are manually checked (or rechecked, to eliminate false positives and false negatives) after a SiteMorse run?

It's a concern that local governments are paying too much attention to SiteMorse rankings at the expense of accessibility. Some of the techniques for getting a higher SiteMorse score have included the removal of heading structures from a document - which is detrimental to web accessibility.

When SiteMorse is pressed on the question of how their score matches with the true accessibility of the site, they fall back to saying that basic level accessibility checks are not critical since they only make up 12% of the SiteMorse score. That 12% seems ridiculously low for the purposes of ranking the accessibility of local government websites.
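To put that weighting in perspective (simple arithmetic on the quoted 12%, assuming it is applied as a straight weighting - the actual scoring formula isn't published): a site could fail every one of those basic accessibility checks and still retain 88% of its SiteMorse score.

    # Simple arithmetic on the quoted 12% weighting, assuming a straight
    # weighted sum (the real SiteMorse scoring formula isn't public).
    accessibility_weight = 0.12
    accessibility_score = 0.0    # fail every basic accessibility check
    everything_else_score = 1.0  # ace every other test
    overall = accessibility_weight * accessibility_score \
              + (1 - accessibility_weight) * everything_else_score
    print("Overall score: %.0f%%" % (overall * 100))  # Overall score: 88%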
