|Keynote Speech: Beyond Conformance: The Role of Accessibility Evaluation Methods|
The topic I want to address in my speech is the role that accessibility evaluation methods can play in supporting the transition from accessibility viewed as standards conformance to user-centered accessibility. As we will see, this change places additional requirements on how evaluations of websites should be carried out.
Several distinct problems arise when dealing with accessibility.
The W3C/WAI model of accessibility aims at universal accessibility. It assumes that website conformance to WCAG (Web Content Accessibility Guidelines) is the key precondition, and it hypothesizes that a conformant website entails accessibility provided two further conditions are met: that the tools used by the web developer (including CMSs) conform to ATAG (Authoring Tools Accessibility Guidelines), and that the browser and assistive technology used by the end user conform to UAAG (User Agent Accessibility Guidelines). However, since neither of these conditions is under the control of the web developer, the conclusion is that the developer cannot guarantee accessibility, whatever effort they put in. In fact, empirical evidence shows that the link between conformance and accessibility is missing: even conformant websites may fail to be accessible.
Confusion also exists regarding which methods to use. Some regulations require accessibility evaluators to perform a cognitive walkthrough based on 12 general usability principles of the kind normally employed in heuristic evaluation, followed by a subjective assessment and an averaging of ordinal severity levels. In my view, this approach is unlikely to succeed because of its extreme subjectivity and variability, its poor practicality, and its measure-theoretic shortcomings.
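To illustrate the measure-theoretic objection, here is a minimal sketch (the severity labels, ratings, and numeric codings are invented for illustration, not taken from any regulation). Averaging ordinal severity levels can even reverse the ranking of two problems when the scale is recoded in an equally order-preserving way, whereas the median ranks them the same way under every such coding:

```python
import statistics

# Hypothetical example: three evaluators rate two accessibility problems
# on an ordinal scale minor < major < critical (labels invented here).
ratings_x = ["minor", "critical", "critical"]
ratings_y = ["major", "major", "major"]

# Two numeric codings of the same scale; both preserve the order
# minor < major < critical, so neither is "more correct" than the other.
coding_a = {"minor": 1, "major": 2, "critical": 3}
coding_b = {"minor": 1, "major": 4, "critical": 5}

def mean_under(coding, ratings):
    return statistics.mean(coding[r] for r in ratings)

order = ["minor", "major", "critical"]

def median_label(ratings):
    ranked = sorted(ratings, key=order.index)
    return ranked[len(ranked) // 2]  # middle element (odd-length list)

# Under coding A the mean says problem X is the more severe one...
assert mean_under(coding_a, ratings_x) > mean_under(coding_a, ratings_y)
# ...but under coding B the ranking flips: the mean is not invariant
# under order-preserving recodings, so averaging ordinal levels is unsound.
assert mean_under(coding_b, ratings_x) < mean_under(coding_b, ratings_y)

# The median respects the ordinal scale: its ranking is the same
# under every order-preserving coding.
print(median_label(ratings_x), median_label(ratings_y))  # critical major
```

This is the standard argument that ordinal scales admit only order-preserving transformations, under which means (unlike medians) are not invariant.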
As further evidence of confusion, consider the Target versus NFB legal case in the U.S.A., in which the Court decided that the question of whether target.com was accessible could not be answered. There was substantial variability, and a lack of standardization, in the way pages were selected, in the way accessibility was investigated, and in the way conclusions were drawn: witnesses on one side referred to user-performance indicators, those on the other to conformance features.
Additional evidence shows that accessibility evaluations based on a sample of pages (sampling is necessary for all but trivial websites) can be affected by the criteria used to select the sample. The sampling criteria and the purpose of the accessibility analysis are interdependent, leading to large differences in accuracy.
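A hypothetical sketch of this interdependence (the site size, traffic figures, and violation counts below are invented for illustration): on a site where the violating pages are few but heavily visited, a uniform random sample estimates the violation rate over pages, while a traffic-weighted sample estimates it over what users actually encounter, and the two criteria paint very different pictures:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical site: 1000 pages, of which 50 (one heavily used template)
# contain an accessibility violation and attract most of the traffic.
pages = [{"violates": i < 50, "visits": 500 if i < 50 else 5}
         for i in range(1000)]

def violation_rate(sample):
    return sum(p["violates"] for p in sample) / len(sample)

# Sampling criterion 1: uniform random sample of 100 pages.
uniform = random.sample(pages, 100)

# Sampling criterion 2: traffic-weighted sample of 100 page views,
# approximating what end users actually encounter.
weighted = random.choices(pages, weights=[p["visits"] for p in pages], k=100)

# Same site, two sampling criteria, two very different estimates:
# near 0.05 for the uniform sample, far higher for the weighted one
# (about 0.84 in expectation, since the violating pages dominate traffic).
print(violation_rate(uniform), violation_rate(weighted))
```

Which estimate is "accurate" depends on the purpose of the analysis, which is exactly the interdependence noted above.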
My claim is that, to change this state of affairs, we have to focus on standardizing methods, and through them aim at accessibility that is sustainable; in other words, we need to shape and establish effective accessibility processes that can be sustained mainly by their own return on investment.
At least two issues have to be addressed. First, accessibility evaluations have to produce sets of accessibility problems prioritized by their impact: evaluations should identify the problems whose solution makes a difference in accessibility as perceived by stakeholders, so that evaluators and developers can tackle those problems first and optimize their resources. Second, accessibility processes (taking place when conceiving, developing, maintaining, and revamping websites) should be effective and efficient, and these properties should be established through scientific investigation. When these two conditions are met, accessibility methods can be compared and chosen on an informed basis, leading to more accessible websites and web applications, which in turn will positively affect key performance indicators of the underlying business the website supports.
As a consequence, we need a clear definition of what accessibility is and how it should be assessed. The accessibility model discussed in the paper has precisely this role.
Several existing evaluation methods are then reviewed and discussed, a simple taxonomy is presented, and differences that occur when evaluating accessibility rather than usability are pinpointed.
|Keynote Speaker Biography|
|An excerpt of Giorgio Brajnik's curriculum can be found at: http://sole.dimi.uniud.it/~giorgio.brajnik/vitae.html.|