
Testability as observability and the Accessibility Object Model

I attended a talk today by Rob Dodson on some proposals for the Accessibility Object Model that are trying to add APIs for web developers to more easily manipulate accessibility features of their apps and pages. Rob went through quite a few examples of the benefits the proposed APIs would bring, from simple type checking to defining semantics of custom elements and user actions. Unsurprisingly, the one use case that stuck out for me was making the accessibility layer testable.

It’s commonly cited in accessibility circles that only a small fraction of potential accessibility issues, typically around 20%, can be checked automatically: colour contrast, alt text on images, and page structure, for example. I’ve used the axe-core library for this in my own automation work, and it’s been quite useful for flagging potential issues. The other 80%, which can’t be checked as easily, represents the wide range of human experiences, abilities, and intentions.
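
To be concrete about the kind of automated check I mean, here is a minimal sketch in the style I’ve used, assuming axe-core is loaded in a browser-like test environment and a Jest-style runner provides it and expect:

it("has no automatically detectable accessibility violations", async () => {
  // axe.run scans the page against its rule set and reports any violations,
  // each with the offending nodes attached.
  const results = await axe.run(document);
  expect(results.violations).toEqual([]);
});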

I doubt the proposed AOM APIs would tip that balance very much, but they did look like they’d add a useful, standard way to get information about the accessibility properties of an element. That seems especially important given how many points there are at which the semantics of a tag can be defined, and how each of those interacts with the others. I could see the example Rob gave for testability being put to use something like this:

// Query the computed accessibility node for an element...
const node = getComputedAccessibilityNode(...);
// ...and assert on one of its exposed properties, here the computed role.
expect(node.role).toEqual("button");
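
In a unit test I could see stringing a few of those checks together, something like the sketch below. The element and the extra property names (name, checked) are purely my own guesses for illustration, not anything confirmed by the proposals:

// Hypothetical custom element and property names, for illustration only.
const toggle = document.querySelector("#dark-mode-toggle");
const node = getComputedAccessibilityNode(toggle);
expect(node.role).toEqual("switch");
expect(node.name).toEqual("Dark mode");
expect(node.checked).toEqual(false);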

There was a long list of properties that would be exposed on this node, all of which could be checked in unit tests or any other context. This was really the only thing he talked about that directly touched on testing, which made me think: is observability the same as testability?

A quick search online for testability turned up a fair bit about design principles like SOLID, but principles like that seem only to be about making things less complex. I can imagine products that have a simple internal architecture but are still intractable as a subject under test. The other half of the resources online talk about testability from a scientific standpoint, which leads to falsifiability. I’m going to be completely simplistic for a moment and say that’s what the expect assertion guarantees: falsifiability, by answering a true-or-false question. Whether it’s the right question, well…

I think there’s a good argument to be made that observability is one of the most important dimensions of testability. If you can’t see what’s going on with a product, you have no hope of saying anything cogent about it. Better APIs for querying an element certainly contribute to that.

Observability alone might have been all I had to work with as an astrophysicist, but with software we can, and should, do better. Testing is all about investigating what happens in different situations, which requires some kind of control over what is happening and when. The AOM APIs certainly give the developer a lot more control in defining the interactions in the first place. Some of the new semantic events could give a tester a different type of control for simulating user actions, though it sounded like they are still very tentative. The key, for me, is that any fancy custom accessible elements still need to provide ways of poking and prodding them from the context of an automated check to be testable. I not only need to see what happens, but I need to be able to see what happens given whatever arbitrary input my heart desires.
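
To put the control half into (very speculative) code: the sketch below assumes a semantic event name like "accessibleincrement" and assumes a check could dispatch it synthetically, neither of which sounded settled in the talk.

// Hypothetical custom slider element; the event name is an assumption, not a confirmed API.
const slider = document.querySelector("#volume-slider");
// Simulate the semantic action an assistive technology might send...
slider.dispatchEvent(new Event("accessibleincrement"));
// ...then observe how the component responded, e.g. through its ARIA state.
expect(slider.getAttribute("aria-valuenow")).toEqual("51"); // assuming it started at "50"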

Unfortunately, and perhaps predictably, of the five major motivators for the AOM proposals that Rob reviewed, the testability work was explicitly at the bottom of the list. As a tester it’s always a bummer to hear that sort of thing, but I hope it only reflects how marginal a change this is over existing ways of probing the accessibility attributes of a page. As long as those attributes remain accessible that way, I can understand it. If, however, the accessibility information that gets hidden away in the AOM (touted as one of the perks of the proposals) becomes inaccessible to tests in the meantime, then that’s a problem.

Testability in software = observability + control. What am I missing?
