As accessibility advocate Marcy Sutton is fond of saying, JavaScript is a civil rights issue. That’s true whether you’re building the next big web app or a dynamic chart for an article: we want our journalism to be available not just to sighted and able-bodied readers, but to everyone. Yet as interactive developers, it’s often easy to forget about accessibility–and easier still to think that the “visual” part of data visualization means it’s only for sighted or able-bodied users.

Over the last couple of years, I’ve been experimenting with accessibility and assistive technologies for interactive projects here at the Seattle Times. From that experience, I’ve come to believe that it’s not actually that hard to make sure your news apps are accessible. It’s the least we can do, if we’re serious about our commitment to reaching a wider, more diverse audience. Moreover, as Target discovered when it was sued by the National Federation of the Blind, inaccessible sites may violate state or federal regulations.

The following post contains a few of the lessons I’ve learned, and hopefully serves as a good starting point if you’re new to accessibility on the web. I want to stress that I’m not an expert: there are plenty of other resources you should pursue if you find this information useful, such as the Web Accessibility Initiative. If I’ve made any mistakes here, or if you’d like to share additional posts on newsroom accessibility, please let me know!

What is accessibility?

At heart, accessible design means that we want our pages to be friendly to readers who may face sensory, motor, or cognitive barriers to access. We typically think about this in terms of extremes, such as blind or paralyzed users. However, as Microsoft’s Inclusive Design team points out, disability is a spectrum, and almost all of us will land somewhere on it at some point due to age, injury, or context.

I like the idea of accessibility as “inclusive” design, because creating more accessible products tends to create better experiences for everyone. Closed captions may have originally been created for deaf users, but they’re useful for many people, including those who need to watch video quietly or who are learning to read. One-handed controls are appreciated by people with reduced dexterity, but also new parents and bus riders. Improved contrast benefits users with reduced vision, but it also makes a cleaner and easier-to-read UI for everyone.

For the purposes of this article, we’ll sort accessibility into three buckets:

  • Making interactive content work for blind users via a screen reader
  • Providing support for keyboard navigation
  • Ensuring sufficient contrast for low-visibility and color-blind users

Enabling and using a screen reader

Perhaps the easiest way to experiment with accessibility is by using a screen reader on your phone, since both major mobile operating systems come with voice assistance built-in. Rob Dodson from Google has some short videos that detail how to activate and use TalkBack on Android and VoiceOver on iOS. Once activated for the first time, you can use a shortcut to toggle the reader on and off at will, which is useful for quick testing.

Even for a sighted person, using a screen reader on a phone can be a handy skill to have: I like to use TalkBack to browse on the bus or when it’s cold out, so that I can operate the phone without removing it from my coat pocket. Spending just one day a month doing some of your regular reading through a screen reader will give you a new appreciation for how difficult news sites are for blind users to navigate.

On desktops, the situation varies a bit more. All OS X machines also have access to VoiceOver, which can be activated through the System Preferences app. Windows users may use one of several programs, such as JAWS or NVDA. I tend to use the latter, which is free and open-source, since JAWS is extremely expensive.

Most desktop screen readers can be configured to show the voiced text onscreen in addition to being read, which is useful if you don’t want to wear headphones at work. In fact, one of my favorite features of NVDA is the Speech Viewer, which will log out all spoken text (and, if configured properly, will do so silently). Being able to instantly read and look back through the output from the screen reader can dramatically speed up development.

[Screenshot: the NVDA Speech Viewer window. Caption: Turns out it’s really hard to grab a Speech Viewer image without also leaking what you’re working on to the world.]

Easy wins for screen readers

Once you have a screen reader up and running, you’ll notice that the way it handles pages is highly dependent on the document structure. This is part of why it’s important for our HTML documents to be well-formed: by using semantic tags, the browser is able to correctly announce headers, sections, asides, navigation, and other features.

Of course, if we were just building documents, our jobs would be relatively easy. Creating applications that correctly announce themselves via screen reader is a little bit more difficult, but there are some easy wins that we can get by working with the platform:

  • Don’t forget to add alt text to images whenever you can, and avoid the title attribute for any required content. Alt text should describe an image conversationally, not just restate the caption or data directly (e.g., “A chart showing an upward trend in underwater basket weaving classes.”). If an image is purely cosmetic, hide it from readers by setting alt="".
  • Unless you actually mean to link to another page or part of a document, use <button> for interactive elements instead of <a> (or even worse, <div>, which screen readers will simply skip). You can style buttons to look any way you want, but the screen reader will still understand that they’re controls that do something locally, instead of a link to content elsewhere.
  • In fact, you should use native controls (like <select> or <input type="x">) whenever possible. The built-in elements are well-tested, work much better with screen readers, handle keyboard input more reliably, and support features like the disabled state, which will be announced properly. It’s almost never worth giving all that up just for a bit of styling support (a sketch pulling these tips together follows this list).
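
Here’s that sketch, building out a hypothetical chart module from native elements (the class names, file names, and labels are all made up for illustration):

// Hypothetical chart module built from native elements, so screen readers
// and keyboards get sensible behavior for free.
var container = document.querySelector(".chart-module"); // assumed hook
container.innerHTML = `
  <img src="basket-weaving.png"
    alt="A chart showing an upward trend in underwater basket weaving classes.">
  <img src="decorative-flourish.png" alt="">
  <label>Region
    <select class="region-picker">
      <option>Seattle</option>
      <option>Spokane</option>
    </select>
  </label>
  <button class="show-details">Show details</button>
`;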

As interactive news developers, however, part of the job is building dynamic visual experiences that don’t already have convenient semantic tags. In these cases, the role attribute is your friend, as it allows you to specify the spoken meaning of a block of DOM. Setting role="img", for example, tells screen readers that the marked element contains primarily visual content, and that they shouldn’t wade through its individual child elements. This is extremely useful for charts that would be difficult to describe (just remember to add an alt attribute describing what readers would see).
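
Here’s a rough sketch of that pattern for a hypothetical chart container. I’m using aria-label as the text alternative here, which is another widely supported way to attach a description to a role="img" element:

// Hypothetical: treat an SVG-filled container as a single image for
// screen readers, with a conversational description of what it shows.
var chart = document.querySelector(".coal-chart"); // assumed selector
chart.setAttribute("role", "img");
chart.setAttribute("aria-label",
  "A chart showing an upward trend in underwater basket weaving classes.");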

You can also use roles to force an immediate announcement, although you should use this rarely. If you set role="alert" on an element, the screen reader will immediately announce its contents to the user. For less-urgent announcements, you can set the aria-live attribute, which asks the browser to monitor an element for changes and announce them when it won’t be disruptive. ARIA (Accessible Rich Internet Applications) attributes vary in support, but it’s worth reading up on their use if you want to work well with screen readers.
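
As a small sketch, a polite live region for non-urgent chart updates might look like this (the element and message are hypothetical):

// Hypothetical live region: the screen reader will announce changes to this
// element's text when it gets a chance, without interrupting the user.
var status = document.querySelector(".chart-status"); // assumed element
status.setAttribute("aria-live", "polite");

// Later, after the data updates, just change the text:
status.textContent = "Chart updated: showing 2016 data.";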

Finally, a staple of designing news apps is being able to show and hide content. It’s important to be careful with how you do this, since content that’s not “visible” may still be “readable.” Long story short, keep it simple: set display: none or the hidden attribute instead of using absolute positioning or opacity: 0. Conversely, if you want something to be visually hidden but spoken aloud (such as the text version of a graphical headline), using opacity: 0 or a very small clip rectangle is good practice.
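
Here’s a minimal sketch of both cases, assuming a hypothetical element:

var headlineText = document.querySelector(".headline-text"); // assumed element

// Case 1: hidden from everyone, sighted users and screen readers alike.
headlineText.style.display = "none"; // or: headlineText.hidden = true;

// Case 2: visually hidden but still read aloud. Per the advice above,
// opacity: 0 (or a tiny clip rectangle) keeps it in the accessibility tree.
headlineText.style.display = "";
headlineText.style.opacity = "0";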

Keyboard navigation

Here’s another experiment you can try: browse your favorite web page without a mouse. Your tools are the tab key (which will cycle between all links and inputs on a page), the arrows (which scroll and interact with form controls) and the enter key (for “clicking”). How tough is it to navigate your news site this way?

Readers with motor dexterity issues may be navigating your site with a clicker switch or a joystick. Those with RSI may be restricted to the keyboard. Even mouse users often use the keyboard for operating select boxes or tabbing through form fields. If you go with the grain of the platform, writing semantic HTML and using mostly built-in widgets, you’ll get keyboard accessibility for these users for free. Problems usually crop up when we start to roll our own solutions.

Here are a few easy wins to keep in mind:

  • Don’t disable focus rectangles with styles like input:focus { outline: none }. Without a visual indicator of where the current keyboard focus is, users will be completely unable to track which item is active when pressing the tab key.
  • As with screen readers, use <button> instead of <a>, and try not to use <div> for interactive elements.
  • If you must use a non-semantic element for interaction, be sure to set tabindex="0" to enable interaction in “source order” (i.e., the intuitive top-to-bottom order of elements). Likewise, if you cannot use the disabled attribute, be sure to set tabindex="-1" to disable keyboard navigation on controls that can’t be pressed.
  • If you absolutely must design custom controls, make sure they match the keyboard behavior of the existing ones: arrow keys should move between radio buttons in a group and between options in a select box, and tab should move focus between controls correctly (a rough sketch follows this list).
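
For that last case, here’s a rough sketch of a custom control built on a non-semantic element (the class name and behavior are hypothetical):

// Hypothetical custom "button" built on a div: make it focusable in source
// order, give it a role, and mirror native keyboard behavior.
var fakeButton = document.querySelector(".custom-toggle"); // assumed element
fakeButton.setAttribute("tabindex", "0");
fakeButton.setAttribute("role", "button");

fakeButton.addEventListener("keydown", function(e) {
  // Native buttons activate on Enter and Space, so ours should too.
  if (e.key == "Enter" || e.key == " ") {
    e.preventDefault();
    fakeButton.click();
  }
});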

Visual affordances

Finally, when designing graphics we need to bear in mind that not all sighted users are created equal. Vision can vary widely, not just by disability but also by age and by context. When designing inclusively, we may want to start with contrast, color blindness, and motion sensitivity.

Contrast is simply about making sure that there’s enough difference between background and foreground elements to easily read page content. The WAI recommends a contrast ratio of 4.5:1 for normal text, but how do we work out the ratio? It turns out you shouldn’t need to: Chrome’s dev tools include an accessibility audit that will check the color of all text against its background, and warn you of any elements that do not offer sufficient contrast. The axe-core extension will also check for contrast, among other issues.
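
If you’re curious how the ratio itself is defined, it comes from WCAG’s relative-luminance formula. Here’s a rough sketch of the math, with colors as [r, g, b] arrays of 0–255 values:

// Sketch of the WCAG contrast ratio calculation.
function luminance(rgb) {
  var channels = rgb.map(function(value) {
    var c = value / 255;
    // Convert each sRGB channel to linear light.
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

function contrastRatio(foreground, background) {
  var lighter = Math.max(luminance(foreground), luminance(background));
  var darker = Math.min(luminance(foreground), luminance(background));
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white comes out to 21:1, comfortably past the 4.5:1 guideline.
contrastRatio([0, 0, 0], [255, 255, 255]); // 21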

Color blindness is another pesky issue, especially when creating graphs or charts. The good news, in my opinion, is that graphics friendly to color-blindness tend to be better for everyone: they have better contrast in their palettes (so that users can tell the difference between light and dark shades), they keep the number of series to a minimum, and they encourage us to explore using pattern fills or other techniques. To check my interactives, I use Color Oracle, a free program that simulates different kinds of color blindness. There are also built-in equivalents on both major mobile operating systems.

Finally, in a post-Snowfall age, it’s popular to have big animated transitions on our pages. I’m as guilty of this as anyone (or even more so). But keep in mind that for many users, these big, complex motions may trigger vertigo or other vestibular disorders, or just make readers feel a little queasy. If possible, check for the prefers-reduced-motion media query (currently supported in Safari), or offer a checkbox that disables/shortens large-scale animations for users.

A case study: coal strip mining

Let’s take a look at a recent Seattle Times graphic to see what kind of effort is required for an accessible image. This embedded graphic walks readers through the process of mining and burning coal for power in Washington State, using a pan/zoom SVG view and a “stepper”-style interface. We would normally do this kind of graphic as a scroll-triggered effect (which would be more accessible, since then it’s just a document with a background image), but embedded pages don’t have easy access to scroll position, so I’ve gone with a more manual process.

Luckily, our graphic is already relatively high-contrast, and our text is simple black-on-white. We also don’t have to worry about color-blindness, since the palette isn’t particularly important. We can adapt it for people with reduced motion preferences (on supported platforms) by adding a quick media query check in our JavaScript, and reducing the animation timing accordingly:

var timing = window.matchMedia("(prefers-reduced-motion)").matches ? 10 : 1000;

We’re not using any custom controls, but the buttons for the next and previous slide are actual <button> elements, and our code sets the disabled attribute on the first and last slides. That means they’ll be skipped in keyboard navigation but still announced by screen readers (I’m hiding them using opacity: 0 to keep them available to accessibility tech, on the principle that it’s bad practice for UI to appear and disappear unpredictably).
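
The bookkeeping for that is only a few lines per step change. Here’s a loose sketch with hypothetical variable names, not the production code:

var prevButton = document.querySelector("button.prev"); // assumed selectors
var nextButton = document.querySelector("button.next");

// Disable "previous" on the first slide and "next" on the last, and fade
// the buttons rather than removing them, so they stay in the accessibility tree.
function updateButtons(index, slideCount) {
  prevButton.disabled = index == 0;
  nextButton.disabled = index == slideCount - 1;
  prevButton.style.opacity = prevButton.disabled ? "0" : "1";
  nextButton.style.opacity = nextButton.disabled ? "0" : "1";
}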

Finally, we need to make sure that screen readers handle the image and interaction correctly. I’m loading the SVG content using a custom element, so I’ve added role="img" to that loader, along with alt text that describes the purpose of the image (“A landscape view of the Colstrip plant”).

Our buttons are already accessible, but how do we make sure that the screen reader is updated when the text changes, especially when there’s a little slide transition included? The answer is to use the “alert” role, which causes the content to be read immediately in response to UI interaction. We set the role on the new text, and remove it from the div that contains the previous caption:

incomingCaption.setAttribute("role", "alert");
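
And the matching cleanup on the outgoing caption (outgoingCaption is my shorthand here for whatever element holds the previous text):

outgoingCaption.removeAttribute("role");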

This usage is not perfect on all platforms (TalkBack on Android announces the alerts twice, for some reason), but it means that when a reader triggers the buttons for a step transition, the screen reader will immediately read the new caption. Blind users can now use the interactive in pretty much the same way as sighted users: they won’t see the animated transition, but they can step through the captions and hear each one in turn, and they don’t get lost in the SVG document tree.

Converting this graphic to be accessible did not require a huge amount of effort. There’s some up-front cost involved in learning how to set up screen readers and other tools, but that’s only paid once and will be amortized over the lifetime of any other accessible graphics I create. For the benefits to our community and readers, this is certainly a cost I’m willing to pay.