
Visualization as a civil right: how to create accessible news apps and interactives

As accessibility advocate Marcy Sutton is fond of saying, JavaScript is a civil rights issue. That’s true whether you’re building the next big web app or a dynamic chart for an article: we want our journalism to be available not just to sighted and able-bodied readers, but to everyone. Yet as interactive developers, it’s often easy to forget about accessibility–and easier still to think that the “visual” part of data visualization means it’s only for sighted or able-bodied users.

Over the last couple of years, I’ve been experimenting with accessibility and assistive technologies for interactive projects here at the Seattle Times. From that experience, I’ve come to believe that it’s not actually that hard to make sure your news apps are accessible. It’s the least we can do, if we’re serious about our commitment to reaching a wider, more diverse audience. Moreover, as Target discovered when sued by the National Federation of the Blind, inaccessible sites may violate state or federal regulations.

The following post contains a few of the lessons I’ve learned, and hopefully serves as a good starting point if you’re new to accessibility on the web. I want to stress that I’m not an expert: there are plenty of other resources you should explore if you find this information useful, such as the Web Accessibility Initiative. If I’ve made any mistakes here, or if you’d like to share additional posts on newsroom accessibility, please let me know!

What is accessibility?

At heart, accessible design means that we want our pages to be friendly to readers who may face sensory, motor, or cognitive barriers to access. We typically think about this in terms of extremes, such as blind or paralyzed users. However, as Microsoft’s Inclusive Design team points out, disability is a spectrum, and almost all of us will fall somewhere on it at some point due to age, injury, or context.

I like the idea of accessibility as “inclusive” design, because creating more accessible products tends to create better experiences for everyone. Closed captions may have originally been created for deaf users, but they’re useful for many people, including those who need to watch video quietly or who are learning to read. One-handed controls are appreciated by people with reduced dexterity, but also new parents and bus riders. Improved contrast benefits users with reduced vision, but it also makes a cleaner and easier-to-read UI for everyone.

For the purposes of this article, we’ll sort accessibility into three buckets:

  • Making interactive content work for blind users via a screen reader
  • Providing support for keyboard navigation
  • Ensuring sufficient contrast for low-visibility and color-blind users

Enabling and using a screen reader

Perhaps the easiest way to experiment with accessibility is by using a screen reader on your phone, since both major mobile operating systems come with voice assistance built in. Rob Dodson from Google has some short videos that detail how to activate and use TalkBack on Android and VoiceOver on iOS. Once activated for the first time, you can use a shortcut to toggle the reader on and off at will, which is useful for quick testing.

Even for a sighted person, using a screen reader on a phone is a handy skill to have: I like to use TalkBack to browse on the bus or when it’s cold out, so that I can operate the phone without removing it from my coat pocket. Spending just one day a month doing some of your regular reading through a screen reader will give you a new appreciation for how difficult news sites are to navigate for blind users.

On desktops, the situation varies a bit more. All OS X machines also have access to VoiceOver, which can be activated through the System Preferences app. Windows users may use one of several programs, such as JAWS or NVDA. I tend to use the latter, which is free and open-source, since JAWS is extremely expensive.

Most desktop screen readers can be configured to show the voiced text onscreen in addition to being read, which is useful if you don’t want to wear headphones at work. In fact, one of my favorite features of NVDA is the Speech Viewer, which will log out all spoken text (and, if configured properly, will do so silently). Being able to instantly read and look back through the output from the screen reader can dramatically speed up development.

A screenshot of the NVDA speech viewer window.

Turns out it’s really hard to grab a Speech Viewer image without also leaking what you’re working on to the world.

Easy wins for screen readers

Once you have a screen reader up and running, you’ll notice that the way it handles pages is highly dependent on the document structure. This is part of why it’s important for our HTML documents to be well-formed: by using semantic tags, the browser is able to correctly announce headers, sections, asides, navigation, and other features.

Of course, if we were just building documents, our jobs would be relatively easy. Creating applications that correctly announce themselves via screen reader is a little bit more difficult, but there are some easy wins that we can get by working with the platform:

  • Don’t forget to add alt text to images whenever you can, and avoid the title attribute for any required content. Alt text should describe an image conversationally (e.g., “A chart showing an upward trend in underwater basket weaving classes.”), not just restate the caption or data directly. If an image is purely cosmetic, hide it from screen readers by setting alt="".
  • Unless you actually mean to link to another page or part of a document, use <button> for interactive elements instead of <a> (or, even worse, <div>, which screen readers will simply skip). You can style buttons to look any way you want, but the screen reader will still understand that they’re controls that do something locally, rather than links to content elsewhere.
  • In fact, you should use native controls (like <select> or <input type="x">) whenever possible. The built-in elements are well-tested, work much better with screen readers, handle keyboard input more reliably, and support features like the disabled state, which will be announced properly. It’s almost never worth giving all that up just for a bit of styling support.

As interactive news developers, however, part of the job is building dynamic visual experiences that don’t already have convenient semantic tags. In these cases, the role attribute is your friend, as it allows you to specify the spoken meaning of a block of DOM. Setting role="img", for example, will tell screen readers that the marked tag contains primarily visual content, and shouldn’t be read as literal HTML. This is extremely useful for charts whose markup would be confusing if read aloud element by element (just remember to add a text alternative describing what sighted readers would see: alt on an actual image, or aria-label on other elements).
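
For example, marking up a hypothetical chart container might look like this (the selector and label text are placeholders, not code from a real project):

// a chart built out of divs and SVG, wrapped in a container element
var chart = document.querySelector(".chart-container");

// announce it as a single image instead of reading its markup aloud
chart.setAttribute("role", "img");
chart.setAttribute("aria-label",
  "A chart showing an upward trend in underwater basket weaving classes.");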

You can also use roles to force an immediate announcement, although you should use this rarely. By setting role="alert" on an element, the browser will immediately read its contents to the user. For less-urgent announcements, you can set the aria-live attribute, which will ask the browser to monitor an element for changes and announce them when it won’t be disruptive. ARIA (Accessible Rich Internet Applications) attributes vary in support, but it’s worth reading up on their use if you want to work well with screen readers.
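
As a rough sketch, a polite live region might be set up like this (the element and message text are hypothetical):

// ask screen readers to watch this status element for changes
var status = document.querySelector(".chart-status");
status.setAttribute("aria-live", "polite");

// later updates to its text will be announced when the user isn't busy
status.textContent = "Chart updated to show 2016 data";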

Finally, a staple of designing news apps is being able to show and hide content. It’s important to be careful with how you do this, since content that’s not “visible” may still be “readable.” Long story short, keep it simple: set display: none or hidden="true" instead of using absolute positioning or opacity: 0. Conversely, if you want something to be visually hidden but spoken aloud (such as the text version of a graphical headline), using opacity: 0 or a very small clip rectangle is good practice.
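
In code, those two cases might look something like this (a minimal sketch; panel and srOnlyText stand in for whatever elements you’re showing or hiding):

// hide a panel from everyone, sighted users and screen readers alike
panel.hidden = true; // equivalent to display: none for assistive tech

// keep screen-reader-only text out of sight but still spoken aloud
srOnlyText.style.position = "absolute";
srOnlyText.style.clip = "rect(0 0 0 0)"; // a tiny clip rectangle
srOnlyText.style.width = "1px";
srOnlyText.style.height = "1px";
srOnlyText.style.overflow = "hidden";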

Keyboard navigation

Here’s another experiment you can try: browse your favorite web page without a mouse. Your tools are the tab key (which will cycle between all links and inputs on a page), the arrows (which scroll and interact with form controls) and the enter key (for “clicking”). How tough is it to navigate your news site this way?

Readers with motor dexterity issues may be navigating your site with a clicker switch or a joystick. Those with RSI may be restricted to the keyboard. Even mouse users often use the keyboard for operating select boxes or tabbing through form fields. If you go with the grain of the platform, writing semantic HTML and using mostly built-in widgets, you’ll get keyboard accessibility for these users for free. Problems usually crop up when we start to roll our own solutions.

Here are a few easy wins to keep in mind:

  • Don’t disable focus rectangles with styles like input:focus { outline: none }. Without a visual indicator of where the current keyboard focus is, users will be completely unable to track which item is active when pressing the tab key.
  • As with screen readers, use <button> instead of <a>, and try not to use <div> for interactive elements.
  • If you must use a non-semantic element for interaction, be sure to set tabindex="0" to enable interaction in “source order” (i.e., the intuitive top-to-bottom order of elements). Likewise, if you cannot use the disabled attribute, be sure to set tabindex="-1" to disable keyboard navigation on controls that can’t be pressed.
  • If you absolutely must design custom controls, do they match the keyboard behavior of existing controls? For example, the arrow keys should move between radio buttons in a group and options in a select box, and tab should change focus between controls correctly (see the sketch after this list).
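
For illustration, here’s a minimal sketch of what it takes to make a non-semantic element behave like a real button for keyboard and screen reader users (the selector and doSomething handler are placeholders), which doubles as an argument for just using <button> in the first place:

// a div that, for whatever reason, has to act like a button
var fakeButton = document.querySelector(".fake-button");

// announce it as a button and put it in the natural tab order
fakeButton.setAttribute("role", "button");
fakeButton.setAttribute("tabindex", "0");
fakeButton.addEventListener("click", doSomething);

// real buttons also activate on Enter and Space, so we have to match that
fakeButton.addEventListener("keydown", function(e) {
  if (e.key == "Enter" || e.key == " ") {
    e.preventDefault();
    doSomething();
  }
});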

Visual affordances

Finally, when designing graphics we need to bear in mind that not all sighted users are created equal. Vision can vary widely, not just by disability but also by age and by context. When designing inclusively, we may want to start with contrast, color blindness, and motion sensitivity.

Contrast is simply about making sure that there’s enough difference between background and foreground elements to easily read page content. The WAI recommends a contrast ratio of 4.5:1 for normal text, but how do we work out the ratio? It turns out you shouldn’t need to: Chrome’s dev tools include an accessibility audit that will check the color of all text against its background, and warn you of any elements that do not offer sufficient contrast. The axe-core extension will also check for contrast, among other issues.

Color blindness is another pesky issue, especially when creating graphs or charts. The good news, in my opinion, is that graphics friendly to color-blindness tend to be better for everyone: they have better contrast in their palettes (so that users can tell the difference between light and dark shades), they keep the number of series to a minimum, and they encourage us to explore using pattern fills or other techniques. To check my interactives, I use Color Oracle, a free program that simulates different kinds of color blindness. There are also built-in equivalents on both major mobile operating systems.

Finally, in a post-Snowfall age, it’s popular to have big animated transitions on our pages. I’m as guilty of this as anyone (or even more so). But keep in mind that for many users, these big, complex motions may trigger vertigo or other vestibular disorders, or just make readers feel a little queasy. If possible, check for the prefers-reduced-motion media query (currently supported in Safari), or offer a checkbox that disables/shortens large-scale animations for users.

A case study: coal strip mining

Let’s take a look at a recent Seattle Times graphic to see what kind of effort is required for an accessible image. This embedded graphic walks readers through the process of mining and burning coal for power in Washington State, using a pan/zoom SVG view and a “stepper”-style interface. We would normally do this kind of graphic as a scroll-triggered effect (which would be more accessible, since then it’s just a document with a background image), but embedded pages don’t have easy access to scroll position, so I’ve gone with a more manual process.

Luckily, our graphic is already relatively high-contrast, and our text is simple black-on-white. We also don’t have to worry about color-blindness, since the palette isn’t particularly important. We can adapt it for people with reduced motion preferences (on supported platforms) by adding a quick media query check in our JavaScript, and reducing the animation timing accordingly:

var timing = window.matchMedia("(prefers-reduced-motion)").matches ? 10 : 1000;

We’re not using any custom controls, but the buttons for the next and previous slide are actual <button> elements, and our code sets the disabled attribute on the first and last slides. That means they’ll be skipped in keyboard navigation but still announced (as disabled) by screen readers. I hide them visually with opacity: 0 rather than removing them entirely, on the principle that it’s bad practice for UI to appear and disappear unpredictably.
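
The disabled toggling itself is just a couple of lines (a sketch, assuming hypothetical prevButton and nextButton elements and a current slide index):

// prevent stepping past either end of the slide deck
prevButton.disabled = index == 0;
nextButton.disabled = index == slides.length - 1;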

Finally, we need to make sure that screen readers handle the image and interaction correctly. I’m loading the SVG content using a custom element, so I’ve added role="img" to that loader, along with alt text that describes the purpose of the image (“A landscape view of the Colstrip plant”).

Our buttons are already accessible, but how do we make sure that the screen reader is updated when the text changes, especially when there’s a little slide transition included? The answer is to use the “alert” role, which causes the content to be read immediately in response to UI interaction. We set the role on the new text, and remove it from the div that contains the previous caption:

outgoingCaption.removeAttribute("role");
incomingCaption.setAttribute("role", "alert");

This usage is not perfect on all platforms–TalkBack on Android announces the alerts twice, for some reason–but it means that when a screen reader user triggers the buttons for a step transition, the new caption is read immediately. Blind users can now use the interactive in pretty much the same way as sighted users: they won’t see the animated transition, but they can step through the captions and hear each one in turn, and they don’t get lost in the SVG document tree.

Converting this graphic to be accessible did not require a huge amount of effort. There’s some up-front cost involved in learning how to set up screen readers and other tools, but that’s only paid once and will be amortized over the lifetime of any other accessible graphics I create. For the benefits to our community and readers, this is certainly a cost I’m willing to pay.

How we adapted the LA Times map maker

When news breaks, The Seattle Times faces a common problem: we want visual elements for the story, but photographers may not be on the scene yet. For this, we’ve often turned to maps from the graphics department, but these are labor-intensive to build, even with our fantastic crew of news artists. Services to generate maps are expensive. What’s a small, penny-pinching newsroom to do?

My first attempt at solving this problem was a simple Leaflet-based app. The styling wasn’t great, and due to browser security restrictions, people still had to manually screenshot the window in order to grab an image. It was, in short, a total hack. That’s why I was thrilled to discover the work that the LA Times Data Desk has been doing with their Web Map Maker project. Using OpenStreetMap data and vector tiles from Mapzen, their tool is capable of rendering directly to a canvas, and then saving that image–no screenshot required!

I forked the project pretty much right away, and started messing around with it. Eventually, an evening in Sublime turned into a substantial rewrite, with two main goals: to match Seattle Times code style, and to simplify it by taking advantage of new browser features. In this post, I’d like to walk through some of the changes that we made–not as a criticism of the LA Times’ work, which I’m still in awe of, but as a way to explore how modern JavaScript can make a real-world application simpler and easier to maintain.

async/await

The first thing I noticed about the web map maker was the downloadIMG() function, which does the actual image capture. At more than 100 lines, it’s a monster, but a necessary one: it combines canvas rendering for the base map, html2canvas to grab popups and other map elements, and a chunk of custom code to draw any SVG elements loaded from GeoJSON. Web maps are complicated!

Compounding this problem is that, like a lot of legacy JavaScript, the code is callback-heavy. downloadIMG() becomes more and more indented as it moves through various stages, which is hard to maintain. Untangling this rendering process made a lot of sense as a starting point for the refactor, and using async/await was a natural method for taming those wild callbacks.

The async and await keywords are new to ES2017, but you can use them in evergreen browsers like Chrome and Firefox now. They’re syntactic sugar for the then() method of JavaScript Promises: inside of a function marked with async, you can “pause” the function using await instead of passing a callback to then(). For example, this:

var asyncOperation = functionThatReturnsAPromise();
asyncOperation.then(function(result) {
  return doSomeWork(result);
}).then(function(nextResult) {
  return doMoreAsyncWork(nextResult);
});

can be simplified into this (keeping in mind that await only works inside a function marked async):

var firstResult = await functionThatReturnsAPromise();
var secondResult = await doSomeWork(firstResult);
await doMoreAsyncWork(secondResult);

At each await, the JavaScript engine effectively pauses your function and resumes it once the promise settles, even in the middle of an expression or a variable assignment. The resulting code no longer requires all the function boilerplate and extra indentation. It can be “read” as synchronous, even though in reality it might take significant time to execute.

Breaking downloadIMG() into several promise-based functions made it much easier to understand and debug, since skipping a step became as simple as commenting out an await-ed function call. The new code has actually gotten a little longer after being broken up into sub-tasks, but the readability is much higher now. I was also able to move it out into its own module, which strengthens the distinction between logic and UI in the app structure.
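
The refactored flow ends up looking roughly like this (the function names are illustrative, not the actual Web Map Maker code):

// each stage returns a promise, so the capture process reads top to bottom
async function captureMap() {
  var canvas = await renderBaseTiles();  // draw the vector tile basemap
  await drawGeoJSON(canvas);             // add any SVG shapes loaded from GeoJSON
  await drawOverlays(canvas);            // html2canvas for popups and labels
  return canvas.toDataURL("image/png");  // hand back a saveable image
}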

No jQuery

jQuery is great, but at the Times we have a policy of avoiding it unless we need at least three of its hard-to-shim features, like JSONP requests or event delegation. Modern browser APIs have learned a lot from jQuery, and integrated many of its features, to the point where you might not need its help.

The original map maker code used jQuery extensively, but most of the cleanup was pretty straightforward:

  • perform DOM queries with a call to document.querySelectorAll(), aliased to a shorter qsa() helper
  • use classList instead of addClass() and removeClass() calls
  • addEventListener() instead of on()

A common pattern in jQuery is that it acts on every element matched by a selector, whether that’s one, many, or even zero. ES2015’s arrow functions combined with forEach() aren’t quite that convenient, but they do provide a close analog:

// jQuery version
$(".toggle-element").addClass("enabled");

// qsa() returns an array from document.querySelectorAll
qsa(".toggle-element").forEach(el => el.classList.add("enabled"));

It’s a little more clumsy, but it saves us the cost of loading 32KB of code for jQuery, which contributes to faster page load. jQueryUI adds another 55KB, most of which is unused–the project only needs the resize functionality for the map’s drag handle. I wrote a small vanilla JS module to do the resizing instead, dispatching a custom DOM event whenever the element’s dimensions change so that the main module can keep listening for changes.
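
For reference, a qsa() helper like the one used in the snippet above can be tiny (a minimal sketch):

// wrap querySelectorAll and convert its NodeList into a real array
var qsa = function(selector) {
  return Array.prototype.slice.call(document.querySelectorAll(selector));
};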

Eliminating ~90KB of code may not seem like a lot, but on lower-powered devices, that can shave ~90ms off the page load just from the cost of code parsing and module execution. It also means we no longer load the CSS required for jQueryUI (which, ironically, the original app did not use anywhere). And speaking of CSS…

Flexbox instead of Bootstrap

This is another technique that’s becoming conventional wisdom as browsers improve: you might not need Bootstrap! The Flexbox specification for CSS lets us define a page in terms of vertical or horizontal “flex containers,” and force the children of those containers to stretch or align on either axis (finally, easier vertical centering is here). The original map maker used Bootstrap for some button group styles, but otherwise didn’t really rely on it. Swapping Flexbox in for positioning, and styling those buttons manually, turned out to be a pretty easy fix.

Destructuring and for/of loops

Finally, although it’s not a big change, I did take advantage of destructuring assignment throughout the app. For example, when pulling the width and height out of the resizer’s event:

var { width, height } = event.detail;

Or even when importing modules from my async-based loader:

var [ qsa, resizer, render ] = await install.batch("qsa", "resizer", "render");

And when combined with the new value-oriented loops and iterables in ES2015, it eliminates some boilerplate from checking the current dimensions against existing size presets:

for (var [w, h] of Object.values(sizePresets)) {
  if (w == width && h == height) custom = false;
}

This kind of unpacking behavior is common in Python or PHP, so it’s a relief to finally have it in JavaScript. It reduces the assignment code you need to write, and makes it clearer which properties you’re going to use from a given object. In some cases, it can also make tight loops faster, since local variables are quicker than properties accessed through the dot operator.

Final thoughts

Internal tools like this are a great chance to play around with new browser features, because we can set stronger minimum requirements for users. However, you can also publish code that uses many of these new JavaScript toys if you’re willing to set up a transpiler like Babel in your build process. We’ve integrated transpilation as a part of our standard news app template at the Times, and it has made a lot of our code much more pleasant to read and write. I highly recommend giving it a shot.

I’d like to close by again noting how incredibly grateful I am that the LA Times decided to open-source their tools, and that LAT map guru Jon Schleuss was so gracious about sharing with us. We’re actually still working on the project styles, and hoping to use it in production soon. You can check out the repo and watch our progress here.

Unfortunately, it’s likely that I’ve refactored so much that it’ll be difficult to pull changes back into their master branch. I hope that by spelling out the changes in this post, it’ll be easier to figure out which parts are most useful, and which ones are just me showing off. Projects like this really drive home just how important open-source development is for the news nerd community: we’re much stronger together than we are as individual newsrooms.

Roaring to life: How we built our Elwha watershed package

When Lynda Mapes, our environment reporter, came to us with the story of how the Elwha watershed had been recovering after the world’s largest dam removal, we knew we wanted to present it in a way that would make a splash. The final result (GitHub repo) drew heavily on talents from all over the newsroom, ranging from the graphics team’s hand-drawn art to Lynda’s beautiful prose. On the digital side, there were three interesting JavaScript components that deserve more explanation: the watercolor animations, scroll-triggered effects, and FLIP transitions.

You otter be in pictures

Black and white river otter sketch (the base layer) alongside the full-color version (the color “paint” layer).

We knew going into this project that we would want to feature the beautiful hand-drawn images of various critters in the Elwha watershed. The final technique is an adaptation of the “lead dust” effect that we used in our Loaded with Lead investigative report. Instead of a single texture that’s used to “reveal” hidden text, we split the sketches into two layers: one base image that only contained the black channel for shading, and another full-color image that gets painted on top. By using intentionally-imperfect circles as our “brushes,” and expanding these circles from randomized positions, the resulting “watercolor” effect helps add a simple splash of life to the static images, without feeling gratuitous or jarring.

Ultimately, what makes this effect possible (and performant) on a range of devices is the canvas createPattern() method, which takes an image and generates a fill pattern object. Most of the time, this is used with smaller images to add texture to a filled path, so that you don’t need to draw and clip the pattern by hand. But in this case, the pattern is the same size as the canvas itself, meaning that we can use this to copy pieces of one image into the canvas in irregular patches–perfect for our watercolor wash. (source code)
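
Stripped down, each brush stroke looks something like this (a sketch that assumes baseImage and colorImage are already loaded and sized to match the canvas, and that x, y, and radius come from the randomized brush positions):

// hypothetical canvas element for one critter illustration
var canvas = document.querySelector("canvas.critter");
var context = canvas.getContext("2d");

// start from the black-and-white shading layer
context.drawImage(baseImage, 0, 0, canvas.width, canvas.height);

// a canvas-sized pattern keeps the color layer aligned with the base layer
context.fillStyle = context.createPattern(colorImage, "no-repeat");

// each imperfect circle reveals a patch of the color image
context.beginPath();
context.arc(x, y, radius, 0, Math.PI * 2);
context.fill();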

Scroll me maybe

Another key part of the presentation is that many animations trigger as they become visible, rewarding readers for scrolling down the page. This is a pretty typical part of news interactive UI these days, but it’s worth looking at how to do it without causing jank.

Our secret weapon for many scroll-triggered effects at the Seattle Times is getBoundingClientRect(), one of the many useful DOM methods that (surprisingly) originated in Internet Explorer. When called on any element, it returns an object with the coordinates for that element relative to the viewport. In other words, it tells you where the object is displayed right now, without having to walk up the DOM for offset values or check the current scroll position. This leads to easy visibility tests like this one:

var bounds = element.getBoundingClientRect();
if (bounds.top < window.innerHeight && bounds.bottom > 0) {
  //element is onscreen in some form
}

getBoundingClientRect() is fast and easy to use for a variety of purposes, and it’s well-supported everywhere. While browsers continue to work to make scrolling smoother, it’s still important to do as little work in scroll listeners as possible, especially when the page may contain many of them (as the Elwha package does). We created a unified scroll notification queue for this project to try to keep the load down.
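
At its core, that queue is a single scroll listener that batches every visibility check into one pass per animation frame. A minimal sketch of the idea (the subscribers array stands in for the project’s actual callbacks):

var subscribers = [];
var scheduled = false;

window.addEventListener("scroll", function() {
  if (scheduled) return;
  scheduled = true;
  // wait for the next frame, then run every registered check exactly once
  requestAnimationFrame(function() {
    scheduled = false;
    subscribers.forEach(check => check());
  });
});

Which brings us to our last animation feature…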

FLIP and forget it

I’m a big fan of Paul Lewis’ FLIP animation technique, which leverages the browser to perform smooth animations with as little JavaScript as possible. It’s worth reading the article for more details, but the acronym spells out the basic steps: get the first position, move to the last position, invert the difference through the use of transforms, and then play by removing the offsets.

Elwha feature section

One of our animated feature sections. The animals on the right swap into the main section with a smooth expand effect when clicked.

The nice thing about FLIP is that it’s not only buttery-smooth, but it also adapts well to various lower-level page changes. In this case, when a user clicks on one of the sidebar items to swap between featured animals, we actually move the entire block between two different container elements for styling purposes. But because FLIP works entirely via on-screen coordinates and transforms, none of that matters. We only have to know the first and last positions to create the transform, and those are easy to get via our old pal getBoundingClientRect().

Although the code for FLIP isn’t hard to write, it comes up often enough (see also: EB-5 visas, teacher pay, and modern dating) that we’ve added a small module to our news app template to make it even easier. The function takes two arguments: an element that you want to animate, and a callback that’s executed between measurements. Just make whatever changes you want during the callback, and the module will figure out the difference and apply transforms to blend between the two. Feel free to take and adapt for your own purposes!
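
In sketch form, a FLIP helper along those lines might look like this (illustrative only; flip, card, and mainSection are placeholder names, not the actual module):

function flip(element, change) {
  // First: where is the element right now?
  var first = element.getBoundingClientRect();
  // Last: let the caller mutate the DOM, then measure again
  change();
  var last = element.getBoundingClientRect();
  // Invert: offset the element so it appears not to have moved yet
  element.style.transition = "none";
  element.style.transform = "translate(" +
    (first.left - last.left) + "px, " + (first.top - last.top) + "px)";
  // force a style flush so the inverted position actually renders
  element.getBoundingClientRect();
  // Play: clear the offset and let CSS animate the difference
  element.style.transition = "transform 300ms ease-out";
  element.style.transform = "";
}

// usage: wrap whatever DOM change you want animated in the callback
flip(card, function() {
  mainSection.appendChild(card);
});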

Introducing our news app template

Hi, I’m Thomas Wilburn, newsroom web developer here at The Seattle Times. I work with editors and reporters to tell stories on the web, ranging from data visualizations to custom news applications. One of my most important tools for putting out great projects under deadline pressure is our news app template.

Digital storytelling is not new at The Seattle Times — you only have to look to Sea Change or our election guides to see that — but it hasn’t had a consistent process for development. Some of our news apps were built in Django, some in WordPress, and others in notepad.exe, depending on the staff assigned and their mood at the time. When I joined the newsroom earlier this year, one of my goals was to create a standard platform for digital projects, generating static files for ease of maintenance and low-stress hosting.

The result is a scaffolding system built on top of Grunt and NodeJS for producing news applications with the absolute minimum of friction (editorial or technical). In addition to populating a project with boilerplate HTML, the project initialization process also sets up the following helpful features:

  • A local development server, with live reload integration
  • Lo-dash templates, with some useful built-in helper functions
  • LESS and JavaScript compilation via AMD modules
  • A “watch” server to run build tasks automatically whenever files change
  • Bower pre-configured to install client-side libraries in a standard location
  • The ability to pull data from CSV, JSON, or Google Docs
  • One-command publication to Amazon S3

In many ways, this list is similar to scaffolding solutions from other organizations, including the NPR app template and the Chicago Tribune’s Tarbell. However, being built on NodeJS, the Seattle Times template is a bit easier to set up, and runs on more diverse software (namely, Windows). As a result, it’s been easy to get our web producers working on the same stack that we use for our big projects.

Our experience with this app scaffolding has been positive so far. We can be up and running on a new project in minutes, and the common structure makes it easy to reuse components from one app to another. One-command deployment speeds up development, and being able to pull from Google Docs makes it easier to bring in editors and reporters: if they can use a spreadsheet, they can edit our interactives. We’ve used it to power many of our online features this year, including “Where the Bidding Wars Are?” and our Oso landslide timeline. It even runs our Seahawks fan map!

As big proponents of open-source software, our team believes this kind of slick development experience is just too cool to keep to ourselves. So we’ve made our scaffolding available on GitHub under a GPL license. There are a few Seattle Times-specific bits you’ll need to adapt if you use it for yourself, such as our ad and tracking code. But other than that, I think it could be useful for anyone building static sites — inside or outside of a newspaper. If you build something with it, we’d love to hear about it!