
Farewell, Audrey!

I’m proud to announce that Audrey Carlsen is headed to the New York Times to join their graphics team. I’m saddened as well, of course: Audrey has been a valuable member of the Interactives Team here at The Seattle Times for two years now. She’s responsible for many of our most popular and innovative projects, including the Seahawks personality quiz, the animated history of Ravensdale, and (last but certainly not least) Under Our Skin. She also lent her reporting talents to the paper, with pieces on growing up multiracial in Seattle and the “blue wall” of Washington politics.

Like a lot of data journalists, Audrey came to the team from a non-traditional background, without formal degrees in journalism or computer science. She flirted with a career in medicine, turned her hand to science blogging for public radio, then returned to Seattle to pick up web development skills at the Ada Developer Academy bootcamp. A loss for medicine was ultimately our gain, as she hit the ground running here at the Times and never looked back.

Audrey leaves behind not only a collection of great digital journalism, but also a record of advocacy for a more inclusive, community-oriented newsroom. She has been a champion for diversity in our coverage, and a thoughtful listener for concerns both inside and outside the Times. As a member of the Asian American Journalists Association, she helped assemble statistics on newsroom diversity across the Seattle area. Instead of taking her job for granted, Audrey used her position to advocate for voices that too often go unheard. I have been, and will continue to be, inspired by her example.

But enough of the grand plaudits and somber encomiums: Audrey has never given up hope for an office corgi. She has a big laugh you can hear clear across the room, and is aggressively unamused by regular expression syntax. She makes a hell of an ambrosia, and she always added the necessary padding to my CSS. She’ll be missed, but great things are no doubt in store from her. I hope you’ll join me in thanking Audrey for her many contributions to The Seattle Times, and wishing her luck in the Big Apple.

Play her off, Keyboard Cat.

Roaring to life: How we built our Elwha watershed package

When Lynda Mapes, our environment reporter, came to us with the story of how the Elwha watershed had been recovering after the world’s largest dam removal, we knew we wanted to present it in a way that would make a splash. The final result (GitHub repo) drew heavily on talents from all over the newsroom, ranging from the graphics team’s hand-drawn art to Lynda’s beautiful prose. On the digital side, there were three interesting JavaScript components that deserve more explanation: the watercolor animations, scroll-triggered effects, and FLIP transitions.

You otter be in pictures

[Images: a black-and-white river otter sketch, used as the base layer, and the same sketch in full color, used as the “paint” layer]

We knew going into this project that we would want to feature the beautiful hand-drawn images of various critters in the Elwha watershed. The final technique is an adaptation of the “lead dust” effect that we used in our Loaded with Lead investigative report. Instead of a single texture that’s used to “reveal” hidden text, we split the sketches into two layers: one base image that only contained the black channel for shading, and another full-color image that gets painted on top. By using intentionally imperfect circles as our “brushes,” and expanding them from randomized positions, we get a “watercolor” effect that adds a simple splash of life to the static images without feeling gratuitous or jarring.

Ultimately, what makes this effect possible (and performant) on a range of devices is the canvas createPattern() method, which takes an image and generates a fill pattern object. Most of the time, this is used with smaller images to add texture to a filled path, so that you don’t need to draw and clip the pattern by hand. But in this case, the pattern is the same size as the canvas itself, meaning that we can use it to copy pieces of one image into the canvas in irregular patches, perfect for our watercolor wash. (source code)
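Stripped down, the approach looks something like this (assume baseImage and colorImage are preloaded images the same size as the canvas; the selector and brush logic here are simplified stand-ins for the production code):

var canvas = document.querySelector("canvas.critter"); // hypothetical selector
var context = canvas.getContext("2d");

// start with the black-and-white base layer
context.drawImage(baseImage, 0, 0, canvas.width, canvas.height);

// turn the full-color image into a fill pattern the same size as the canvas
var paint = context.createPattern(colorImage, "no-repeat");

// each frame, dab a few circular "brush strokes" of color onto the base;
// because the pattern lines up with the canvas, each circle reveals the
// matching patch of the color image
function dab() {
  context.fillStyle = paint;
  for (var i = 0; i < 4; i++) {
    var x = Math.random() * canvas.width;
    var y = Math.random() * canvas.height;
    var radius = 20 + Math.random() * 40;
    context.beginPath();
    context.arc(x, y, radius, 0, Math.PI * 2);
    context.fill();
  }
  requestAnimationFrame(dab);
}
requestAnimationFrame(dab);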

Scroll me maybe

Another key part of the presentation is that many animations trigger as they become visible, rewarding readers for scrolling down the page. This is a pretty typical part of news interactive UI these days, but it’s worth looking at how to do it without causing jank.

Our secret weapon for many scroll-triggered effects at the Seattle Times is getBoundingClientRect(), one of the many useful DOM methods that (surprisingly) originated in Internet Explorer. When called on any element, it returns an object with the coordinates for that element relative to the viewport. In other words, it tells you where the object is displayed right now, without having to walk up the DOM for offset values or check the current scroll position. This leads to easy visibility tests like this one:

var bounds = element.getBoundingClientRect();
if (bounds.top < window.innerHeight && bounds.bottom > 0) {
  //element is onscreen in some form
}

getBoundingClientRect() is fast and easy to use for a variety of purposes, and it’s well-supported everywhere. While browsers continue to work to make scrolling smoother, it’s still important to do as little work in scroll listeners as possible, especially when the page may contain many of them (as the Elwha package does). We created a unified scroll notification queue for this project to try to keep the load down.
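The real module has a few more wrinkles, but the basic shape is a single listener that defers all the visibility checks to the next animation frame. Here’s a rough, illustrative sketch (not the actual project code):

// components register themselves once, instead of each adding a listener
var watchers = [];

var scheduled = false;
function onScroll() {
  if (scheduled) return;
  scheduled = true;
  requestAnimationFrame(function() {
    scheduled = false;
    watchers.forEach(function(watcher) {
      if (watcher.fired) return;
      var bounds = watcher.element.getBoundingClientRect();
      if (bounds.top < window.innerHeight && bounds.bottom > 0) {
        watcher.fired = true;
        watcher.callback(watcher.element);
      }
    });
  });
}

window.addEventListener("scroll", onScroll);
onScroll(); // run once on load to catch elements already in view

// usage: fire an animation the first time an element becomes visible
// watchers.push({ element: someElement, callback: startAnimation });

Which brings us to our last animation feature…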

FLIP and forget it

I’m a big fan of Paul Lewis’ FLIP animation technique, which leverages the browser to perform smooth animations with as little JavaScript as possible. It’s worth reading the article for more details, but the acronym spells out the basic steps: get the first position, move to the last position, invert the difference through the use of transforms, and then play by removing the offsets.

[Image: one of our animated feature sections. The animals on the right swap into the main section with a smooth expand effect when clicked.]

The nice thing about FLIP is that it’s not only buttery-smooth, but it also adapts well to various lower-level page changes. In this case, when a user clicks on one of the sidebar items to swap between featured animals, we actually move the entire block between two different container elements for styling purposes. But because FLIP works entirely via on-screen coordinates and transforms, none of that matters. We only have to know the first and last positions to create the transform, and those are easy to get via our old pal getBoundingClientRect().

Although the code for FLIP isn’t hard to write, it comes up often enough (see also: EB-5 visas, teacher pay, and modern dating) that we’ve added a small module to our news app template to make it even easier. The function takes two arguments: an element that you want to animate, and a callback that’s executed between measurements. Just make whatever changes you want during the callback, and the module will figure out the difference and apply transforms to blend between the two. Feel free to take and adapt for your own purposes!
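For the curious, the core of such a helper is only a handful of lines. Here’s a condensed sketch of the pattern; the duration and easing are placeholders, not necessarily what the template module ships with:

function flip(element, callback) {
  // First: measure where the element starts
  var first = element.getBoundingClientRect();

  // let the caller make whatever DOM or style changes it wants
  callback();

  // Last: measure where the element ended up
  var last = element.getBoundingClientRect();

  // Invert: transform the element back to its starting position
  var dx = first.left - last.left;
  var dy = first.top - last.top;
  element.style.transition = "none";
  element.style.transform = "translate(" + dx + "px, " + dy + "px)";

  // Play: once the inverted state has painted, remove the offset and let the
  // browser animate the transform back to zero
  requestAnimationFrame(function() {
    requestAnimationFrame(function() {
      element.style.transition = "transform 300ms ease-out";
      element.style.transform = "";
    });
  });
}

// usage: flip(card, function() { newContainer.appendChild(card); });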

May Day mayhem: how we covered Seattle’s protests

Although I wasn’t here in Seattle for the 1999 globalization protests, or the 2012 vandalism, it’s hard to miss the city’s regular May Day “festivities.” About a week ago, the digital team started thinking about how to track and collect information from reporters as they followed activities in the city. A map is a natural way to collect this data, but I didn’t want to manually manage geotagging and attaching media. Was there some way to pull this information directly from social networks?

My first stop was NPM to see if there was a module for Twitter streams, which (as it happens) there was. Hooking into the streaming API gave us real-time updates with a variety of filters available, including geotagged location and text search, while the regular REST API let us backfill tweets we missed and look up users for more information.

We’re wary of relying on user content anyway, and it quickly became clear that most people don’t enable location services on Twitter, and even fewer use the “precise” option, which must be re-enabled for each tweet. Clearly, we couldn’t just rely on the general population to feed our map. Our new plan was to teach reporters and photographers how to tag their locations, and then follow a real-time stream of whitelisted accounts.
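For illustration, here’s roughly what that whitelisted stream looks like with the twitter package from NPM (one popular option; the keys and IDs below are placeholders, and saveTweet is a stand-in for our storage code):

var Twitter = require("twitter");

var client = new Twitter({
  consumer_key: process.env.TWITTER_CONSUMER_KEY,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
  access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
  access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET
});

// the streaming "follow" filter takes numeric user IDs, which is one reason
// the REST API's user lookup comes in handy
var staffIDs = ["12345", "67890"].join(",");

client.stream("statuses/filter", { follow: staffIDs }, function(stream) {
  stream.on("data", function(tweet) {
    if (tweet.coordinates) {
      saveTweet(tweet); // placeholder: store it and rebuild the published JSON
    }
  });
  stream.on("error", function(error) {
    console.error(error);
  });
});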

I quickly put together a map-and-stream prototype to demonstrate that the concept was feasible, and during our planning meeting I fired off a quick post from a temporary test account. Luckily, the process worked, and a dot immediately appeared on the map, along with a picture of our charismatic digital and photo staffers.

One advantage we had in quickly assembling this map was our leaflet-map component, which assembles a map from a domain-specific set of custom elements. Because it handles setting up the Leaflet instance and populating it from markup, I could output the initial state of the map straight from our HTML templates (including zoom, position, tile layers, and map markers) instead of having to write error-prone initialization code. The same JSON file that feeds that template is also uploaded to S3, and client-side code checks it every 15 seconds for updates, so readers don’t have to refresh the page. In fact, the interactive team watched the march’s progress all day, just by leaving the page open.
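The client-side polling loop is about as simple as it sounds, something along these lines (the file name and updateMap are stand-ins for the real code):

function poll() {
  fetch("tweets.json", { cache: "no-cache" })
    .then(function(response) { return response.json(); })
    .then(function(tweets) {
      updateMap(tweets); // placeholder: add or restyle markers as needed
    })
    .catch(function(err) { console.error(err); })
    .then(function() {
      setTimeout(poll, 15 * 1000); // check again in 15 seconds
    });
}
poll();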

It was important for readers to be able to instantly see where the action currently was, and how the marches moved across the city. We took advantage of the fact that we run JavaScript on both the client and server to share the aging algorithm between the two, and then set a class on map markers with the result. Newer tweets are more saturated, and those with video got a different color to differentiate them from text and images.
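The shared piece is just a function that buckets each tweet by age, so it can run in Node when the page is rebuilt and again in the browser as the reader watches. The bucket names and cutoffs here are illustrative:

function ageClass(tweet, now) {
  var minutes = (now - new Date(tweet.created_at)) / (1000 * 60);
  if (minutes < 15) return "fresh";   // fully saturated
  if (minutes < 60) return "recent";  // partly faded
  return "stale";                     // mostly washed out
}

// exported for the Node build, included directly in the browser bundle
if (typeof module !== "undefined") module.exports = ageClass;

// client side: add the resulting class to the marker's DOM element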

If you’re interested in putting together your own Twitter map, feel free to check out the source code for the map on GitHub. It’s based on our news app template, but with an additional index.js script that connects to Twitter and re-runs the build whenever there are updates. We also use the sqlite3 module to store tweets between server restarts, which is something we haven’t done before. Integrating a database works well, but locks us to a single machine for deployments.
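The persistence layer really is only a few lines with sqlite3; here’s a minimal sketch, with a hypothetical table layout:

var sqlite3 = require("sqlite3");
var db = new sqlite3.Database("tweets.db");

db.serialize(function() {
  db.run("CREATE TABLE IF NOT EXISTS tweets (id TEXT PRIMARY KEY, json TEXT)");
});

function saveTweet(tweet) {
  db.run("INSERT OR REPLACE INTO tweets (id, json) VALUES (?, ?)",
    [tweet.id_str, JSON.stringify(tweet)]);
}

function loadTweets(callback) {
  db.all("SELECT json FROM tweets", function(err, rows) {
    if (err) return callback(err);
    callback(null, rows.map(function(row) { return JSON.parse(row.json); }));
  });
}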

While I’m mostly happy with our visualization, I think there are many improvements we can make, and a project like this is a great way of learning where we need to improve the leaflet-map element and our other in-house libraries. For example, I want to be able to link tweets in the stream with dots on the map, which requires the ability to set attributes on the markers beyond class names. I’d also like to be able to filter both dots and stream by reporter and time, so that readers can “replay” events and get a better sense of their movement over time.

Another limitation that we didn’t have time to work around is the problem of Twitter video, which the service keeps deep inside its walled garden. Although video has been embeddable on the service for most of this year, it’s still not exposed via the API (it sends down a thumbnail picture instead, and there’s no indication that it’s different from any other photo). We worked around this problem by manually tagging tweets with #video, coloring those dots differently, and then linking the thumbnail directly to the tweet, but it’s an inelegant solution at best.
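Detecting the tag on our end is the easy part, something like:

// does this tweet carry our manual #video tag?
var hasVideo = tweet.entities.hashtags.some(function(tag) {
  return tag.text.toLowerCase() === "video";
});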

Still, I’m very pleased with how the map turned out, especially given that we put it together in only about two days — that’s coding under pressure! More importantly, it wouldn’t have been possible to do, or nearly as amazing to watch, without the fantastic work of the reporters and photographers who fed it with their coverage of the event, risking police flashbangs, teargas, and pepper spray to do so. We’re incredibly thankful to @seattlesullivan, @pgcornwell, @seaclarridge, @stevemiletich, @stimesmcarter, @katherinelong, @evanbush, @eriklacitis, @jbroomseattle, @mikelindblom, @jim_brunner, @deleon_times, @gina_cole_, @sjgtimes, @sringman, @deanrutz, @bettinahansen, @lindseywasson, @laurenfrohne, @kgcotterill, @corinnechin, @gawlowski, and @colindilt for all their help.

Fifty shades of purple: mapping Seattle’s city council districts

One of the most important lessons I’ve learned as a news developer is that there’s no right way to build a data visualization. But there are plenty of wrong ways.

Reporters frequently come to us with story ideas and, if we’re lucky, actual datasets. It’s usually up to us to figure out how best to visualize the data, and the design process often continues long after our development work has begun.

Such was the case for our recent Seattle city council district interactive. The project featured a choropleth map inviting readers to compare the city’s seven newly created districts across a variety of demographic measures, including age, income, and ethnicity.

The initial map was fairly easy to set up. For each demographic category, we colored the districts on a single-hue progression ranging from white (indicating that zero percent of the district’s population fell into that category) to purple (representing the maximum percentage, i.e. the district with the highest share of residents in that category). Most districts ended up somewhere in the middle, as in these views of the racial distribution of Seattle’s white (left) and black (right) populations:

[Maps: distribution of Seattle’s white population (left) and black population (right)]

These maps succeeded in providing some insight into the city’s racial makeup. It was clear, for instance, that District 2 (Rainier Valley) housed the majority of Seattle’s black population, whereas the rest of the city was consistently white.
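Mechanically, that first scale was just a linear ramp from white toward purple, something like this sketch (the exact hue and interpolation are illustrative):

// value and maxValue are percentages, e.g. 0.31 for a district that is
// 31 percent white and 0.83 for the district with the highest share
function singleHueColor(value, maxValue) {
  var t = Math.min(value / maxValue, 1); // 0 = white, 1 = full purple
  var lightness = 100 - t * 65;          // 100% lightness is pure white
  return "hsl(270, 45%, " + lightness + "%)";
}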

There’s a limit to what differences the human eye can distinguish, however, and we were concerned that we were drowning out our data in a sea of subtly different shades of purple. We experimented with bumping up the contrast — first by intensifying the saturation of the maximum value (left), and then by decreasing the saturation of the minimum value (right):

[Maps: distribution of Seattle’s white population, with a more saturated maximum value (left) and a desaturated minimum value (right)]

Neither of these solutions left us happy. Both seemed to be misrepresenting the data, either by overplaying larger numbers or underplaying smaller, but not insignificant, ones.

In retrospect, the main problem with our initial approach was that it limited us to a single-hue spectrum defined by absolute maximum and minimum values. We had to apply the same color rules to multiple demographic views, and that meant that small variances between districts (e.g. the share of people who bike to work, ranging from 2 percent to 6 percent) appeared blown out of proportion, while larger variances appeared washed out.

We decided to return to our original color progression, with the addition of a legend and some new styling:

[Maps: distribution of Seattle’s white population (left) and black population (right), with the new legend and styling]

We were still concerned that readers were going to have to work too hard to translate the map’s colors into numbers. It was easy enough, for example, to see that District 2 had the lowest percentage of white residents (31 percent of the district’s population), but it was much more difficult to see that District 6 (Ballard) had the highest (83 percent), a full 16 percentage points above the citywide average.

In the week before publication, we buckled down and made a series of significant changes. The result was a map based on a two-tone color progression, indicating how each district stacked up against the citywide average for each demographic. We also swapped the legend boxes for a gradient scale to make it more readable:

[Maps: distribution of Seattle’s white population (left) and black population (right), using the two-tone scale]

This new approach addressed several of our concerns. Switching to a two-tone system made it much easier to identify small differences that fell just above or below average. Additionally, by centering the progression around an average value rather than scaling it from an absolute maximum, we were able to provide a more accessible, at-a-glance view of what the city looked like as a whole.
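In code, the idea looks roughly like this: the sign of the difference from the citywide average picks the hue, and its size sets the intensity (the specific hues and scaling below are illustrative):

function districtColor(value, cityAverage, maxDelta) {
  var delta = value - cityAverage;
  var strength = Math.min(Math.abs(delta) / maxDelta, 1);
  // one hue for above-average districts, another for below-average ones
  var hue = delta >= 0 ? "270, 50%, 40%" : "170, 50%, 35%";
  return "hsla(" + hue + ", " + strength + ")";
}

// e.g. districtColor(0.83, 0.67, 0.2) returns a mostly opaque purple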

Reporters, editors, and developers all put their heads together to work out the best presentation for this map, and the final form didn’t materialize until fairly late in the process. Our efforts were well worth it. Fielding criticisms and suggestions at each stage of the design process allowed us to identify and slowly chip away at discrete problems, and resulted in a product that everyone was satisfied with.

Tag Soup

Over at Source, the OpenNews features blog, we’ve got a piece up today on how we used custom elements in our election coverage, as well as our most recent investigative piece. Check out Tag Soup for more information and plenty of links!