
Wolves, WSU, and WebGL: presenting the Profanity Peak pack

Whenever Lynda Mapes comes to the visuals department with a story idea, we sit up and take notice. So when she dropped by a few months back to talk about a story on the conflict between wolves and cattle ranchers in Eastern Washington, we immediately started thinking about how to bring it to digital life. Last week, her story on WSU researcher Robert Wielgus went live with our WebGL-powered explainer embedded in the middle.

Here’s how we brought the story of the Profanity Peak wolf pack to our readers.

The hills are alive

The first step was to figure out where our story took place. We had a post-action report from the Washington Department of Fish and Wildlife that listed details on each cow attacked, including the latitude and longitude of the attack. A separate presentation included a heatmap of wolf activity from their GPS collars, which gave us some idea of their territory.

An incident report from WDFW’s post-action document.

Based on that information, I pulled the digital elevation maps from the University of Washington's Earth and Space Sciences department. To capture the area we were interested in, I stitched together eight elevation maps using QGIS and VTBuilder. The final output was a JPG heightmap, in which higher pixel values meant higher elevation.

Viewing the heightmap data in QGIS, with animal attack locations as dots.

But of course, having a heightmap doesn’t do any good if you don’t have an effective way to present it.

gl.drawMountains()

WebGL was first introduced back in 2011, with somewhat shaky support in Chrome and Firefox. An adaptation of the OpenGL ES 2.0 API, it lets web pages directly access the accelerated graphics hardware on a computer or smartphone, providing powerful tools for games and 3D visualizations.

Every WebGL program consists of three parts. First, there's the JavaScript code that loads data and issues drawing commands. In the case of our Profanity Peak story, a script reads through the pixel data of the heightmap and converts it into a mesh of vertices in 3D space, joined into triangles. Triangles are commonly used in 3D graphics because they're the simplest possible surface shape. In addition to XYZ coordinates, each point also has a "normal vector" that represents the direction pointing away from the surface, which we can use for lighting.
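
As a rough illustration, the conversion might look something like this. It assumes the heightmap has already been drawn to a canvas; the function and variable names are our own shorthand rather than the production code, and normal calculation is left out for brevity:

// Sketch of heightmap-to-mesh conversion (illustrative names and scales).
function buildTerrainMesh(heightmapCanvas, gridSize, verticalScale) {
  var ctx = heightmapCanvas.getContext("2d");
  var pixels = ctx.getImageData(0, 0, gridSize, gridSize).data;

  var positions = []; // x, y, z triples, one vertex per heightmap pixel
  var indexes = [];   // every three entries form one triangle

  for (var z = 0; z < gridSize; z++) {
    for (var x = 0; x < gridSize; x++) {
      // The red channel holds elevation; scale it into world units.
      var red = pixels[(z * gridSize + x) * 4];
      positions.push(x, (red / 255) * verticalScale, z);
    }
  }

  // Join neighboring vertices into two triangles per grid cell.
  for (var row = 0; row < gridSize - 1; row++) {
    for (var col = 0; col < gridSize - 1; col++) {
      var i = row * gridSize + col;
      indexes.push(i, i + gridSize, i + 1);
      indexes.push(i + 1, i + gridSize, i + gridSize + 1);
    }
  }

  return {
    positions: new Float32Array(positions),
    indexes: new Uint16Array(indexes)
  };
}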

Once the JavaScript program sends the mesh to the graphics card for rendering, the other two parts of WebGL kick in. These are the vertex and fragment shaders, which convert triangles into actual onscreen imagery.

The vertex shader is responsible for converting 3D points into 2D canvas coordinates. To make that happen, we compute a combination of transformations that represent the camera's position and orientation in space, and the vertex shader runs each set of coordinates through that transformation pipeline.
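
Here's an illustrative vertex shader, stored as a source string on the JavaScript side; the attribute and uniform names are assumptions, not our exact source:

// Illustrative vertex shader source (names are assumptions, not production code).
// u_projection and u_view are the camera matrices computed in JavaScript.
var vertexShaderSource = `
attribute vec3 a_position;
attribute vec3 a_normal;

uniform mat4 u_projection; // perspective projection
uniform mat4 u_view;       // camera position and orientation ("look-at")

varying vec3 v_normal;
varying vec3 v_position;

void main() {
  // Hand the surface data to the fragment shader for lighting
  v_normal = a_normal;
  v_position = a_position;
  // Run the 3D point through the camera transformations into clip space
  gl_Position = u_projection * u_view * vec4(a_position, 1.0);
}
`;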

Once the mesh is placed in view, the fragment shader runs for each pixel that falls inside a visible triangle and determines its color. Ours combines a number of factors (sketched in the shader excerpt after the list):

  • Surface direction (the normals we computed earlier) in relation to the global lighting
  • Elevation (lower coordinates are darker and greener)
  • Fog (distant pixels are less saturated and lighter)
  • Some random noise dithering, to break up visible triangle edges
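
Here's a rough, simplified sketch of how those factors might combine; the constants, uniform names, and elevation scaling are illustrative rather than taken from the production shader:

// Simplified fragment shader sketch. Assumes elevation (v_position.y) has
// been normalized to 0..1 and u_light is a normalized direction vector.
var fragmentShaderSource = `
precision mediump float;

uniform vec3 u_light;    // global light direction
uniform vec3 u_fogColor; // light, desaturated fog color

varying vec3 v_normal;
varying vec3 v_position;

// Cheap pseudo-random noise used for dithering
float noise(vec2 seed) {
  return fract(sin(dot(seed, vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
  // 1. Surface direction in relation to the global lighting
  float light = max(dot(normalize(v_normal), u_light), 0.0);

  // 2. Elevation: lower terrain is darker and greener
  vec3 base = mix(vec3(0.15, 0.35, 0.15), vec3(0.6, 0.55, 0.45), v_position.y);

  // 3. Fog: distant pixels fade toward the fog color
  float fog = clamp(gl_FragCoord.z / gl_FragCoord.w * 0.002, 0.0, 1.0);

  // 4. Noise dithering to break up visible triangle edges
  float dither = (noise(gl_FragCoord.xy) - 0.5) * 0.02;

  vec3 color = mix(base * light, u_fogColor, fog) + dither;
  gl_FragColor = vec4(color, 1.0);
}
`;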

Viewing the landscape in “wireframe mode” for debugging.

Let's say that a 255×255 landscape grid fills the viewport (possible for some of the camera positions on our page), which is conservatively sized at 800×600. That means the vertex shader is computing the position of roughly 65,000 vertices (combined into 130,000 triangles), and the fragment shader is running up to 480,000 times! Luckily, modern graphics processors are tuned for massively parallel tasks, and can typically render this scene at 60 frames per second for smooth movement. For computers that still struggle, we added a checkbox that halves the resolution and makes everything run much faster.

In previous 3D projects, we had used Three.js, a library that papers over the sharp edges in the raw WebGL API. For this section, I decided to tackle the code myself. Although the results are gratifying, and writing shaders can be a lot of fun, the boilerplate for WebGL code is opaque and unlike any other web task. If you’re thinking of building a similar scene, our code is pretty well organized, but you may also want to look at the learning repo I built while I was prototyping this project. I recommend the experience: it’s not as hard as it seems, especially if you stick with a fairly simple rendering model (as I did).

Literal points of interest

As mentioned earlier, we wanted to add two layers of information: the recorded activity area for the wolves, and various individual points of interest (animal attacks, the cattle drop-off point, the wolves' den, and the ranchers' salt lick). We got this information from public records and from Lynda's sources, either directly as lat/long coordinates, or on maps detailed enough to place their locations.

For individual locations, we simply added the coordinates to our JavaScript bundle, scaled them to match the bounds of the map, and then passed those coordinates to the renderer as "points." In gl.POINTS mode, typically employed for particle systems, WebGL treats each coordinate as a billboard sprite: it always faces the user, and its size is configurable. A separate shader program draws semi-transparent PNG files as the texture for these sprites, thus placing our floating icons around the map.
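
A heavily condensed sketch of that drawing step might look like this, assuming gl is the WebGL context, pointProgram is an already-compiled shader program, and poiCoords is a flat array of scaled coordinates (all illustrative names):

// Sketch of drawing points of interest as point sprites.
var pointBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(poiCoords), gl.STATIC_DRAW);

gl.useProgram(pointProgram);
var a_position = gl.getAttribLocation(pointProgram, "a_position");
gl.enableVertexAttribArray(a_position);
gl.vertexAttribPointer(a_position, 3, gl.FLOAT, false, 0, 0);

// The vertex shader sets gl_PointSize for each sprite; the fragment shader
// samples the icon texture via gl_PointCoord and keeps its transparency.
gl.drawArrays(gl.POINTS, 0, poiCoords.length / 3);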

The completed height and heat map.

To load the wolf activity layer, however, we piggy-backed on data that we were already loading. Our heightmap image for the landscape included three values per pixel (red, green, and blue components), but we were only reading the red channel for elevation. In Photoshop, we split the channels, keeping elevation in the red component and using the blue component to store our heatmap. Then, during mesh generation, we read those values out and added them to a separate data buffer for the fragment shader. Depending on the runtime configuration, a highlight color is mixed in for vertices that contain wolf activity.
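
During mesh generation, that amounts to one extra read per pixel. A sketch, with illustrative names:

// Read both channels in one pass over the heightmap pixels:
// red drives elevation, blue marks wolf activity.
var pixels = ctx.getImageData(0, 0, gridSize, gridSize).data;
var heights = [];
var activity = [];

for (var i = 0; i < pixels.length; i += 4) {
  heights.push(pixels[i] / 255);      // red channel: elevation
  activity.push(pixels[i + 2] / 255); // blue channel: wolf activity heatmap
}

// `activity` gets uploaded as its own vertex buffer; the fragment shader
// mixes in the highlight color wherever its value is non-zero.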

Our camera system probably deserves a brief mention at this point, since it doesn’t help to place points of interest if you can’t look at them. In a serious 3D simulation, such as a big-budget video game, the camera is often based on a kind of math called quaternions. Unfortunately, higher math is not my strong point, so our camera is a cheat: we store its location and its target location, then compute a “look-at” matrix using the glMatrix library, which we also use for various coordinate transformations. If we were giving users direct control of the camera, this would probably be a recipe for failure, but it works well in our cinematic style.
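
In glMatrix terms, the whole cheat fits in a few lines; the coordinates and variable names below are made up for illustration:

// Sketch of the "cheat" camera: store a position and a target, and let
// glMatrix (mat4) build the view matrix. Coordinates are made up.
var view = mat4.create();
var projection = mat4.create();

var cameraPosition = [40, 80, 200]; // where the camera sits
var cameraTarget = [128, 0, 128];   // what it's pointed at
var up = [0, 1, 0];

mat4.lookAt(view, cameraPosition, cameraTarget, up);
mat4.perspective(projection, Math.PI / 4, canvas.width / canvas.height, 0.1, 1000);

// Both matrices are handed to the vertex shader as uniforms each frame.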

Leave it on the page

All of this technical detail is interesting, but how did we choose to make use of it in the story? Originally, we had intended to treat the landscape view as a strict timeline of events. We’ve done that before, such as for the Collateral Damage reporting on Washington fire line construction. It’s a tried-and-true organizational method.

But when Lynda’s story came in, a simple tick-tock of events felt choppy compared to the article’s prose. More importantly, the article text was already presenting the sequence of events for the reader — duplicating it in a blowout graphic seemed gratuitous. Over the weekend, I experimented with moving Lynda’s text directly into the graphic, side-by-side with the landscape. To my surprise, it was a great match!

In print, the piece is slightly different, accompanied by a static map instead. As far as I’m aware, this is the first big story that we’ve run with significant differences to the article text itself depending on where it was published, but I think that’s a good thing: it opens up new avenues for innovation online, while giving us a chance to still tell the best possible story for our print readers. I hope you’ll enjoy it wherever you find it.

Roaring to life: How we built our Elwha watershed package

When Lynda Mapes, our environment reporter, came to us with the story of how the Elwha watershed had been recovering after the world’s largest dam removal, we knew we wanted to present it in a way that would make a splash. The final result (GitHub repo) drew heavily on talents from all over the newsroom, ranging from the graphics team’s hand-drawn art to Lynda’s beautiful prose. On the digital side, there were three interesting JavaScript components that deserve more explanation: the watercolor animations, scroll-triggered effects, and FLIP transitions.

You otter be in pictures

The black-and-white base layer and the full-color "paint" layer of the river otter sketch.

We knew going into this project that we would want to feature the beautiful hand-drawn images of various critters in the Elwha watershed. The final technique is an adaptation of the "lead dust" effect that we used in our Loaded with Lead investigative report. Instead of a single texture that's used to "reveal" hidden text, we split each sketch into two layers: a base image containing only the black channel for shading, and a full-color image that gets painted on top. Using intentionally imperfect circles as our "brushes," and expanding those circles from randomized positions, produces a "watercolor" effect that adds a simple splash of life to the static images without feeling gratuitous or jarring.

Ultimately, what makes this effect possible (and performant) on a range of devices is the canvas createPattern() method, which takes an image and generates a fill pattern object. Most of the time, this is used with smaller images to add texture to a filled path, so that you don't need to draw and clip the pattern by hand. But in this case, the pattern is the same size as the canvas itself, meaning that we can use this to copy pieces of one image into the canvas in irregular patches, which is perfect for our watercolor wash. (source code)
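
Here's a rough sketch of the idea, with made-up brush sizes and counts; it assumes the base and color images are the same size as the canvas:

// Sketch of the watercolor wash. Assumes baseImage and colorImage match the
// canvas dimensions; brush sizes and counts are illustrative.
var ctx = canvas.getContext("2d");
ctx.drawImage(baseImage, 0, 0, canvas.width, canvas.height);

// Because the pattern matches the canvas size, filling any path with it
// copies the corresponding region of the color image over the base sketch.
ctx.fillStyle = ctx.createPattern(colorImage, "no-repeat");

function dab() {
  var x = Math.random() * canvas.width;
  var y = Math.random() * canvas.height;

  // An intentionally imperfect "circle": several overlapping arcs
  // jittered around a random center point.
  ctx.beginPath();
  for (var i = 0; i < 5; i++) {
    var radius = 15 + Math.random() * 30;
    ctx.arc(
      x + (Math.random() - 0.5) * radius,
      y + (Math.random() - 0.5) * radius,
      radius, 0, Math.PI * 2
    );
  }
  ctx.fill();
}

// A few dabs per animation frame gradually reveal the color layer.
for (var i = 0; i < 8; i++) dab();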

Scroll me maybe

Another key part of the presentation is that many animations trigger as they become visible, rewarding readers for scrolling down the page. This is a pretty typical part of news interactive UI these days, but it’s worth looking at how to do it without causing jank.

Our secret weapon for many scroll-triggered effects at the Seattle Times is getBoundingClientRect(), one of the many useful DOM methods that (surprisingly) originated in Internet Explorer. When called on any element, it returns an object with the coordinates for that element relative to the viewport. In other words, it tells you where the object is displayed right now, without having to walk up the DOM for offset values or check the current scroll position. This leads to easy visibility tests like this one:

var bounds = element.getBoundingClientRect();
if (bounds.top < window.innerHeight && bounds.bottom > 0) {
  // element is onscreen in some form
}

getBoundingClientRect() is fast and easy to use for a variety of purposes, and it's well-supported everywhere. While browsers continue to work to make scrolling smoother, it's still important to do as little work in scroll listeners as possible, especially when the page may contain many of them (as the Elwha package does). To keep the load down, we created a unified scroll notification queue for this project; the general shape of the idea is sketched below.
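
This is a simplified sketch, not the production module, and the function names here are our own shorthand: one scroll listener, one requestAnimationFrame tick, and any number of subscribers.

// Simplified sketch of a shared scroll queue (illustrative names):
// a single scroll listener batches all checks into one animation frame.
var subscribers = [];
var scheduled = false;

window.addEventListener("scroll", function() {
  if (scheduled) return;
  scheduled = true;
  requestAnimationFrame(function() {
    scheduled = false;
    subscribers.forEach(function(check) { check(); });
  });
});

// Each subscriber registers a callback that runs when its element is visible.
function watch(element, callback) {
  subscribers.push(function() {
    var bounds = element.getBoundingClientRect();
    if (bounds.top < window.innerHeight && bounds.bottom > 0) {
      callback(element);
    }
  });
}

Which brings us to our last animation feature…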

FLIP and forget it

I’m a big fan of Paul Lewis’ FLIP animation technique, which leverages the browser to perform smooth animations with as little JavaScript as possible. It’s worth reading the article for more details, but the acronym spells out the basic steps: get the first position, move to the last position, invert the difference through the use of transforms, and then play by removing the offsets.

One of our animated feature sections. The animals on the right swap into the main section with a smooth expand effect when clicked.

The nice thing about FLIP is that it's not only buttery-smooth, but it also adapts well to various lower-level page changes. In this case, when a user clicks on one of the sidebar items to swap between featured animals, we actually move the entire block between two different container elements for styling purposes. But because FLIP works entirely via on-screen coordinates and transforms, none of that matters. We only have to know the first and last positions to create the transform, and those are easy to get via our old pal getBoundingClientRect().

Although the code for FLIP isn’t hard to write, it comes up often enough (see also: EB-5 visas, teacher pay, and modern dating) that we’ve added a small module to our news app template to make it even easier. The function takes two arguments: an element that you want to animate, and a callback that’s executed between measurements. Just make whatever changes you want during the callback, and the module will figure out the difference and apply transforms to blend between the two. Feel free to take and adapt for your own purposes!
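
Here's an approximation of what that module does; the real version lives in our news app template, so treat this as a sketch of the FLIP steps rather than the actual code:

// Approximate FLIP helper: measure, mutate, invert, play.
function flip(element, change) {
  // First: where does the element start?
  var first = element.getBoundingClientRect();

  // Last: let the caller mutate the DOM, then measure again.
  change();
  var last = element.getBoundingClientRect();

  // Invert: transform the element back to its starting position and size.
  var dx = first.left - last.left;
  var dy = first.top - last.top;
  var sx = first.width / last.width;
  var sy = first.height / last.height;

  element.style.transition = "none";
  element.style.transformOrigin = "0 0";
  element.style.transform =
    "translate(" + dx + "px, " + dy + "px) scale(" + sx + ", " + sy + ")";

  // Force a style flush so the inverted transform is applied before we
  // start animating away from it.
  element.getBoundingClientRect();

  // Play: remove the offset and let CSS transition back to the new layout.
  requestAnimationFrame(function() {
    element.style.transition = "transform 300ms ease-out";
    element.style.transform = "";
  });
}

Calling it looks like flip(panel, function() { otherContainer.appendChild(panel); }), where panel and otherContainer stand in for whatever elements you're swapping.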