Whenever Lynda Mapes comes to the visuals department with a story idea, we sit up and take notice. So when she dropped by a few months back to talk about a story she was working on, about the conflict between wolves and cattle ranchers in Eastern Washington, we immediately started thinking about how to bring it to digital life. Last week, her story on WSU researcher Robert Wielgus went live with our WebGL-powered explainer embedded in the middle.

Here’s how we brought the story of the Profanity Peak wolf pack to our readers.

The hills are alive

The first step was to figure out where our story took place. We had a post-action report from the Washington Department of Fish and Wildlife that listed details on each cow attacked, including the latitude and longitude of the attack. A separate presentation included a heatmap of wolf activity from their GPS collars, which gave us some idea of their territory.

An incident report from WDFW’s post-action document.

Based on that information, I pulled digital elevation maps from the University of Washington’s Earth and Space Sciences department. To cover the area we were interested in, I stitched together eight elevation maps using QGIS and VTBuilder. The final output was a JPG heightmap, in which higher pixel values mean higher elevation.

Viewing the heightmap data in QGIS, with animal attack locations as dots.

But of course, having a heightmap doesn’t do any good if you don’t have an effective way to present it.

gl.drawMountains()

WebGL was first introduced back in 2011, with somewhat shaky support in Chrome and Firefox. An adaptation of the OpenGL ES 2.0 API, it lets web pages directly access the accelerated graphics hardware on a computer or smartphone, providing powerful tools for games and 3D visualizations.

Every WebGL program consists of three parts. First, there’s the JavaScript code that loads data and issues drawing commands. In the case of our Profanity Peak story, a script reads through the pixel data of the heightmap and converts it into a mesh of vertices in 3D space, joined into triangles. Triangles are common in 3D graphics because they’re the simplest possible surface shape: any three points define a flat plane. In addition to its XYZ coordinates, each vertex also gets a “normal vector” that represents the direction pointing away from the surface, which we can use for lighting.
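
In rough strokes, that conversion looks like the sketch below. The names and scale factor are hypothetical rather than our production code, and normal computation is left out for brevity.

```javascript
// Hypothetical sketch of heightmap-to-mesh conversion, not our production code.
// Assumes `image` is an already-loaded heightmap <img>, with elevation stored
// in the red channel. Normal computation is omitted for brevity.
function buildMesh(image) {
  var canvas = document.createElement("canvas");
  canvas.width = image.width;
  canvas.height = image.height;
  var context = canvas.getContext("2d");
  context.drawImage(image, 0, 0);
  var pixels = context.getImageData(0, 0, canvas.width, canvas.height).data;

  var positions = [];
  var indices = [];
  var verticalScale = 0.2; // arbitrary vertical exaggeration

  for (var y = 0; y < canvas.height; y++) {
    for (var x = 0; x < canvas.width; x++) {
      // each pixel is four bytes (RGBA); red holds elevation
      var elevation = pixels[(y * canvas.width + x) * 4] * verticalScale;
      positions.push(x, elevation, y);
    }
  }

  // join neighboring vertices into two triangles per grid cell
  for (var y = 0; y < canvas.height - 1; y++) {
    for (var x = 0; x < canvas.width - 1; x++) {
      var i = y * canvas.width + x;
      indices.push(i, i + canvas.width, i + 1);
      indices.push(i + 1, i + canvas.width, i + canvas.width + 1);
    }
  }

  return { positions: positions, indices: indices };
}
```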

Once the JavaScript program sends the mesh to the graphics card for rendering, the other two parts of WebGL kick in. These are the vertex and fragment shaders, which convert triangles into actual onscreen imagery.
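
Handing the mesh to the graphics card is mostly buffer bookkeeping. A simplified version, assuming `gl` is a WebGL context, `program` is an already-compiled shader program, and `mesh` came from the sketch above, looks something like this:

```javascript
// Simplified sketch of sending the mesh to the GPU and drawing it; not our
// production code.
gl.useProgram(program);

var positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(mesh.positions), gl.STATIC_DRAW);

// 16-bit indices are enough for a 255×255 grid (65,025 vertices)
var indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(mesh.indices), gl.STATIC_DRAW);

// wire the position buffer up to the vertex shader's "a_position" attribute
var positionLocation = gl.getAttribLocation(program, "a_position");
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);

// draw every triangle listed in the index buffer
gl.drawElements(gl.TRIANGLES, mesh.indices.length, gl.UNSIGNED_SHORT, 0);
```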

The vertex shader is responsible for projecting 3D points onto the 2D canvas. To make that happen, we compute a combination of transformations that represents the camera’s position and orientation in space, and the vertex shader runs each vertex’s coordinates through that transformation pipeline.
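
A minimal vertex shader along those lines, written as a GLSL string (the attribute and uniform names here are illustrative, not our production code), might read:

```javascript
// Illustrative vertex shader, passed to WebGL as a string. The camera and
// perspective transforms are combined in JavaScript and handed in as a
// single uniform matrix.
var vertexShaderSource = `
  attribute vec3 a_position;
  attribute vec3 a_normal;
  uniform mat4 u_projectionView; // projection * view, computed with glMatrix
  varying vec3 v_normal;
  varying float v_elevation;

  void main() {
    v_normal = a_normal;
    v_elevation = a_position.y;
    gl_Position = u_projectionView * vec4(a_position, 1.0);
  }
`;
```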

Once the mesh is placed in view, the fragment shader runs for each pixel covered by a visible triangle. Ours combines a number of factors to determine the landscape’s color (a simplified sketch follows this list):

  • Surface direction (the normals we computed earlier) in relation to the global lighting
  • Elevation (lower coordinates are darker and greener)
  • Fog (distant pixels are less saturated and lighter)
  • Some random noise dithering, to break up visible triangle edges
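
Put together, a stripped-down version of that shader might look something like this. The colors, fog curve, and noise function below are placeholders, not our production values.

```javascript
// Simplified fragment shader sketch combining lighting, elevation, fog, and
// dithering. Uniform values and color constants are placeholders.
var fragmentShaderSource = `
  precision mediump float;

  uniform vec3 u_lightDirection; // normalized global light direction
  varying vec3 v_normal;
  varying float v_elevation;

  // cheap pseudo-random noise based on screen position
  float noise(vec2 coord) {
    return fract(sin(dot(coord, vec2(12.9898, 78.233))) * 43758.5453);
  }

  void main() {
    // 1. lighting: how directly does this surface face the light?
    float light = max(dot(normalize(v_normal), u_lightDirection), 0.0);

    // 2. elevation: blend from dark green valleys toward pale peaks
    vec3 low = vec3(0.15, 0.3, 0.1);
    vec3 high = vec3(0.75, 0.7, 0.6);
    vec3 color = mix(low, high, clamp(v_elevation / 100.0, 0.0, 1.0));
    color *= 0.4 + 0.6 * light;

    // 3. fog: fade distant pixels toward a light sky color
    float fog = clamp(gl_FragCoord.z * 1.5 - 0.5, 0.0, 1.0);
    color = mix(color, vec3(0.85, 0.88, 0.9), fog);

    // 4. dither: a little noise to break up visible triangle edges
    color += (noise(gl_FragCoord.xy) - 0.5) * 0.02;

    gl_FragColor = vec4(color, 1.0);
  }
`;
```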

Viewing the landscape in “wireframe mode” for debugging.

Let’s say that a 255×255 landscape grid fills the viewport (possible, for some of the camera positions in our page), which is conservatively sized at 800×600 pixels. That means the vertex shader is computing the position of roughly 65,000 vertices (joined into about 130,000 triangles), and the fragment shader is running up to 480,000 times per frame! Luckily, modern graphics processors are tuned for massively parallel tasks, and can typically render this scene at 60 frames per second for smooth movement. If your computer still struggles, there’s a checkbox that halves the resolution, which makes it run much faster.

In previous 3D projects, we had used Three.js, a library that papers over the sharp edges in the raw WebGL API. For this section, I decided to tackle the code myself. Although the results are gratifying, and writing shaders can be a lot of fun, the boilerplate for WebGL code is opaque and unlike any other web task. If you’re thinking of building a similar scene, our code is pretty well organized, but you may also want to look at the learning repo I built while I was prototyping this project. I recommend the experience: it’s not as hard as it seems, especially if you stick with a fairly simple rendering model (as I did).

Literal points of interest

As mentioned earlier, we wanted to add two layers of information: the recorded activity area for the wolves, and various individual points of interest (animal attacks, the cattle drop-off point, the wolves’ den, and the ranchers’ salt lick). We got this information from public records and from Lynda’s sources, either directly as lat/long coordinates or from maps detailed enough to place each location.

For individual locations, we simply added the coordinates to our JavaScript bundle, scaled them to match the bounds of the map, and then passed them to the renderer as “points.” In gl.POINTS mode, which is typically employed for particle systems, WebGL treats each coordinate as a billboard sprite: it always faces the user, and its size is configurable. A separate shader program draws semi-transparent PNG files as the texture for these sprites, placing our floating icons around the map.
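
That pass is roughly sketched below; the shader variable names, sprite size, and `pointCount` are illustrative assumptions rather than our production code.

```javascript
// Rough sketch of the point-sprite pass for the map icons.
var pointVertexSource = `
  attribute vec3 a_position;
  uniform mat4 u_projectionView;

  void main() {
    gl_Position = u_projectionView * vec4(a_position, 1.0);
    gl_PointSize = 32.0; // sprite size in pixels; always faces the camera
  }
`;

var pointFragmentSource = `
  precision mediump float;
  uniform sampler2D u_icon; // semi-transparent PNG texture

  void main() {
    // gl_PointCoord gives 0..1 coordinates across the sprite's square
    vec4 icon = texture2D(u_icon, gl_PointCoord);
    if (icon.a < 0.1) discard; // skip fully transparent pixels
    gl_FragColor = icon;
  }
`;

// after uploading the scaled lat/long coordinates to a buffer:
gl.drawArrays(gl.POINTS, 0, pointCount);
```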

The completed height and heat map.

To load the wolf activity layer, however, we piggy-backed on data we were already loading. Our heightmap image for the landscape included three values per pixel (red, green, and blue components), but we were only reading the red channel for elevation. In Photoshop, we split the channels, keeping elevation in the red component and using the blue component to store our heatmap. Then, during mesh generation, we read out those blue values and added them to a separate data buffer for the fragment shader. Depending on the runtime configuration, a highlight color is mixed in for vertices that contain wolf activity.
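
In the hypothetical mesh-generation loop sketched earlier, that amounts to reading one more byte per pixel:

```javascript
// Sketch of reading the blue (wolf-activity) channel during mesh generation.
// Variable names follow the earlier hypothetical loop, not our production code.
var activity = [];
for (var y = 0; y < canvas.height; y++) {
  for (var x = 0; x < canvas.width; x++) {
    var offset = (y * canvas.width + x) * 4;        // 4 bytes per pixel (RGBA)
    var elevation = pixels[offset] * verticalScale; // red channel
    activity.push(pixels[offset + 2] / 255);        // blue channel, 0..1
    positions.push(x, elevation, y);
  }
}

// The activity values go into their own buffer and reach the fragment shader
// as a varying, where something like this mixes in the highlight:
//   color = mix(color, u_highlightColor, v_activity * u_showActivity);
```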

Our camera system probably deserves a brief mention at this point, since it doesn’t help to place points of interest if you can’t look at them. In a serious 3D simulation, such as a big-budget video game, the camera is often based on a kind of math called quaternions. Unfortunately, higher math is not my strong point, so our camera is a cheat: we store its location and its target location, then compute a “look-at” matrix using the glMatrix library, which we also use for various coordinate transformations. If we were giving users direct control of the camera, this would probably be a recipe for failure, but it works well in our cinematic style.
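
With glMatrix, that “cheat” comes down to a few lines. The field of view, positions, and up vector below are made-up values, and the sketch assumes glMatrix’s mat4 module is in scope.

```javascript
// Minimal look-at camera sketch using glMatrix; all values are made up.
var projection = mat4.create();
var view = mat4.create();
var projectionView = mat4.create();

// perspective(out, vertical field of view, aspect ratio, near, far)
mat4.perspective(projection, Math.PI / 4, 800 / 600, 0.1, 1000);

// lookAt(out, camera position, target position, up vector)
mat4.lookAt(view, [120, 80, 200], [128, 0, 128], [0, 1, 0]);

// combined matrix handed to the vertex shader as u_projectionView
mat4.multiply(projectionView, projection, view);
```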

Leave it on the page

All of this technical detail is interesting, but how did we choose to make use of it in the story? Originally, we had intended to treat the landscape view as a strict timeline of events. We’ve done that before, such as for the Collateral Damage reporting on Washington fire line construction. It’s a tried-and-true organizational method.

But when Lynda’s story came in, a simple tick-tock of events felt choppy compared to the article’s prose. More importantly, the article text was already presenting the sequence of events for the reader — duplicating it in a blowout graphic seemed gratuitous. Over the weekend, I experimented with moving Lynda’s text directly into the graphic, side-by-side with the landscape. To my surprise, it was a great match!

In print, the piece is slightly different, accompanied by a static map instead. As far as I’m aware, this is the first big story we’ve run with significant differences in the article text itself depending on where it was published, but I think that’s a good thing: it opens up new avenues for innovation online, while still letting us tell the best possible story for our print readers. I hope you’ll enjoy it wherever you find it.