Author: Thomas Wilburn

How we adapted the LA Times map maker

When news breaks, The Seattle Times faces a common problem: we want visual elements for the story, but photographers may not be on the scene yet. For this, we’ve often turned to maps from the graphics department, but these are labor-intensive to build, even with our fantastic crew of news artists. Services to generate maps are expensive. What’s a small, penny-pinching newsroom to do?

My first attempt at solving this problem was a simple Leaflet-based app. The styling wasn’t great, and due to browser security restrictions, people still had to manually screenshot the window in order to grab an image. It was, in short, a total hack. That’s why I was thrilled to discover the work that the LA Times Data Desk has been doing with their Web Map Maker project. Using OpenStreetMap data and vector tiles from MapZen, their tool is capable of rendering directly to a canvas, and then saving that image–no screenshot required!

I forked the project pretty much right away, and started messing around with it. Eventually, an evening in Sublime turned into a substantial rewrite, with two main goals: to match Seattle Times code style, and to simplify it by taking advantage of new browser features. In this post, I’d like to walk through some of the changes that we made–not as a criticism of the LA Times’ work, which I’m still in awe of, but as a way to explore how modern JavaScript can make a real-world application simpler and easier to maintain.

async/await

The first thing I noticed about the web map maker was the downloadIMG() function, which does the actual image capture. At more than 100 lines, it’s a monster, but a necessary one: it combines canvas rendering for the base map, html2canvas to grab popups and other map elements, and a chunk of custom code to draw any SVG elements loaded from GeoJSON. Web maps are complicated!

Compounding this problem is that, like a lot of legacy JavaScript, the code is callback-heavy. downloadIMG() becomes more and more indented as it moves through various stages, which is hard to maintain. Untangling this rendering process made a lot of sense as a starting point for the refactor, and using async/await was a natural method for taming those wild callbacks.

The async and await keywords are new to ES2017, but you can use them in evergreen browsers like Chrome and Firefox now. They’re syntactic sugar for the then() method of JavaScript Promises: inside of a function marked with async, you can “pause” the function using await instead of passing a callback to then(). For example, this:

var asyncOperation = functionThatReturnsAPromise();
asyncOperation.then(function(result) {
  return doSomeWork(result);
}).then(function(nextResult) {
  return doMoreAsyncWork(nextResult);
});

can be simplified into this:

// (this must run inside a function marked async)
var firstResult = await functionThatReturnsAPromise();
var secondResult = await doSomeWork(firstResult);
await doMoreAsyncWork(secondResult);

At each await, the browser converts your function into a callback for you, even in the middle of an expression or a variable assignment. The resulting code no longer requires all the function boilerplate and extra indentation. It can be “read” as synchronous, even though in reality it might take significant time to execute.

Breaking downloadIMG() into several promise-based functions made it much easier to understand and debug, since skipping a step became as simple as commenting out an await-ed function call. The new code has actually gotten a little longer after being broken up into sub-tasks, but the readability is much higher now. I was also able to move it out into its own module, which strengthens the distinction between logic and UI in the app structure.
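
For illustration, the refactored capture flow looks roughly like the sketch below. The helper names (renderBasemap, renderGeojson, renderOverlays) are hypothetical stand-ins for the real sub-tasks, not the actual functions in the repo:

// Rough sketch of the refactored capture flow; helper names are hypothetical.
// Each helper returns a Promise, so any step can be skipped during debugging
// by commenting out its await.
async function captureMap(mapElement) {
  var canvas = document.createElement("canvas");
  canvas.width = mapElement.clientWidth;
  canvas.height = mapElement.clientHeight;
  var context = canvas.getContext("2d");

  await renderBasemap(mapElement, context);   // vector tiles onto the canvas
  await renderGeojson(mapElement, context);   // SVG shapes loaded from GeoJSON
  await renderOverlays(mapElement, context);  // popups and labels, via html2canvas

  return canvas.toDataURL("image/png");
}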

No jQuery

jQuery is great, but at the Times we have a policy of avoiding it unless we need at least three of its hard-to-shim features, like JSONP requests or event delegation. Modern browser APIs have learned a lot from jQuery, and integrated many of its features, to the point where you might not need its help.

The original map maker code used jQuery extensively, but most of the cleanup was pretty straightforward:

  • perform DOM queries with a call to document.querySelectorAll(), aliased to $
  • use classList instead of addClass() and removeClass() calls
  • addEventListener() instead of on()

A common jQuery pattern is that it acts on every element matched by a selector, whether that’s one, many, or zero. Plain DOM methods aren’t quite that convenient, but ES2015’s arrow functions provide a close analog:

// jQuery version
$(".toggle-element").addClass("enabled");

// qsa() returns an array from document.querySelectorAll
qsa(".toggle-element").forEach(el => el.classList.add("enabled"));

It’s a little clumsier, but it saves us the cost of loading 32KB of code for jQuery, which contributes to faster page load. jQuery UI adds another 55KB, most of which is unused–the project only needs the resize functionality for the map’s drag handle. I wrote a small vanilla JS module to do the resizing instead, dispatching a custom DOM event whenever the element’s dimensions change so that the main module can keep listening for updates.
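
The heart of that module is just a pair of mouse listeners and a CustomEvent. A simplified sketch (the event name here is an illustrative choice, not necessarily what the repo uses):

// Simplified resize handle: dragging updates the element's size, then a
// custom event notifies anyone listening (the event name is illustrative).
var makeResizable = function(element, handle) {
  handle.addEventListener("mousedown", function(e) {
    e.preventDefault();
    var onMove = function(e) {
      var bounds = element.getBoundingClientRect();
      element.style.width = e.clientX - bounds.left + "px";
      element.style.height = e.clientY - bounds.top + "px";
      element.dispatchEvent(new CustomEvent("resizedrag", {
        detail: { width: element.clientWidth, height: element.clientHeight }
      }));
    };
    var onUp = function() {
      document.removeEventListener("mousemove", onMove);
      document.removeEventListener("mouseup", onUp);
    };
    document.addEventListener("mousemove", onMove);
    document.addEventListener("mouseup", onUp);
  });
};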

Eliminating ~90KB of code may not seem like a lot, but on lower-powered devices, that can shave ~90ms off the page load just from the cost of code parsing and module execution. It also means we no longer load the CSS required for jQueryUI (which, ironically, the original app did not use anywhere). And speaking of CSS…

Flexbox instead of Bootstrap

This is another technique that’s becoming conventional wisdom as browsers improve: you might not need Bootstrap! The Flexbox specification for CSS lets us define a page in terms of vertical or horizontal “flex containers,” and force the children of those containers to stretch or align on either axis (finally, easier vertical centering is here). The original map maker used Bootstrap for some button group styles, but otherwise didn’t really rely on it. Swapping Flexbox in for positioning, and styling those buttons manually, turned out to be a pretty easy fix.

Destructuring and for/of loops

Finally, although it’s not a big change, I did take advantage of destructuring assignment throughout the app. For example, when pulling the width and height out of the resizer’s event:

var { width, height } = event.detail;

Or even when importing modules from my async-based loader:

var [ qsa, resizer, render ] = await install.batch("qsa", "resizer", "render");

And when combined with the new value-oriented loops and iterables in ES2015, it eliminates some boilerplate from checking the current dimensions against existing size presets:

for (var [w, h] of Object.values(sizePresets)) {
  if (w == width && h == height) custom = false;
}

This kind of unpacking behavior is common in Python or PHP, so it’s a relief to finally have it in JavaScript. It reduces the assignment code you need to write, and makes it clearer which properties you’re going to use from a given object. In some cases, it can also make tight loops faster, since local variables are quicker than properties accessed through the dot operator.

Final thoughts

Internal tools like this are a great chance to play around with new browser features, because we can set stronger minimum requirements for users. However, you can also publish code that uses many of these new JavaScript toys if you’re willing to set up a transpiler like Babel in your build process. We’ve integrated transpilation as a part of our standard news app template at the Times, and it has made a lot of our code much more pleasant to read and write. I highly recommend giving it a shot.

I’d like to close by again noting how incredibly grateful I am that the LA Times decided to open-source their tools, and that LAT map guru Jon Schleuss was so gracious about sharing with us. We’re actually still working on the project styles, and hoping to use it in production soon. You can check out the repo and watch our progress here.

Unfortunately, it’s likely that I’ve refactored so much that it’ll be difficult to pull changes back into their master branch. I hope that by spelling out the changes in this post, it’ll be easier to figure out which parts are most useful, and which ones are just me showing off. Projects like this really drive home just how important open-source development can be for the news nerd community, since we’re much stronger together than we are as individual newsrooms.

Farewell, Audrey!

I’m proud to announce that Audrey Carlsen is headed to the New York Times to join their graphics team. I’m saddened as well, of course: Audrey has been a valuable member of the Interactives Team here at The Seattle Times for two years now. She’s responsible for many of our most popular and innovative projects, including the Seahawks personality quiz, the animated history of Ravensdale, and (last, but certainly not least) Under our skin. She also lent her reporting talents to the paper, with pieces on growing up multiracial in Seattle and the “blue wall” of Washington politics.

Like a lot of data journalists, Audrey came to the team from a non-traditional background, without formal degrees in journalism or computer science. She flirted with a career in medicine, turned her hand to science blogging for public radio, then returned to Seattle to pick up web development skills at the Ada Developer Academy bootcamp. A loss for medicine was ultimately our gain, as she hit the ground running here at the Times and never looked back.

Audrey created not only a collection of great digital journalism, but also activism for a more inclusive, community-oriented newsroom. She has been a champion for diversity in our coverage, and a thoughtful listener for concerns both inside and outside of the Times. As a member of the Asian American Journalists Association, she helped assemble statistics on newsroom diversity across the Seattle area. Instead of taking her job for granted, Audrey used that opportunity to advocate for less-attended voices. I have been, and will continue to be, inspired by her example.

But enough of the grand plaudits and somber encomiums: Audrey has never given up hope for an office corgi. She has a big laugh you can hear clear across the room, and is aggressively unamused by regular expression syntax. She makes a hell of an ambrosia, and she always added the necessary padding to my CSS. She’ll be missed, but great things are no doubt in store from her. I hope you’ll join me in thanking Audrey for her many contributions to The Seattle Times, and wishing her luck in the Big Apple.

Play her off, Keyboard Cat.

Roaring to life: How we built our Elwha watershed package

When Lynda Mapes, our environment reporter, came to us with the story of how the Elwha watershed had been recovering after the world’s largest dam removal, we knew we wanted to present it in a way that would make a splash. The final result (GitHub repo) drew heavily on talents from all over the newsroom, ranging from the graphics team’s hand-drawn art to Lynda’s beautiful prose. On the digital side, there were three interesting JavaScript components that deserve more explanation: the watercolor animations, scroll-triggered effects, and FLIP transitions.

You otter be in pictures

Black and white river otter sketch (the base layer)

Full color otter sketch (the color “paint” layer)

We knew going into this project that we would want to feature the beautiful hand-drawn images of various critters in the Elwha watershed. The final technique is an adaptation of the “lead dust” effect that we used in our Loaded with Lead investigative report. Instead of a single texture that’s used to “reveal” hidden text, we split the sketches into two layers: one base image that contains only the black channel for shading, and another full-color image that gets painted on top. By using intentionally imperfect circles as our “brushes,” and expanding those circles from randomized positions, the resulting “watercolor” effect adds a simple splash of life to the static images without feeling gratuitous or jarring.

Ultimately, what makes this effect possible (and performant) on a range of devices is the canvas createPattern() method, which takes an image and generates a fill pattern object. Most of the time, this is used with smaller images to add texture to a filled path, so that you don’t need to draw and clip the pattern by hand. But in this case, the pattern is the same size as the canvas itself, meaning that we can use this to copy pieces of one image into the canvas in irregular patches–perfect for our watercolor wash. (source code)
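
A stripped-down sketch of the technique (the image and canvas variables are placeholders, and the real version uses the intentionally imperfect brush shapes described above): draw the black-and-white base, turn the full-color image into a pattern, and then every circle you fill copies the matching patch of color into place.

// Watercolor wash sketch: the color image becomes a canvas-sized fill pattern,
// so each filled circle copies that patch of color onto the black-and-white base.
var context = canvas.getContext("2d");
context.drawImage(baseImage, 0, 0);

context.fillStyle = context.createPattern(colorImage, "no-repeat");

var splashes = 0;
var splash = function() {
  var x = Math.random() * canvas.width;
  var y = Math.random() * canvas.height;
  var radius = 10 + Math.random() * 40;
  context.beginPath();
  context.arc(x, y, radius, 0, Math.PI * 2);
  context.fill();
  if (++splashes < 300) requestAnimationFrame(splash);
};
requestAnimationFrame(splash);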

Scroll me maybe

Another key part of the presentation is that many animations trigger as they become visible, rewarding readers for scrolling down the page. This is a pretty typical part of news interactive UI these days, but it’s worth looking at how to do it without causing jank.

Our secret weapon for many scroll-triggered effects at the Seattle Times is getBoundingClientRect(), one of the many useful DOM methods that (surprisingly) originated in Internet Explorer. When called on any element, it returns an object with the coordinates for that element relative to the viewport. In other words, it tells you where the object is displayed right now, without having to walk up the DOM for offset values or check the current scroll position. This leads to easy visibility tests like this one:

var bounds = element.getBoundingClientRect();
if (bounds.top < window.innerHeight && bounds.bottom > 0) {
  //element is onscreen in some form
}

getBoundingClientRect() is fast and easy to use for a variety of purposes, and it’s well-supported everywhere. While browsers continue to work to make scrolling smoother, it’s still important to do as little work in scroll listeners as possible, especially when the page may contain many of them (as the Elwha package does). We created a unified scroll notification queue for this project to try to keep the load down (a simplified sketch is below), and then it’s on to our last animation feature…
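
The idea is a single scroll listener that checks each registered element and fires its callback once the element comes into view (the watchScroll name and one-shot behavior here are illustrative, not the project’s exact API):

// One scroll listener for the whole page: registered elements get their
// callback fired once when they scroll into view (simplified sketch).
var watched = [];

var onScroll = function() {
  watched = watched.filter(function(item) {
    var bounds = item.element.getBoundingClientRect();
    if (bounds.top < window.innerHeight && bounds.bottom > 0) {
      item.callback(item.element);
      return false; // fired, stop watching
    }
    return true;
  });
};

var watchScroll = function(element, callback) {
  watched.push({ element: element, callback: callback });
};

window.addEventListener("scroll", onScroll);
onScroll(); // catch anything already visible on load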

FLIP and forget it

I’m a big fan of Paul Lewis’ FLIP animation technique, which leverages the browser to perform smooth animations with as little JavaScript as possible. It’s worth reading the article for more details, but the acronym spells out the basic steps: get the first position, move to the last position, invert the difference through the use of transforms, and then play by removing the offsets.

One of our animated feature sections. The animals on the right swap into the main section with a smooth expand effect when clicked.

The nice thing about FLIP is that it’s not only buttery-smooth, but it also adapts well to various lower-level page changes. In this case, when a user clicks on one of the sidebar items to swap between featured animals, we actually move the entire block between two different container elements for styling purposes. But because FLIP works entirely via on-screen coordinates and transforms, none of that matters. We only have to know the first and last positions to create the transform, and those are easy to get via our old pal getBoundingClientRect().

Although the code for FLIP isn’t hard to write, it comes up often enough (see also: EB-5 visas, teacher pay, and modern dating) that we’ve added a small module to our news app template to make it even easier. The function takes two arguments: an element that you want to animate, and a callback that’s executed between measurements. Just make whatever changes you want during the callback, and the module will figure out the difference and apply transforms to blend between the two. Feel free to take and adapt for your own purposes!
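
In spirit, the helper looks something like this (a simplified sketch that relies on a CSS transition, not the exact code in our template):

// FLIP sketch: measure First, let the callback make changes, measure Last,
// Invert the difference with a transform, then Play by removing it.
var flip = function(element, change) {
  var first = element.getBoundingClientRect();
  change(); // swap containers, toggle classes, move the element, etc.
  var last = element.getBoundingClientRect();

  element.style.transition = "none";
  element.style.transformOrigin = "0 0";
  element.style.transform =
    "translate(" + (first.left - last.left) + "px, " + (first.top - last.top) + "px) " +
    "scale(" + first.width / last.width + ", " + first.height / last.height + ")";

  void element.offsetWidth; // force a style flush so the inverted state renders

  requestAnimationFrame(function() {
    element.style.transition = "transform 300ms ease-out";
    element.style.transform = "";
  });
};

In the Elwha piece, the callback is where the featured block gets moved between its two containers; FLIP only cares about where the element started and where it ended up.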

18 holes in 150,000 polygons: A topographic map for the U.S. Open

This year’s U.S. Open is being held at Chambers Bay, just south of Seattle. It’s a unique course for many reasons: its fescue grass changes the way the ball bounces, its wild elevation shifts and terrain require creative approaches to each hole, and there’s only a single tree on the entire course. We wanted to let readers tour Chambers Bay right from their web browser, so we got in touch with Robert Trent Jones II, the course architect, and asked if we could use the elevation data.

Luckily, they said yes! Unluckily, they provided the elevations as a rather hefty 15MB AutoCAD file. We didn’t have an AutoCAD license, but a trial version works for 30 days, which gave us a deadline on figuring out how to convert it to something more web-friendly.

Chambers Bay as viewed in AutoCAD using the 5-foot elevation contours

The second problem was that the file didn’t contain actual polygons, which we would need to render the course in a browser. Instead, it was a set of unconnected contour lines: they were at the right heights and looked great, but didn’t form shapes that a computer could use. I looked into several solutions — most of which were miserable failures — and was about to try writing an AutoLISP script to extract elevations into a file when an architect friend suggested using Google’s SketchUp. Some simplification and a pass with the “drape” function later, we had ourselves a mesh.

The course mesh as generated in SketchUp

Excelsior!

(Incidentally, SketchUp starts every new project with what looks like Stan Lee standing awkwardly in the middle of an empty green plane. This isn’t related to our project at all, I just think it’s weird.)

For this project, we used the industry-standard three.js library for creating our WebGL scene. In addition to handling the camera and object positioning, three.js made it easy to put together the various props (flags, golf balls, tree) that we needed. It also has a great web-based editor that I could use to prototype object placement and try out various model formats.
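
Getting a scene on screen takes surprisingly little code. A minimal sketch (the width, height, canvas, and loaded courseMesh variables are placeholders, and the real project adds props, lighting tweaks, and a custom shader):

// Minimal three.js setup: scene, camera, renderer, and a render loop.
// width, height, canvas, and courseMesh are placeholders defined elsewhere.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, width / height, 1, 10000);
var renderer = new THREE.WebGLRenderer({ canvas: canvas, antialias: true });
renderer.setSize(width, height);

scene.add(courseMesh);
scene.add(new THREE.DirectionalLight(0xffffff, 1));

var animate = function() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
};
animate();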

Once we had the landscape loading, our next job was to texture it by applying color from an image to make it look real. Unlike most 3D models, we didn’t have coordinates at each vertex for the texture position (nor did we have a texture to use). But we did have a PDF of the AutoCAD model prior to mesh conversion. Our graphics team was also struggling with the USGA course diagrams, which didn’t include all the hills and other elevation features. Graphic designer Garland Potts and I worked out a deal: she used our CAD diagram to create a more detailed print graphic, then handed back a shaded and textured image for us to apply to the landscape.

Before and after images of our texture map

I wrote a custom WebGL shader to “project” this image down onto the terrain, then added lighting and some dithering to make up for the low resolution. This approach to texturing wouldn’t be appropriate for anything with a lot of vertical surfaces, but the relatively gentle slopes of Chambers Bay are more tolerant.

Finally, over about a week, I placed the tees and holes in the landscape, then set up camera positions for each one. Every hole gets a flyover tour, which usually just consists of moving the camera up and over from tee to green. For several, however, we hand-coded camera movements to draw attention to course features, or to follow the path that the ball would have to take.

Looking back at the project, there are several improvements we’d like to make. The landscape file is enormous, and could probably be culled to reduce size without removing visual detail. We also never added a free camera to the experience, and it would have been nice to have more information about the ideal path of the ball. But overall, I’m very happy with how it turned out.

More importantly, I’m really looking forward to applying this to other projects, where being able to explore something spatially is an asset to our storytelling. For example, stories about large-scale redevelopment, our local mountain ranges or distinctive machinery (see: tunnel digger Bertha) are all excellent potential 3D experiences. Now that we have more experience with the 3D workflow, it’s just another technique in our repertoire for data journalism.

May day mayhem: how we covered Seattle’s protests

Although I wasn’t here in Seattle for the 1999 globalization protests, or the 2012 vandalism, it’s hard to miss the city’s regular May Day “festivities.” About a week ago, the digital team started thinking about how to track and collect information from reporters as they followed activities in the city. A map is a natural way to collect this data, but I didn’t want to manually manage geotagging and attaching media. Was there some way to pull this information directly from social networks?

My first stop was NPM to see if there was a module for Twitter streams, which (as it happens) there was. Hooking into the streaming API gave us real-time updates with a variety of filters available, including geotagged location and text search, while using the regular REST API let us backfill for tweets we missed and look up users for more information.

We’re wary about relying on user content anyway, but it quickly became clear that most people don’t enable location services on Twitter, and even fewer use the “precise” option, which must be re-enabled for each tweet. Clearly, we couldn’t just rely on the general population to feed our map. Our new plan was to teach reporters and photographers how to tag their locations, and then follow a real-time stream of whitelisted accounts.
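
On the server, the streaming setup is only a handful of lines. Here’s a sketch assuming the “twitter” client module from NPM; the module choice, the credential handling, and the saveAndRebuild() handler are illustrative assumptions, not necessarily what our index.js does:

// Stream tweets from a whitelist of accounts (sketch; assumes the "twitter"
// NPM module, with credentials and the reporter ID list supplied elsewhere).
var Twitter = require("twitter");
var client = new Twitter({
  consumer_key: process.env.TWITTER_KEY,
  consumer_secret: process.env.TWITTER_SECRET,
  access_token_key: process.env.TWITTER_TOKEN,
  access_token_secret: process.env.TWITTER_TOKEN_SECRET
});

// "follow" takes a comma-separated list of user IDs, not screen names
var stream = client.stream("statuses/filter", { follow: reporterIDs.join(",") });

stream.on("data", function(tweet) {
  if (tweet.coordinates) saveAndRebuild(tweet); // hypothetical handler: store and re-run the build
});
stream.on("error", console.error);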

I quickly put together a map-and-stream prototype to demonstrate that the concept was feasible, and during our planning meeting I fired off a quick post from a temporary test account. Luckily, the process worked, and immediately a dot appeared on the map with this picture of our charismatic digital and photo staffers:

One advantage we had in quickly assembling this map was our leaflet-map component, which assembles a map from a domain-specific set of custom elements. Because it handles setting up the Leaflet instance and populating it from markup, I could output the initial state of the map straight from our HTML templates (including zoom, position, tile layers, and map markers) instead of having to write error-prone initialization code. The same JSON file that feeds that template is also uploaded to S3, and client-side code checks it every 15 seconds for updates, so readers don’t have to refresh the page. In fact, the interactive team watched the march’s progress all day, just by leaving the page open.
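
The client-side polling is nothing fancy; a simplified sketch (the JSON filename and the updateMap() function are stand-ins for the real template output and marker-refresh code):

// Check the published JSON every 15 seconds and refresh markers (sketch).
var poll = function() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "tweets.json?" + Date.now()); // cache-bust the S3 file
  xhr.onload = function() {
    updateMap(JSON.parse(xhr.responseText));
  };
  xhr.send();
};
poll();
setInterval(poll, 15 * 1000);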

It was important for readers to be able to instantly see where the action currently was, and how the marches moved across the city. We took advantage of the fact that we run JavaScript on both the client and server to share the aging algorithm between the two, and then set a class on map markers with the result. Newer tweets are more saturated, and those with video got a different color to differentiate them from text and images.
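
The shared aging code boils down to a function from timestamp to a CSS class, something like this (the thresholds and class names are illustrative):

// Runs in both Node and the browser: bucket tweets by age so CSS can fade
// older markers (thresholds and class names are illustrative).
var ageClass = function(createdAt) {
  var minutes = (Date.now() - new Date(createdAt)) / 1000 / 60;
  if (minutes < 15) return "fresh";
  if (minutes < 60) return "recent";
  return "stale";
};

// e.g., marker.className = "tweet-marker " + ageClass(tweet.created_at);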

If you’re interested in putting together your own Twitter map, feel free to check out the source code for the map on GitHub. It’s based on our news app template, but with an additional index.js script that connects to Twitter and re-runs the build whenever there are updates. We also use the sqlite3 module to store tweets between server restarts, which is something we haven’t done before. Integrating a database works well, but locks us to a single machine for deployments.
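
The storage side is only a few calls to the sqlite3 module; a sketch of the approach (the table layout is an assumption, not the project’s actual schema):

// Persist tweets across server restarts with the sqlite3 module (sketch;
// the schema here is an illustrative assumption).
var sqlite3 = require("sqlite3");
var db = new sqlite3.Database("tweets.db");

db.run("CREATE TABLE IF NOT EXISTS tweets (id TEXT PRIMARY KEY, body TEXT)");

var save = function(tweet) {
  db.run("INSERT OR REPLACE INTO tweets VALUES (?, ?)",
    [tweet.id_str, JSON.stringify(tweet)]);
};

var loadAll = function(callback) {
  db.all("SELECT body FROM tweets", function(err, rows) {
    callback(rows.map(function(row) { return JSON.parse(row.body); }));
  });
};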

While I’m mostly happy with our visualization, I think there are many improvements we can make, and a project like this is a great way of learning where we need to improve the leaflet-map element and other in-house libraries. For example, I want to be able to link tweets in the stream with dots on the map, which requires the ability to set attributes on the markers beyond class names. I’d also like to be able to filter both dots and stream by reporter and time, so that readers can “replay” events for a better sense of their movement over time.

Another limitation that we didn’t have time to work around is the problem of Twitter video, which the service keeps deeply inside of its walled garden. Although video has been embeddable on the service for most of this year, it’s still not exposed via the API (it sends down a thumbnail picture instead, and there’s no indication that it’s different from any other photo). We worked around this problem by manually tagging tweets with #video, coloring those dots differently, and then linking the thumbnail directly to the tweet, but it’s an inelegant solution at best.

Still, I’m very pleased with how the map turned out, especially given that we put it together in only about two days — that’s coding under pressure! More importantly, it wouldn’t have been possible to do, or nearly as amazing to watch, without the fantastic work of the reporters and photographers who fed it with their coverage of the event, risking police flashbangs, teargas, and pepper spray to do so. We’re incredibly thankful to @seattlesullivan, @pgcornwell, @seaclarridge, @stevemiletich, @stimesmcarter, @katherinelong, @evanbush, @eriklacitis, @jbroomseattle, @mikelindblom, @jim_brunner, @deleon_times, @gina_cole_, @sjgtimes, @sringman, @deanrutz, @bettinahansen, @lindseywasson, @laurenfrohne, @kgcotterill, @corinnechin, @gawlowski, and @colindilt for all their help.

Wanted: News Apps/Data Summer Intern

The Seattle Times is looking for a summer intern to work with the news apps team at the intersection of computer-assisted reporting and interactive projects. We write scrapers for web sites, make visualizations and work with the newsroom to turn reporting into dynamic online experiences. Interns will partner with developers, designers and reporters on several small projects, and gain familiarity with our cutting-edge tooling and workflow process.

Just in the past few months, we’ve exposed vaccination rates for Seattle schools, invited readers to share their Super Bowl predictions and mapped the many dogs of Seattle.

Applicants should have some prior programming experience (JavaScript preferred), and should know HTML/CSS. Knowledge of Excel and/or SQL is also valuable. The internship will run for ten weeks and is a paid position. Please apply through our job board, or contact newsapps@seattletimes.com for more information.

Only occasionally dysfunctional, always fun: Join developers Audrey Carlsen and Thomas Wilburn and digital editor Katrina Barlow at The Seattle Times.

 

Tag Soup

Over at Source, the OpenNews features blog, we’ve got a piece up today on how we used custom elements in our election coverage, as well as our most recent investigative piece. Check out Tag Soup for more information and plenty of links!

Burying the lead: better journalism through iteration

“Kill your darlings” isn’t only good advice for print journalism. Developing a successful digital project requires ruthless editing, no matter how attached you may be to that perfect paragraph or clever piece of code. Our recent Loaded with lead series, on the dangerous contamination found at gun ranges throughout the country, is a perfect example. While the final design is striking, bold, and distinctly digital, we threw away a lot of work to reach that point. Today I’d like to show how we moved through various iterations of the project until we found something right for the piece.

When we first started putting together our online plans for “Loaded with lead,” we didn’t yet have a final version of the story, or a solid headline photo to serve as inspiration. I began by experimenting with a James Bond-like screen wipe, blooming out from a gun or target to reveal the headline. Even as rough prototypes, these concepts were underwhelming.


Click the background to play the animation

Once we had the photo that would be used for the story in print, I tried another approach. Using WebGL and a handmade depth map, I set up a shifting perspective effect, changing the viewpoint in response to the mouse position to let readers look down the contaminated shooting range. It was a neat trick, but it didn’t really have any relationship to the reporting, or the problem of lead contamination, so we dropped it.


Move the mouse to see the depth effect in browsers that support WebGL

At this point, art director Susan Jouflas and I started on a new concept for the design. One of the dangers of lead in gun ranges is that it’s ejected from the gun as airborne dust: from there, it’s inhaled by shooters, settles on nearby surfaces, and gets absorbed into clothing. How could we portray this pervasive contamination to readers in the browser? We spent a lot of time looking at the ways that dust is shown in film, such as the Emmy-winning title sequence for the BBC’s Great Expectations:

To produce a similar effect, I built a multi-layered particle system in WebGL. We spawned the particles from behind the headline, as though the words “loaded with lead” were themselves emitting poisonous dust. A canvas-based fallback meant that browsers without WebGL would still get a similar–if far less elaborate–display. By tweaking the balance of sizes and directions for the particles, we found ourselves with a pretty convincing simulation to place over the image. Alongside the airborne dust, we added a smear of grime that would accompany the user’s cursor (or finger, on a touchscreen), and created a treatment for the article’s pullquotes in which grime would accumulate in the corners of the quote box as the reader scrolled past.
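
The canvas fallback captures the particle idea in miniature. A simplified sketch (the headline position variables are placeholders, and the production WebGL version layers multiple particle systems):

// Simplified 2D fallback for the dust effect: particles spawn behind the
// headline, drift outward, and fade as they age.
var context = canvas.getContext("2d");
context.fillStyle = "#fff";
var particles = [];

var tick = function() {
  // spawn a few new motes along the headline each frame
  for (var i = 0; i < 3; i++) {
    particles.push({
      x: headlineX + Math.random() * headlineWidth,
      y: headlineY + Math.random() * 20,
      vx: (Math.random() - 0.5) * 0.6,
      vy: -Math.random() * 0.4,
      life: 1
    });
  }
  context.clearRect(0, 0, canvas.width, canvas.height);
  particles = particles.filter(function(p) {
    p.x += p.vx;
    p.y += p.vy;
    p.life -= 0.01;
    context.globalAlpha = p.life;
    context.fillRect(p.x, p.y, 2, 2);
    return p.life > 0;
  });
  context.globalAlpha = 1;
  requestAnimationFrame(tick);
};
requestAnimationFrame(tick);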



Click to play the “grime” animation

In general, we liked the dust and grime, but it didn’t provoke the strong reaction we were hoping for, and some people who saw it were annoyed by the contrast between the white floating particles and the black accumulation. The only design element that our test readers really loved was the dirty fingerprints left on touchscreens, which gave the piece a gritty feel in keeping with the reporting.

With that in mind, I decided to try one more idea that had been kicking around in our discussions. Susan used chalk and watercolor to create a texture made of heavy, black dust, which would be swiped “onto” the screen in response to touch or cursor movements. As if in a wax-resist painting, the user’s trail of contamination would reveal the white headline text against the previously white background. Immediately, we knew we had something special. Testers loved the effect, and it had a strong visual identity that we could use throughout the story.

Lead web producer Katrina Barlow, who did much of the design and layout on this piece, ran with the concept and integrated the black texture into the pull quotes, replacing the accumulation effect. From there, it was natural to allow the quotes to respond to a user the same way that the title did, although we made sure that they started with the top line or so already revealed. The effect was strong enough that we even dropped the cursor smudge effects, although we kept the fingerprints on large touchscreens.

Throwing away those earlier prototypes was hard to do, but we wouldn’t have been nearly as happy with the final result if we hadn’t. This is a hard lesson to learn, especially for beginning developers, who are still learning their craft and are (rightly) attached to their hard-won code, but it’s ultimately just as important as tooling and experience. When building news apps, be willing to kill your darlings: you’ll be glad you did.

Introducing our news app template

Hi, I’m Thomas Wilburn, newsroom web developer here at The Seattle Times. I work with editors and reporters to tell stories on the web, ranging from data visualizations to custom news applications. One of my most important tools for putting out great projects under deadline pressure is our news app template.

Digital storytelling is not new at The Seattle Times — you only have to look to Sea Change or our election guides to see that — but it hasn’t had a consistent process for development. Some of our news apps were built in Django, some in WordPress, and others in notepad.exe, depending on the staff assigned and their mood at the time. When I joined the newsroom earlier this year, one of my goals was to create a standard platform for digital projects, generating static files for ease of maintenance and low-stress hosting.

The result is a scaffolding system built on top of Grunt and NodeJS for producing news applications with the absolute minimum of friction (editorial or technical). In addition to populating a project with boilerplate HTML, the project initialization process also sets up the following helpful features:

  • A local development server, with live reload integration
  • Lo-dash templates, with some useful built-in helper functions
  • LESS and JavaScript compilation via AMD modules
  • A “watch” server to run build tasks automatically whenever files change
  • Bower pre-configured to install client-side libraries in a standard location
  • The ability to pull data from CSV, JSON, or Google Docs
  • One-command publication to Amazon S3

In many ways, this list is similar to scaffolding solutions from other organizations, including the NPR app template and the Chicago Tribune’s Tarbell. However, being built on NodeJS, the Seattle Times template is a bit easier to set up, and runs on a wider range of operating systems (namely, Windows). As a result, it’s been easy to get our web producers working on the same stack that we use for our big projects.

Our experiences using this app scaffolding have been positive so far. Using this scaffolding, we can be up and running on a new project in minutes, and the common structure means that it’s easy to reuse components from one app to another. Fast deployment makes our development faster, and being able to pull from Google Docs makes it easier to bring in editors and reporters. If they can use a spreadsheet, they can edit our interactives. We’ve used it to power many of our online features this year, including “Where the Bidding Wars Are?” and our Oso Landslide timeline. It even runs our Seahawks fan map!

As big proponents of open-source software, our team believes this kind of slick development experience is just too cool to keep to ourselves. So we’ve made our scaffolding available on GitHub under a GPL license. There are a few Seattle Times-specific bits you’ll need to adapt if you use it for yourself, such as our ad and tracking code. But other than that, I think it could be useful for anyone building static sites — inside or outside of a newspaper. If you build something with it, we’d love to hear about it!