Future of news graphics includes 3D, VR, motion capture
The New York Times’ graphics team includes people with distinct skills, from 3D modeling and cartography to statistics and data analysis.
But they all operate as journalists in the newsroom, producing interactive and print graphics, motion capture and virtual reality to extend storytelling to new platforms.
Graham Roberts, a senior graphics editor at the newspaper, explained how the team brings those skills together and collaborates with other Times staffers during a talk April 27 at the Whittenberger Auditorium. The talk was part of the IU Institute for Digital Arts and Humanities’ Rewiring Consent workshop and speaker series.
Roberts said the newspaper had been innovating in graphics for several years, but the 2012 “Snow Fall” project, which chronicled a ski group’s journey that ended in a fatal avalanche, was one of the first to integrate immersive visualizations with text. The package earned reporter John Branch a Pulitzer Prize, and its technology raised the bar for interactive graphics.
Roberts explained that, for this story and for all stories, the team wanted to incorporate innovations that were “digitally native” to the New York Times.
“We wanted to maintain the print article while also doing innovative things visually,” Roberts said. “We sought to integrate our visualizations with the written word for a single immersive experience.”
So the department developed a new device for doing this, known as the “interrupter.” Despite its name, the method does the exact opposite: it seamlessly weaves the media into the story for a single reading experience, where scrolling is the only action readers need to take. The technique is common today, Roberts said, but it was a new approach at the time.
Since the success of “Snow Fall,” Roberts and his graphics team have sought to innovate data visualization in other stories, particularly when it comes to music. One of their biggest successes was a story about the 2015 summer hit “Where Are U Now” by Justin Bieber, Diplo and Skrillex. “Bieber, Diplo and Skrillex Make a Hit,” an eight-minute video, pairs interviews in which all three musicians explain the song in detail with a visual representation of what is happening in the music. The song is notated not with standard musical notes but with shapes and lines of varying colors and sizes, so that anyone can follow what is happening musically.
Roberts said one of the most important things about the piece was that it was accessible and watchable on any device, from a smartphone screen to a large desktop display.
“The desktop is fading,” said Roberts. “Most of what we see is coming through our mobile devices. We usually spend all this money formatting things to fit the desktop screen and treat the mobile screen as secondary. But with this piece, we changed our patterns.”
The New York Times has also been experimenting with motion capture, a technology that digitally records movement for the purpose of creating an animated figure. Actors wear suits with reflectors on key joints and limbs and move in front of cameras that record the data, which is then used to generate an animated figure that precisely mimics their movements.
The New York Times put Juilliard music director Alan Gilbert into one of these suits and recorded him as he conducted a group of musicians, tracking every subtle move he made.
“Here we are connecting music and gesture to interpret the mystified world of classical conducting,” Roberts said. “This is a unique and engaging representation of conducting that made the invisible, visible.”
On top of all of these innovations, though, the medium Roberts and his team are most excited for is the nascent platform of virtual reality. Roberts first was exposed to virtual reality at SIGGRAPH 2014, a conference that looks at innovations in computer graphics and interactive techniques.
“I found myself asking, ‘What could this mean?’ and it seemed like a natural progression for The New York Times,” he said. “It felt like it could be important for journalism.”
So the newspaper developed an app, NYT VR, to publish its virtual reality videos. The Times also teamed up with Google and mailed 1 million cardboard VR headsets to its Sunday subscribers so people could watch the videos properly.
Since then, The New York Times has taken its viewers to the most remote corners of the earth in the piece “The Displaced” and to the surface of Pluto in “Seeking Pluto’s Frigid Heart.”
Soon, Roberts announced, the Times will take its audience to the Ross Ice Shelf in Antarctica, where viewers can explore the dry valleys, the closest climate analogue on Earth to Mars; tour McMurdo Station, which sustains everyone living in Antarctica; view the impact of climate change up close; and swim beneath the ice in some of the clearest water on earth.
“It’s amazing because we get to define what virtual reality can be. That doesn’t happen often,” said Roberts. “There’s a matter of transparency. In regular video, the editor makes the decision of where to point the camera. In virtual reality, you give that back to the viewer, and it takes framing away.”
That project is just one of many Roberts and his team are working on in their quest to display data and information in new and immersive ways. Still, Roberts believes these visualizations, even virtual reality, are just the tip of the iceberg of what will truly change the world of storytelling. The industry is evolving quickly, and The New York Times is trying to stay at its leading edge.
“You can easily argue that more has changed at The New York Times in the last decade than in its entire 160 years,” he said. “And each advancement seems to be additive.”