How They Did It: "Hunger in Los Angeles"

February 15, 2012

City noise fills the street as you wait in line for a small box of food. An argument breaks out over the last of the food being handed out. A man collapses right in front of you, and you stand there suspended in time. You can’t do a thing to help.

Created by Nonny de la Peña, “Hunger in Los Angeles” is a 3D retelling of a scene outside a Los Angeles food bank housed in a church. It’s the kind of scene we read about every day in newspapers, but the virtual audio and visuals put you right into the world, seemingly while it is happening.

USC Annenberg describes the project:

De la Peña’s six-and-a-half minute interactive news piece allows one user at a time to enter a virtual reality, gaming-style environment set in the midst of a food bank distribution line located outside the First Unitarian Church, on 8th Street in Los Angeles.

During the eventful – and non-fiction – segment, chaos occurs when someone tries to steal food. A man in line falls to the ground in a diabetic coma. An ambulance and two paramedics arrive to assist.

The piece, like de la Peña’s other works and writing, explores the difference between the objective and the subjective, and re-imagines modes of creating and delivering contemporary journalism.

“Hunger” was crafted in part by using video game development and 3-D platform software, a head-mounted display and an audio recording made in 2009 by a student intern as part of the USC Annenberg journalism professor Sandy Tolan-led effort, “Hunger in the Golden State.”

Entering the experience is slightly disorienting; after slipping a backpack full of wires over your shoulders, your ears are encased in headphones and your eyes are covered by a visor. Then the simulation starts, as you stand in line with a group of digital people.

As a gamer, my first instinct in a virtual environment was to try to run, which was impossible given the constraints of the space. My second was to look for some kind of objective, which is not the point of a simulation. The virtual reality itself can be strange — there are in-world objects like curbs, cars and walls that do not provide the feeling one would expect. I also found myself trying to game the simulation, trying to peek past the edge of the visor so I could get a better idea of where I was in the real world.

It’s clearly engaging. During the demonstration, someone nearly crashed into a wall. (I nearly did the same thing, despite the handler who follows you with the gear.) At points during the simulation, I crawled, put my hands through walls, knelt down to look at people on the ground and attempted to jump through walls. When the simulation ended, the action paused and an infographic lit up the screen.

After taking off the gear (and reorienting myself to reality, which took about two minutes), I interviewed de la Peña about the project, her team and what the future of journalism could feel like.


How did you come up with this project?

I started [my career in journalism] as a correspondent for Newsweek, then I left that to do a documentary film. I always loved technology, but felt like I couldn’t do robust narrative issues with the platforms that existed. So I worked on a film called “Unconstitutional,” about post-9/11 civil rights abuses. Then I got a grant to create a virtual version of Guantanamo Bay prison. After we made that, I can remember the moment in my back yard where I thought “Holy shit, this is applicable to all journalism.” So, then I started thinking about spatial narratives, and what you call the “embodied edit” — there’s so much research about people and their connection to their virtual avatar. So how do you create a space where people have agency but are still acting within the space of the narrative?

[This is what we did in the Guantanamo project], you were placed in the body of the detainee. After planning out the project, we recorded audio and got actors to read the [Mohammad Al-] Qhatani [interrogation] transcripts, and in that project, you just were in it. [Participants] would hear [the interrogation] through the wall, and you could see yourself in a mirror and people reported feeling as if they were in that kneeling, tied position.

I was a research fellow at the University of Southern California J-School, and there was a class being run by Sandy Tolan about “Hunger in the Golden State,” [and the question was] how do you put people in this [experience]? You can’t make people feel hungry, but you can put them in the scene, so what’s the moment when the food runs out at the food bank? That’s what I thought I would do, but people kind of went silent at that moment.

I knew from previous projects that if the audio isn’t great, you cannot trick the mind. My intern had just recorded this whole scene (at the church), so I said, “That’s what we’re going to build.” That was a year and a half ago; then I joined a working group. The first prototype kind of worked, but after I kept asking Bradley Newman how to code things, he got involved and became the lead artist and programmer on the project. As we built it, we also started thinking about how to have a linear narrative. We decided it needed to have some kind of content at the end, so we came up with this visual idea and found a way to do it that was kind of cool.

People have whipped out their cellphones while in the simulation, and have tried to comfort people. Kids are funny, as they will look at [the virtual] adults to figure out what to do.

Can you walk me through the technology?

We started with a Zoom microphone recording the scenes on the street. We didn’t do a lot of spatial sound redevelopment. [The visuals] were built in Unity 3D. I did the first version with some JavaScript; Bradley took over in C#. The cool thing about building in Unity is that you can spit it out on the Kinect camera — you don’t even need the Xbox, you can view it on your laptop. The Kinect experience [for the Hunger project] is a bit tough because we don’t have the gestures right yet.
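Unity handles the spatial-audio side of this for you: each sound source fades with the listener's distance. As a rough illustration of the idea — a hypothetical sketch, not code from the project — here is the linear distance rolloff a game engine applies to a sound source, in plain JavaScript (the language the first version was prototyped in):

```javascript
// Linear rolloff: full volume inside minDistance, silent beyond maxDistance,
// fading linearly in between. Engines like Unity apply a curve like this
// (or a logarithmic one) to each sound source automatically.
function linearRolloff(distance, minDistance, maxDistance) {
  if (distance <= minDistance) return 1.0;
  if (distance >= maxDistance) return 0.0;
  return 1.0 - (distance - minDistance) / (maxDistance - minDistance);
}

// A listener 5 m from a source that fades between 1 m and 11 m
// hears it at 60% volume.
console.log(linearRolloff(5, 1, 11));
```

Combined with head tracking, this is what makes a recorded argument down the street sound like it is actually happening down the street.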

What were the challenges with building this project?

We had to make some hard choices, editorially. That guy [who has a seizure] — he actually does get revived, and manages to leave. But we didn’t have the money and time to code all that, so it became an editing decision. I didn’t realize how much it would impact people not to know how the man’s story resolves. A woman who had a personal experience like that with someone with diabetes came out of it crying.

Also, the usual problems with space, time and finding enough interns.

How much did all of this cost to put together?

I spent about $700 of my own money. It’s just the components, the motion sensor system. Unity 3D is open source. My guess is you could do this, depending on skill level and team, for about that.

Why do you think immersive experiences are important to the future of journalism?

I have no doubt that this is the future of journalism. We need to establish best practices. The power of this stuff is so huge, we need to be thinking about what our responsibility is as journalists, what decisions are we making. I certainly learned a huge lesson here.