Blog Archives

Motion Capture of Facial Expressions for Game Animations

[Photo: Andy Serkis in a motion capture rig]

Someone who had just seen the impressive computer-generated apes in the recently released film Dawn of the Planet of the Apes asked me why it is so difficult for video game developers to simulate realistic-looking mouth and facial movements. Even in games with the most advanced and sophisticated graphics, the one unrealistic aspect that usually stands out is mouth and facial movement.

The problem is that human facial expressions and speech involve many facial muscles working in often subtle combinations. Even as babies our attention is drawn to other people's faces, and as we grow up we learn to detect the most nuanced facial movements.

In games, we do not have the luxury of recreating such subtleties. This wasn’t a problem in the early days of computer games, when resolutions were low, graphics were blocky, and colors were few — we expected the animations to be as crude as the graphics.

As game technology advanced, so did the capabilities for representing characters in a game. When we produced Vampire: The Masquerade – Bloodlines at Activision, we licensed the graphics engine Valve created for Half-Life 2. We also licensed lip-sync software for our developer, Troika, that would analyze the audio files used for voice-over sequences and select the correct mouth animations to play while the voice-over played. We also had control over which facial expressions to play, and overall, it was very cool.
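For readers curious what a system like that does under the hood: the general technique is to break the recorded audio into phonemes and map each one to a mouth-shape animation, or "viseme," timed against the voice-over. Here is a minimal sketch of that idea in Python; the phoneme labels, timings, and names are all invented for illustration and are not the actual API of the middleware we licensed.

    # A minimal sketch of phoneme-to-viseme lip sync. The phoneme labels,
    # timings, and names below are invented for illustration; this is not
    # the licensed middleware's actual API.
    PHONEME_TO_VISEME = {
        "AA": "mouth_open", "IY": "mouth_wide", "UW": "mouth_round",
        "M": "mouth_closed", "B": "mouth_closed", "P": "mouth_closed",
        "F": "mouth_teeth", "V": "mouth_teeth",
    }

    def build_viseme_track(phonemes):
        """phonemes: list of (phoneme, start_seconds, end_seconds) tuples."""
        track = []
        for phoneme, start, end in phonemes:
            # Unrecognized phonemes fall back to a neutral mouth shape.
            track.append((start, end, PHONEME_TO_VISEME.get(phoneme, "mouth_neutral")))
        return track

    # The engine would play each viseme as the voice-over reaches its timestamp.
    for start, end, viseme in build_viseme_track(
            [("HH", 0.00, 0.08), ("AA", 0.08, 0.22), ("IY", 0.22, 0.35)]):
        print(f"{start:.2f}-{end:.2f}s: play {viseme}")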

However, the technology used in films for representing digital characters had far outpaced what we were using in games. At the same time we were developing Vampire Bloodlines, Peter Jackson was producing his Lord of the Rings films, which featured Gollum as a fully rendered digital character. As time went on, the technology for digital characters improved, and now actors' faces can be motion captured for extremely realistic and nuanced facial expressions. As a consequence, audiences' experience with and expectations for digital characters have grown more sophisticated, and video game characters, with their limitations, have a much harder time meeting that high threshold for a willing suspension of disbelief.

Why is it hard to match what is now happening in the film world with digital characters? The reasons mainly have to do with expense. To capture facial expressions, you need special motion capture equipment and software to parse the raw motion capture data into coherent information for driving the computer animations. Then, rendering a film-quality character requires many computer servers running for many hours to create a single animation frame.
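As a side note for the technically inclined: one common way such software turns raw capture data into animation-ready values is to fit "blendshape" weights to the captured marker positions with a least-squares solve. This is only a sketch of that general idea, with randomly generated numbers, and not a description of any particular studio's pipeline:

    import numpy as np

    # Sketch: fit blendshape weights to captured marker positions with least
    # squares. All data here is randomly generated for illustration.
    rng = np.random.default_rng(0)
    num_offsets, num_shapes = 90, 4        # 30 markers x 3 axes, 4 blendshapes
    B = rng.normal(size=(num_offsets, num_shapes))   # per-shape marker offsets
    true_weights = np.array([0.8, 0.1, 0.0, 0.5])    # the "performance" to recover
    x = B @ true_weights + rng.normal(scale=0.01, size=num_offsets)  # noisy capture

    # The solved weights are what would drive the facial rig for this frame.
    weights, *_ = np.linalg.lstsq(B, x, rcond=None)
    print(np.round(weights, 2))            # approximately [0.8, 0.1, 0.0, 0.5]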

Now, film resolution is higher than the resolution of video games, and for games that is both a benefit and a problem. It's a benefit in that a video game frame takes far less time to render, but the lower resolution doesn't look quite as realistic either.

There are other problems with video games trying to match films. A film's running time may be 2-3 hours, but gameplay involves many hours of playing time, and that's just playing the game straight through along one path. All of the different permutations of gameplay may require a massive number of animations to be created; the back-of-the-envelope figures below suggest why.
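To put a rough number on it (the figures here are invented purely for illustration), even a modest branching structure multiplies quickly:

    # Invented figures: a dialog system offering 3 player choices at each of
    # 10 decision points allows 3 ** 10 distinct paths through a conversation tree.
    choices_per_decision = 3
    decision_points = 10
    print(choices_per_decision ** decision_points)   # 59049 possible paths

Most paths share animations, of course, but the authoring burden still grows multiplicatively rather than linearly with running time, as it does in a film.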

Also, there's the interactive nature of video games. Animations in film are "canned," or pre-recorded. That is, it may take many hours to render a single animation frame, but each frame is recorded to play back later in the film at 24 frames per second.

In a game, animations for cut scenes may be pre-recorded, but not those during gameplay, which are based on the player's actions in real time, especially when the player can observe characters while moving about a 3D environment. Those character animations are not pre-recorded but have to be generated in real time, or on the fly, as the player plays the game. The animations must therefore be rendered quickly on the player's computer at a game frame rate of up to 60 frames per second, rather than each individual frame being rendered on hundreds of computers in a render farm over an entire day. It is therefore impossible to get real-time animation with all the sophistication and subtlety of a motion picture.
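The arithmetic behind that gap is stark. As a back-of-the-envelope comparison (the render-farm figure below is an assumption for illustration, not a quoted statistic):

    # A game at 60 frames per second has roughly a 16.7 millisecond budget per
    # frame. The 8-hour render-farm figure below is an assumption for
    # illustration, not a quoted statistic.
    game_budget_s = 1 / 60
    film_frame_s = 8 * 60 * 60
    print(f"game frame budget: {game_budget_s * 1000:.1f} ms")
    print(f"film frame takes ~{film_frame_s / game_budget_s:,.0f}x longer")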


How Video Game Voice-Overs Are Produced

[Photo: David Mullich at a voice-over recording session]

Periodically I receive unsolicited resumes from people who are looking for work in video games. Curiously, very few of the resumes I receive are from game designers, programmers, or artists. Most are from people working on the audio side of the business: music composers and voice-over actors.

Now, I have contracted a number of voice actors to work on my projects. Typically, though, I provide the sound engineer I've hired with a list of characters, along with short descriptions and sample lines. My engineer usually has a pool of actors they've worked with before. If the sound engineer I'm using doesn't have a regular pool of talent, I'll contact an agent who specializes in voice-over actors.

My contact will then send me audio files of several voice actors reading for each role, and I'll decide which actor I prefer for that role. Some of the more famous voice-over actors I've been lucky enough to work with are John DiMaggio (who voices Bender on Futurama) and Phil LaMarr (who voices the John Stewart Green Lantern). Well-known voice-over actors can be a bit expensive and usually only play big roles. However, there are quite a few not-so-well-known but still excellent voice-over actors in Hollywood who work for scale and can voice up to three characters in a single project by using different vocal inflections.

When I was a game producer at Walt Disney Computer Games, I worked with Disney's internal voice-over department, and they, of course, have a large pool of voice-over actors they typically use, including a number of "official voices" for Disney's most well-known characters. When I produced an Arachnophobia game, they recommended I use Wayne Allwine (who was then the official voice of Mickey Mouse) as the voice for the John Goodman character, and he was just great to work with. For DuckTales: The Quest For Gold, I worked with Wayne's wife Russi Taylor, who provided the voices for Huey, Dewey and Louie. I also worked with Terence McGovern, who voiced Launchpad McQuack. However, my favorite recording session was for Vampire: The Masquerade – Bloodlines, where my son Ben and I provided voices for some of the commercials that played on the radio in the game.

To prepare for the voice-over session, I would write scripts that contained only the lines of the actors being recorded, with each line numbered for easy reference. I would also include notes about the emotional tone of various lines, as well as a description of each character's personality for the actor to reference.

We would record each actor individually, even if they were playing characters who engaged in dialogs with other characters. This not only made the logistics of setting up recording sessions much easier, but it also minimized the time we spent renting the recording studio. As I mentioned, unless actors were playing very big roles, each one might voice as many as three separate characters, using different vocal inflections, of course.

I would always sit in on the recording sessions for my games. The only exception was recording Terence McGovern for DuckTales, because he was located in San Francisco whereas my Disney voice-over producer and I were located in Burbank, so we listened in to his recording session via conference call. Usually only my voice-over director would speak directly with the actor during a session; I would be on hand to answer questions and provide context for each line. For some small projects, I would have the actor record three versions of each line so that I could later choose the version I liked best; but for big projects with thousands of lines, or roles with hundreds of lines of dialog, I would just have the sound engineer save a line reading I was happy with and then move on to the next.

After the recording session, my sound engineer would edit the session by cutting out poor readings and other recording mistakes, as well as adding reverb or other needed audio effects. When he was done, he would provide me with the recordings as separate WAV files, one for each line, all named to match the numbering scheme from my scripts.
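One practical benefit of that numbering scheme is that the deliveries can be checked mechanically. As a small illustrative sketch (the folder layout, character name, and line count here are all hypothetical, not from any actual project), a few lines of Python can flag any script line that never came back from editing:

    from pathlib import Path

    # Hypothetical bookkeeping: one WAV per script line, named by character and
    # line number (e.g. "jeanette_0042.wav"). The folder, character name, and
    # 120-line count are made up for illustration.
    recordings = Path("audio/vo/jeanette")
    expected = {f"jeanette_{n:04d}.wav" for n in range(1, 121)}
    delivered = {p.name for p in recordings.glob("*.wav")}

    for missing in sorted(expected - delivered):
        print("missing take:", missing)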

Now, if you are interested in finding gigs as a voice-over actor, you need to make yourself known to the voice-over agents and sound engineers who maintain pools of talent to draw from. Send them a demo recording of the various voices you can do, and when a gig comes up that they think you're right for, they will ask you to audition.