Author Archives: David Mullich
The Magic Circle
The “magic circle” is a term coined by Dutch historian Johan Huizinga, author of the book Homo Ludens: A Study of the Play-Element in Culture and a founding figure of play theory.
The magic circle is an area, either physical or conceptual, set aside for play: the tennis court, the stage, the movie screen. (Huizinga made no distinction between play and ritual, so the magic circle is also a consecrated spot for ritual, such as the courthouse, the classroom, or the temple).
This magic circle is a temporary world, set apart from the “real” world, where acts take on special meanings and participants agree to take on certain roles. In the magic circle of a soccer game, for example, the act of kicking a ball takes on the meaning of scoring a goal.
In the real world, where we work and toil, we experience questioning, responsibilities, uncertainties, and fears. However, in the magic circle of play and ritual, we experience dreams, immersion, creativity, challenge, and catharsis. As we leave the magic circle, those experiences transform into meanings. This is one of the concepts that led Huizinga to conclude that play may be the primary formative element of human culture, and that “man is only completely a man when he plays.”
When we play a game — whether it is a sports game, board game, or video game — we enter or form a magic circle that is separate in time and space from work. Even spectators of the game form their own magic circle.
What magic circles do you participate in, and what part do they play in your own formative growth?
Motion Capture of Facial Expressions for Game Animations
Someone who had just seen the impressive computer-generated apes in the recently released film Dawn of the Planet of the Apes asked me why it is so difficult for video game developers to simulate realistic-looking mouth and facial movements. Even in games with the most advanced and sophisticated graphics, the one unrealistic aspect that usually stands out is mouth and facial movement.
The problem is that human facial expressions and speech involve many, often subtle, movements of our facial muscles. Even as babies our attention is drawn to other people’s faces, and as we grow up we learn to detect the most nuanced facial movements.
In games, we do not have the luxury of recreating such subtleties. This wasn’t a problem in the early days of computer games, when resolutions were low, graphics were blocky, and colors were few — we expected the animations to be as crude as the graphics.
As game technology advanced, so did the capabilities for representing characters in a game. When we produced Vampire: The Masquerade – Bloodlines at Activision, we licensed the Source engine that Valve created for Half-Life 2. We also licensed software for our developer, Troika, that would interpret the audio files used for voice-over sequences and select the correct mouth animations to play while the voice-over played. We also had control over which facial expressions to play, and overall, it was very cool.
However, the technology used in films for representing digital characters had far outpaced what we were using in games. At the same time we were developing Vampire Bloodlines, Peter Jackson was producing his Lord of the Rings films, which featured Gollum as a fully rendered digital character. As time went on, the technology for digital characters improved, and now actors’ faces can be motion captured to produce extremely realistic and nuanced facial expressions. As a consequence, audiences’ experience with and expectations for digital characters have grown more sophisticated, and video game characters, with their technical limitations, have a much harder time meeting such a high threshold for a willing suspension of disbelief.
Why is it hard to match what is now happening in the film world with digital characters? The reasons mainly have to do with expense. To capture facial expressions, you need special motion capture equipment and software to parse the motion capture data down into coherent information for driving the computer animations. Then, rendering the character requires many computer servers running for many hours to create a single animation frame.
Film resolution is also higher than the resolution of video games, and that’s both a benefit and a problem. It’s a benefit in that rendering a video game frame doesn’t take as much time, but the frame doesn’t look quite as realistic either.
There are other problems with video games trying to match films. A film’s running time may be 2-3 hours, but a game involves many hours of playing time — and that’s just playing the game straight through along one path. All of the different permutations of gameplay may require a massive number of animations to be created.
Also, there’s the interactive nature of video games. Animations in film are “canned,” or pre-recorded. That is, it may take many hours to render a single animation frame, but each frame is recorded to be played back later in the film at 24 frames per second.
In a game, animations for cut scenes may be pre-recorded, but animations during gameplay are based on the player’s actions in real time, especially when the player can observe characters while moving about a 3D environment. Those character animations are not pre-recorded but have to be generated on the fly as the player plays the game. The animations must therefore be rendered quickly on the player’s computer at a frame rate of up to 60 frames per second, rather than each individual frame being rendered across hundreds of computers in a render farm over an entire day. It is therefore impossible to get real-time animation with all the sophistication and subtlety of a motion picture.
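To put that gap in concrete terms, here is a back-of-the-envelope sketch comparing the two render budgets. The 60 frames per second figure comes from the discussion above; the hours-per-frame figure for film is an illustrative assumption (real offline render times vary widely from shot to shot), not a measured number.

```python
# Back-of-the-envelope comparison of per-frame render budgets:
# offline film rendering vs. real-time game rendering.

FILM_HOURS_PER_FRAME = 10   # assumed offline render time per film frame (illustrative)
GAME_FPS = 60               # real-time target frame rate from the text above

# A real-time game must finish each frame within its slice of one second.
game_budget_ms = 1000 / GAME_FPS                      # ~16.7 ms per frame

# An offline film frame can take hours on a render farm.
film_budget_ms = FILM_HOURS_PER_FRAME * 3600 * 1000   # hours -> milliseconds

ratio = film_budget_ms / game_budget_ms
print(f"Game frame budget: {game_budget_ms:.1f} ms")
print(f"Film frame budget: {film_budget_ms:,.0f} ms")
print(f"Under these assumptions, a film frame gets roughly {ratio:,.0f}x more compute time")
```

Even if the assumed hours-per-frame figure is off by an order of magnitude, the game still has millions of times less compute time per frame, which is why real-time facial animation cannot match offline film rendering.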


