All too often, the cinematic side of games is overlooked by the majority of players. In many cases, especially these days, more effort is placed into these short, hyper-detailed cinematics than into any scene of similar length from most full-length movies. An exceptionally executed short clip often suffers because of its length: being so short, it can be easily overlooked or skipped without a second thought.
What I'd like to get across in this article is how much collaboration and skill it takes to create a truly fantastic cinematic. Fabricating such a believable, detailed world through a computer takes an insane amount of time and precision; it could take thousands of man-hours to create a three-minute clip like The Black Soulstone. I'll do my best to convey the process as discussed at this panel. Also, for the sake of not turning this article into a picture book, I'm going to link the relevant pictures to the text, so click the links to check out the pics.
Before the presentation had even really begun, we found out that The Black Soulstone clip (which is not the full clip, due to spoilers) is only 3 out of 27 minutes of cinematic cutscenes that will be in Diablo III. I don't know about you guys, but 27 minutes of cutscenes like this is in itself enough for me to grab a bag of popcorn and watch them one by one.
The most important part of a cinematic is how well it tells a story. This process starts with storyboards: a collective effort from the team to sketch out, in minimal detail, the scenes they want to link together to form the cinematic. A director and their crew sit down and form the storyboard in close collaboration with the relevant teams that will work on it. This process covers the entire production from start to finish. Be it visual, musical, or character progression, everything has to be planned in the storyboard.
An example given was how Leah is afraid of Azmodan. How she expresses this fear has to be planned in the storyboard, and this one small choice can make a huge impact on Leah's character progression. If she screams and freaks out, that says something completely different about her than if she just flinches and shies away. Camera angles and focal points also help convey character emotion. Again, this is where it all begins.
Color has a huge effect on how a scene is perceived; there is an entire crew devoted to just color scripting. Diablo, with all of its earthy hues, is particularly sensitive to this color direction. It is so important, in fact, that the color translation from early rendering to final production is very often spot-on the same.
The development of a character in Diablo III pivots around three main concepts: relevance to the game, suitability to the story, and available technology. Azmodan went through many iterations, from a mix of angel and demon, through a kind of warrior look, and finally to the crab/sumo-wrestler you see today. This final concept was chosen because it represents all seven of the sins which Azmodan, the Lord of Sin, encompasses. For example, the shape of his body suggests gluttony and greed, while his body decorations suggest pride and vanity.
After a character's concept is complete, it must be brought to life. Through a massive effort from many teams, a character is modeled in 3D, textured and colored, rigged for ease of animation, animated, and tweaked for who knows how long before finally meeting the director's standard. To state that this is an over-simplification is an understatement.
At this level of professionalism, though, not everything takes as long as you might think. For example, most of the demon horde you see in the cinematic was conceptualized and preliminarily modeled in just two days. Life is in the details, though, and those details are what take the most time to perfect in these creature models. After initial modeling, the model is passed on to the next team, which specializes in texturing. Using the initial 2D concept art, the texture team translates that 2D texture into 3D, accurately wrapping it around the creature. They aim to perfectly match the 2D concept, since that is where the director chose to finalize the creature.
Beyond the character models themselves, there are tons of surrounding particle effects that add just as much "character" to the character as traditional traits like skin, voice, and attitude do. For example, the smoky effect from the Lich King's eyes adds a lot of supernatural meaning to him. Another example is the smoke rolling off the demons in the Black Soulstone cinematic. These effects have to be subtle; if they draw too much attention to themselves, they distract the viewer from the much more important things happening on screen.
Reference From Real Life
When you're aiming to make something believable, what better way than to study related things in real life? This is exactly what the cinematic teams do throughout their work: they set up target textures, lighting effects, skin tones and textures, etc., and photograph them in real life. Then, after capturing their target images, the team recreates them on the computer. This is painstakingly done, brush stroke by brush stroke, using various software.
The results of this can be truly amazing.
As with perfecting textures and lighting effects, to render a believable human you must first study one. To drive the development of Leah, the teams studied how things appear in real life under conditions similar to what they wanted to reproduce in the cinematic. They set up a photographic environment like the one at the beginning of the cinematic, complete with a stone, a girl (their producer), candles, and great lighting. The photograph was then manipulated to more closely match what the director wanted to see as the final result in the cinematic.
Continuing their study, they took close-up shots of various eyes and facial expressions to better understand how to accurately reproduce a face, which is arguably the hardest thing to do in realistic art. In typical Blizzard fashion, they went much further than simple photos: they set up a specialized camera rig to capture light exposure, color saturation, and texture mapping of various faces, which they could use directly while creating the character model. Even with the assistance of real-life examples, going from early rendering to near completion takes hundreds and hundreds of tweaks to everything from lighting to textures, shaders, and many other factors.
As with lighting and texturing, the animation team began their study by photographing relevant real-life subjects. They first took hundreds of photos of different facial expressions to identify how the various facial muscles moved during each expression. Their goal was to replicate every muscle in the face in their character model so they could perfectly reproduce different emotions through the character. They reached this goal, and the results speak for themselves.
They did the same thing for eye expressions. When observed closely, human eyes and eyelids have tiny micro-twitches, which we don't even notice until they're not there. When viewing older character models up close in cinematics, something looks off. You can't always put your finger on it, but something tips your brain off that this isn't real, and that negatively affects how the character conveys emotion. It's through these tiny movements that the character comes to life, and suddenly all of their emotions become so much more believable. This can be seen as Leah falls asleep into her dream during the cinematic.
This trend of study also made its way into hand and writing observations: little things like how a pen indents the paper as you write, or how certain small muscles contract during tiny movements. Through the close collaboration of the animation, rigging, and modeling teams, they eventually achieved their goal of a believable character interacting with a believable world.
Features of a character that constantly change drastically are known as dynamic systems; take hair and clothes, for example. Building on their success with real-life studies, similar techniques were used to perfect the hair animation. They tracked down a coworker with hair similar to Leah's, found a fan, and went to work replicating the movements Leah had to make in the cinematic, with the added effect of wind.
After they had all the info and observations they needed, they moved on to modeling and animation. Hair poses a problem in that there are hundreds of thousands of strands of hair, which are near impossible to compute or individually animate. So instead of dealing with each strand one by one, they start off with very large chunks, maybe five in total, then break those chunks up into around 200. That smaller number of strands is what actually moves individually. On top of those guide strands they add hair models and effects to make it look like there are hundreds of thousands of strands moving.
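To illustrate the idea, here's a minimal sketch (in Python, with made-up guide data; the panel didn't share any code) of how a handful of simulated guide strands can be filled in with many cheap interpolated render strands:

```python
import random

def interpolate_strands(guide_a, guide_b, count):
    """Generate `count` render strands between two guide strands.

    Each guide is a list of (x, y) points of equal length; render
    strands are blended point-by-point with a random weight, so they
    inherit the guides' motion without ever being simulated themselves.
    """
    strands = []
    for _ in range(count):
        w = random.random()  # how close this strand sits to guide_a
        strand = [(ax * w + bx * (1 - w), ay * w + by * (1 - w))
                  for (ax, ay), (bx, by) in zip(guide_a, guide_b)]
        strands.append(strand)
    return strands

# Two hypothetical simulated guide strands, filled in with
# 1,000 interpolated render strands.
guide_a = [(0.0, 0.0), (0.1, -1.0), (0.3, -2.0)]
guide_b = [(1.0, 0.0), (1.1, -1.0), (1.2, -2.0)]
hair = interpolate_strands(guide_a, guide_b, 1000)
print(len(hair))  # 1000 strands from only two simulated guides
```

Only the guides need physics; everything else rides along for free, which is why simulating a few hundred strands can look like hundreds of thousands.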
Another thing the dynamic systems team covers is rigs. Essentially, rigs are basic models of the character that are broken down into separate parts placed on different pivot points, enabling the animation team to make the characters move. Think of a rig like an action figure. This process is itself precision work, since if a model isn't rigged correctly, it will be impossible to create realistic movements in the final character.
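As a toy illustration of why those pivot points matter, here's a hedged sketch of forward kinematics, the basic math behind a rig: rotating a parent joint carries every child joint with it, just like bending an action figure's shoulder moves the whole arm. All joint names and lengths here are hypothetical:

```python
import math

class Joint:
    """A pivot point in a rig: rotating a parent moves every child."""
    def __init__(self, name, length, parent=None):
        self.name, self.length, self.parent = name, length, parent
        self.angle = 0.0  # local rotation in radians

    def world_position(self):
        # Walk up to the root, then accumulate rotations back down
        # the chain (forward kinematics).
        chain, j = [], self
        while j:
            chain.append(j)
            j = j.parent
        x = y = total = 0.0
        for j in reversed(chain):
            total += j.angle
            x += j.length * math.cos(total)
            y += j.length * math.sin(total)
        return x, y

shoulder = Joint("shoulder", 1.0)
elbow = Joint("elbow", 1.0, parent=shoulder)
# Rotating only the shoulder still moves the elbow's end point:
shoulder.angle = math.pi / 2
print(elbow.world_position())  # roughly (0.0, 2.0): the arm points straight up
```

A mis-placed pivot throws off every joint downstream of it, which is why a badly rigged model can never be animated believably.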
What's interesting is that the movements don't always have to make perfect sense, as long as you're not looking at the entire model. In the scene to the bottom left, Azmodan is completely hunched over, which looks reasonable from that camera angle. If you zoom out and look at the entire rig in the bottom center, you see he is actually broken, meaning his movement would be impossible realistically speaking.
To create those believable movements, the team first had to observe similar creatures in real life. Sad fact, though: nothing on Earth is exactly like Azmodan. So instead, they chose to observe crabs for his lower-body movements and sumo wrestlers for his upper-body movements. With excellent creativity they managed to merge the two and were left with a very believable rig.
Their next production challenge was to get Azmodan to speak believably without any lips. In order to produce sounds such as "P" and "B", you need a way to block the airflow for a moment. It just so happens Azmodan has no lips, so instead they used very pronounced movements of his mandibles and tongue to create actions corresponding to those syllables. The end result comes across great and adds to his creep factor during the close-up shots.
The Hordes Of Hell
During the end of the cinematic there are tons of creatures marching around, each seemingly doing its own thing. In order to populate the entire screen with great-looking creature animations, they needed to use some smoke and mirrors. The creatures in the far back use very simple animations, which lack the little detailed movements we see in the more important close-up shots. The more important, unique animations are called "hero animations"; as the link shows, this classification of animation is not bound to living creatures. Another trick they used to create the illusion that every creature is unique is to individually tweak how each one holds its weapon. This creates a great silhouette across the entire army, where no two spears are tilted in exactly the same direction.
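The weapon-tilt trick can be sketched in a few lines (hypothetical numbers; nothing here comes from Blizzard's actual tools): give every background soldier a small random tilt so no two silhouettes match, while seeding the randomness so the crowd looks the same on every render:

```python
import random

def assign_weapon_tilts(num_soldiers, max_tilt_deg=15.0, seed=27):
    """Give every background soldier a unique weapon tilt so no two
    spears in the silhouette point the exact same way."""
    rng = random.Random(seed)  # deterministic, so re-renders match
    return [rng.uniform(-max_tilt_deg, max_tilt_deg)
            for _ in range(num_soldiers)]

tilts = assign_weapon_tilts(500)
print(min(tilts), max(tilts))  # every spear within ±15° of vertical
```

One cheap per-agent parameter is often enough to break up the "copy-pasted army" look without animating anyone individually.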
The Right Stuff
Not everything makes it into the final cut; in fact, most of the original concepts don't. One of the cuts they talked about was Azmodan's lava drool. They tested all different kinds of viscosities until they found one that worked, added lighting and texture, and even went as far as to add it to the final model. Even after all this, the end result didn't come across right. They felt it made Azmo look sloppy, and even comical, two things that are not part of Azmodan's character, so they cut it completely. Some effects that did make the final cut for Azmodan include awesome things like active lighting effects from his mouth and eyes, alongside a heat-distortion filter.
Lighting effects can make or break everything in a cinematic. During the panel, Blizzard mentioned how they adopted the same cutting-edge effects major motion-picture companies are using. These effects make for great lighting across the entire world, but in particular on faces. No one looks good under a harsh light; it blows out every little detail to an undesirable degree. By combining the crisp detail of a hard light with the blended effects of a soft light, they achieved a great shot that looks both crisp and delicate.
A character has many different light sources affecting them at once. The example they showed was of Leah, who is under five completely different light sources when she faces Azmodan in her dream. By combining all of these, they can go back and tweak each one until they are happy with the result. The second type of lighting pass they spoke about is called render elements. These are essentially the various layers of each shot; each layer has its own effects which add to the shot, but the layers don't affect one another directly.
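The per-light idea can be sketched as simple additive compositing: each light gets its own rendered pass, and the lighting artist re-weights the passes after the fact instead of re-rendering the whole shot. This toy example (flat lists of pixel intensities, hypothetical light names) shows the principle only, not any actual pipeline:

```python
def composite(passes, gains):
    """Additively combine per-light render passes.

    `passes` maps a light name to its image (here a flat list of pixel
    intensities); `gains` lets the artist re-weight each light after
    rendering, without touching the others.
    """
    width = len(next(iter(passes.values())))
    result = [0.0] * width
    for name, image in passes.items():
        g = gains.get(name, 1.0)
        for i, value in enumerate(image):
            result[i] += g * value
    return result

# Hypothetical passes for one row of pixels under three of the lights.
passes = {"key": [0.5, 0.4], "rim": [0.1, 0.3], "candle": [0.2, 0.1]}
print(composite(passes, {"candle": 2.0}))  # brighten only the candles
```

Because the lights only sum together at the end, doubling the candle pass leaves the key and rim lighting untouched, which is exactly why keeping the layers separate is so useful.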
Bringing Down The Walls
Moving on to the scene where Azmodan brings down the rock wall, revealing his plan to invade Sanctuary: how this was achieved is pretty amazing. Aside from all the lighting effects that had to change as the wall comes down, how they broke the wall itself is pretty interesting. They used a shaping technique called Voronoi tessellation. Essentially, they scatter random points across a plane, draw boundary lines equidistant between neighboring points, and then use those lines to create organic shapes that look very natural.
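Here's a minimal, brute-force sketch of the Voronoi idea (not whatever tool Blizzard actually used): every cell of a grid joins the region of its closest seed point, so the region boundaries fall exactly halfway between neighboring seeds, producing the irregular chunks used for fracturing:

```python
import math

def voronoi_cells(seeds, width, height):
    """Assign every grid cell to its nearest seed point; the resulting
    regions are the Voronoi cells a wall would fracture along."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            nearest = min(range(len(seeds)),
                          key=lambda i: math.hypot(x - seeds[i][0],
                                                   y - seeds[i][1]))
            row.append(nearest)
        grid.append(row)
    return grid

# Three hypothetical seed points scattered over a small "wall".
seeds = [(1, 1), (6, 2), (3, 6)]
cells = voronoi_cells(seeds, 8, 8)
for row in cells:
    print("".join(str(c) for c in row))
```

Printing the grid shows three contiguous blobs with jagged, natural-looking borders; production tools compute the boundaries geometrically rather than per-pixel, but the partition is the same.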
It is also worth mentioning that the falling rocks didn't go through any type of physics simulator. The director had a very specific visual in mind, so every single one of the 40,000+ rocks you see was animated individually as it fell and interacted with the others.
Well that about wraps it up, I hope you enjoyed our journey through the making of the Black Soulstone Cinematic, and learned a little bit about the absurd amount of effort that goes into creating these windows into Sanctuary.
For the full experience be sure to watch the panel!
Wrong. They used a method of space segmentation (tessellation) named after Georgy Voronoi. They did not mention any specific software.
Thanks for clarifying for us all, that's a HUGE mistake. I can see how that warranted a post dedicated to it. Very informative.
When I read that, I thought "they're naming a particular piece of software they used? Strange. Gotta google it later".
So it's actually a method named Voronoi. Makes much more sense now. Also, thanks for the link!
Man this is like post game stuff. They just wanna suck all the mystique out of the game by telling you about every little thing they do and all their processes don't they? That or they like, did you like our cinematic? Now look at how technical we are as we explain to you how we did it. ??
I love the part of the cinematic when the wall falls, and I was glad to hear about how they created that. It's insane that each falling piece was animated individually, and the finished product of that looks amazing.
Also I think we can conclude that Blizzard is obsessed with old-timey candles in their cinematics.
Anyway, great read and great points about storyboards. A lot of people might not realize (likely because in general Hollywood doesn't care as much anymore) how much thought goes into why a camera is in its position and why a character reacts a certain way. Lot of movies these days go for the big shock or over-reaction, but it's great to see someone mentioning higher filming techniques.
Wow this is truly amazing! This is what I'm learning: modeling, animating and the like. Blizzard's work is actually what got me hooked on it. I would love to work for them, though relocating isn't an option and I'm only learning this at the local college, so my skills need a lot of fine tuning; my texturing and lighting suck hard. After 'just' drawing and painting on 2D paper/canvases all my life, I love making an interactable model in the 3D programs and eventually animating it and bringing it to life. Though I have yet to reach the skill of creating an entire horde in 2 days, and it can sometimes be tedious, I love it.
It's this that captures me the most. Thanks.
Besides anyone know what program they used? Maya? Max?
thanks for taking the heat on the "i uploaded this" part ^^
@NoxioussGaz - They use Maya; I recognized the rigging style, which is what I use in Maya.
They probably use Max for the smoke simulation, considering it has a few great plugins for that.
ZBrush, as you saw.
It looked like they use mental ray for the lighting; I'm not totally sure on that, since they could use After Effects.
So I think they use Maya mostly, and from what I've seen in their job post requirements they have a lot of Maya in there.
I'm not sure what they do for the post work. There are so many options, but I'm thinking they use industry-standard things.
I'd recommend CG Society if you wish to learn more about techniques and industry-standard stuff.