Designing a Real-time 3D Character to Shape Future Entertainment
3D facial-model building process by BinaryVR’s Art Director, Yoon
Needless to say, character building is a vital part of the entertainment industry. When a charming Disney character talks to us, we become fans in a second and glow with enthusiasm. The true value of BinaryVR's technology likewise shines when a cute puppy wearing a bunny hoodie acts like one of us. The puppy, Merry, lets people imagine what it would be like if their favorite character came alive with their own facial expressions. We believe it is in that moment of imagination that people come to appreciate our expression-tracking solution.
So, today, we are going to meet Merry's creator, Yoon, and learn how he designed and modeled Merry. This article covers character concepting, real-time mesh building, texturing, expression rigging, Unity integration, and the evaluation process. After reading this, you will fall in love with Merry, just like all of us!
Meet Our Art Director, Yoon
Before we dive in, let us introduce Yoon, BinaryVR's art director, who has extensive experience in game and movie production. He brings 13 years of experience in real-time animation technology, modeling, texturing, and look development, gained as a senior artist on Lucasfilm's R&D team and as a lead artist at ILMxLAB. We are so proud to be working with Yoon, who still enjoys challenging himself and developing his potential as an artist!
Thanks to his specialty in real-time technology and modeling, he has participated in many collaborative projects with companies such as Magic Leap, Google, Epic Games, and Oculus. Among them, his latest projects were hyper-reality VR content with The VOID. Here are some pieces from his recent work:
Star Wars™: Secrets of the Empire won the 2018 VR Award. Yoon was in charge of developing all of the characters (K-2SO, Darth Vader, and the Stormtroopers), all of the exterior buildings, and the lava-field environment.
In contrast to his previous focus on realistic Star Wars characters, "Ralph Breaks VR" was a new opportunity for Yoon to dive into Disney-style 3D modeling! How could animating Ralph in VR as a real-time asset not be appealing? Since then, he has been interested in designing characters in different art styles and has realized how important facial expressions are. These interests continued and backed his decision to join BinaryVR. Now, he develops 3D characters in a range of styles, rigged with facial expressions.
Creating Merry, A Real-time 3D Facial Model
The goal of the Merry project was to build a real-time 3D facial model that shows the full potential of HyprFace, BinaryVR's facial-expression tracking solution. Given the nature of the project, the key focus areas were:
Key Focus Areas
1. Friendly appearance that is approachable to any age/gender/ethnic group;
2. Natural rigging animation to show the tongue/eye-gaze tracking features; and
3. Effective asset-budget distribution for responsive real-time simulation.
We will follow Yoon’s character-building process step by step, keeping the key focus areas in mind. And, here are the main topics that we will discuss:
Concepting Merry
The main goal of concepting Merry was to settle on a friendly appearance that anyone can approach. We all agreed that the animal closest to humans is the dog, and specifically a puppy! Then, we started collecting image references using Pinterest. Collecting image references always inspires artists to come up with spectacular ideas and stories, so we focused on gathering the right reference images for our project.
After that, we narrowed the concept down to a more specific style: for Merry, it became a Disney/Pixar-style Pomeranian puppy. Yes, that is how Merry got his name: Merry the Pomeranian! Yet a cute puppy alone was not enough to attract the audience's attention, so we had to find another trigger to spice him up. Normally, a character's unique storyline or personality works as that trigger; since this was not the case for Merry, we put him in a bunny hoodie instead, making him more interesting and attractive. We also created five basic expression library concepts (joy, sadness, surprise, anger, and fear) in addition to Merry's neutral face, to convey a visual idea of his facial expressions.
Creating a Real-time Mesh
When creating a real-time asset, the first thing you need to do is talk with the engineers and figure out how much asset budget you can use: for example, polygon/vertex count, texture size, animation data, and so on. The key is to distribute your budget smartly, depending on the purpose of the 3D asset you are building. As in other 3D-asset projects, the main challenge for us was balancing visual quality against the total size of the model: the model should be visually pleasing while staying light enough to run the animation smoothly in real time. With the asset budget distributed, let's begin building a mesh in earnest!
- Sculpting a Hi-res Mesh
Think of sculpting a hi-res mesh as the digital version of making a statue: you work the clay and think purely as an artist. A real-time mesh, however, needs a low polygon count to run smoothly without overloading the system, so the artist also needs to be part engineer to build the mesh efficiently. Here, two needs conflict and restrict each other: artistic creativity and engineering constraints. So, first, we built a high-resolution mesh as artists, without worrying about the polygon count; second, we generated a clean, efficient mesh, which we call a real-time mesh, on top of the hi-res mesh!
We used a digital-sculpting tool called ZBrush. ZBrush gives artists great freedom to sculpt any shape quickly, including organic surfaces, hard surfaces, and textures such as fur. At this stage, we focused on getting the proportions of the face right: where the eyes, nose, and mouth should be positioned, how big they should be, and so on. We also painted some versions with facial expressions to estimate how the emotions would read for this project.
- Generating a Real-time Mesh
Now it's time for retopology, the process of converting the hi-res mesh into a low-polygon mesh. The purpose of retopology is to generate a clean, animatable, and efficiently optimized mesh for real-time animation. The easiest way is to use an automated topology tool such as ZRemesher, which is built into ZBrush. However, we chose the Quad Draw tool in Maya, as it let us creatively control the restructuring of the polygon mesh. The polygon structure is fundamental to controlling how the mesh deforms around the mouth, eyes, nose, cheeks, and so on. The tool is also well suited to the real-time refining process.
UV Unwrapping
A real-time mesh is ready, so it is time to unfold it! We know that sounds somewhat odd, but this is a very important step for the upcoming texturing process. Figuratively speaking, UV unwrapping generates a planar figure of a 3D model: we unwrap the 3D model into 2D, producing what is called a UV map, and color it with 2D textures. Everyone who works in the 3D industry would agree that this is not fun work, but it is essential for texturing. Based on the UV map, we can accurately layer different types of 2D textures. These days, UVs are also used to transfer many other kinds of data, such as skin weights, vertex positions, and normals.
Just as a cube can be unfolded into different planar figures, a UV map differs depending on how efficiently an artist unfolds the mesh, and this affects the process later on. For instance, you should allocate more UV space to the focal points so they receive more texture data and a more detailed look. In Merry's case, the higher budget went to his eyes, nose, and mouth, while the lower budget went to his head, his ears, and the inside of his mouth.
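The idea of UV space as a budget comes down to simple arithmetic: a region's share of UV area determines how many texture pixels (texels) it receives. Here is a minimal sketch; the region names match Merry's, but the fractions and the 2048-pixel resolution are illustrative assumptions, not the project's actual numbers.

```python
# Rough texel-budget check: how many texture pixels each region
# receives for a given share of UV area. The fractions below are
# made up for illustration and must sum to at most 1.0.

TEXTURE_RES = 2048  # hypothetical single 2048x2048 UDIM tile

uv_share = {
    "eyes":        0.20,
    "nose":        0.15,
    "mouth":       0.20,
    "head_ears":   0.30,
    "inner_mouth": 0.10,
}

def texels_for(share: float, res: int = TEXTURE_RES) -> int:
    """Pixels allotted to a region occupying `share` of the UV area."""
    return round(share * res * res)

for region, share in uv_share.items():
    print(f"{region:12s} {texels_for(share):>9,d} texels")
```

Doubling a region's UV share doubles its texel count, which is why the eyes, nose, and mouth get the detail while the inside of the mouth can stay coarse.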
Texturing
Texturing is commonly compared to coloring the object, and the process determines how convincing the final look of the 3D model is. Done appropriately, texturing can supplement the low-polygon mesh with details in layers. For example, Merry's texture maps include diffuse color, specular color, roughness, normal, and ambient-occlusion maps, and together these layers define how Merry looks.
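To show what "layering" means in practice, here is a tiny sketch of one common combination: the ambient-occlusion map darkening the diffuse color per texel. The color values are made up for illustration; real shaders combine all of the map types listed above.

```python
# Minimal sketch of combining two texture layers at shading time:
# an ambient-occlusion factor (0 = fully occluded, 1 = fully open)
# multiplies the RGB diffuse sample, darkening creases and cavities.

def apply_ao(diffuse: tuple, ao: float) -> tuple:
    """Multiply an RGB diffuse sample (0..1 floats) by an AO factor."""
    return tuple(c * ao for c in diffuse)

fur_brown = (0.55, 0.35, 0.20)   # illustrative sample from a diffuse map
crease_ao = 0.6                  # illustrative sample from an AO map

print(apply_ao(fur_brown, crease_ao))
```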
Since Yoon gained extensive experience in modeling and texturing at Lucasfilm, the process went smoothly for the Merry project. We painted Merry with a software called Substance Painter, because Merry needs only simple textures in a single UDIM. For those planning to create realistic textures across a large number of UDIMs, you may want to consider Mari, a tool commonly used in the film-production industry.
Animating Expressions
This is the highlight of our project — creating facial expressions! Facial expressions are fundamental for a character to show its personality and communicate with the audience. More importantly, we need Merry to animate facial expressions convincingly to deliver the full potential of a character experience.
Producing realistic facial animation is one of the most challenging areas in animation, because even a small fault in the form or movement of an expression is clearly noticeable to general audiences and triggers the uncanny valley. This was extra challenging for us, as Yoon was not an expert in facial rigging. That is also one of the reasons we chose to build an animal character for our first project: the fact that Merry is an animal, not a realistic human, allowed us to sidestep the uncanny-valley issue. With the experience of rigging Merry behind us, we are planning to build a human facial model for the next project!
For Merry, we aimed to create 33 face blend shapes and six tongue blend shapes to generate all of the facial expressions. The blend shapes' weighted values are combined, and together they create each facial expression; refer to the examples below. We used ZBrush so that we could monitor the combination shapes while refining individual blend shapes.
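The combination described above is the standard linear blend-shape formula: each shape stores per-vertex deltas from the neutral mesh, and the weighted deltas sum onto the neutral pose. Here is a minimal sketch with a made-up two-vertex mesh and toy shape names; it stands in for Merry's actual 33 shapes.

```python
# Linear blend-shape evaluation: result = neutral + sum(w_i * delta_i),
# where delta_i = shape_i - neutral, computed per vertex and per axis.

def blend(neutral, shapes, weights):
    """Blend weighted shapes onto the neutral mesh (lists of xyz tuples)."""
    result = [list(v) for v in neutral]
    for name, weight in weights.items():
        for i, (sv, nv) in enumerate(zip(shapes[name], neutral)):
            for axis in range(3):
                result[i][axis] += weight * (sv[axis] - nv[axis])
    return result

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shapes = {
    "smile":    [(0.0, 0.2, 0.0), (1.0, 0.2, 0.0)],    # mouth corners up
    "jaw_open": [(0.0, -0.5, 0.0), (1.0, -0.5, 0.0)],  # jaw drops
}

# A half smile with a slightly open jaw.
print(blend(neutral, shapes, {"smile": 0.5, "jaw_open": 0.2}))
```

This additivity is also why combination shapes need monitoring: two deltas that look fine alone can sum into a broken pose.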

Importing Merry into Unity
When importing Merry into the Unity engine, we decided to focus on three tasks: physics for the bunny hoodie, attractive eye materials, and an HDRI lighting setup. Unity provides a variety of free and commercial assets, so we recommend searching for useful tools beforehand to shorten production time and boost efficiency.
- Physics for the Bunny Hoodie: We used Dynamic Bone to apply real-time movement to Merry's hoodie. It is straightforward to set up, and you can adjust real-time simulation values such as damping, elasticity, and stiffness.
- Attractive Eye Materials: Sometimes the simple, traditional way works best. For the eye materials, we made two layers of eyeball meshes, the cornea and the iris. Then we applied a wet, transparent material to the cornea and a simple material with the eye color to the iris. For realistic-looking eye refraction with a single-layer mesh, we wanted to try an eyeball shader with physically based refraction; however, let's leave that for a future project.
- HDRI Lighting Setup: For the lighting look development, we created a simple cube scene, set up lighting on the top, front, and sides, and baked this environment light into a cube map. A cube map is a set of six square textures onto which the environment light is projected and stored; it has been used in the gaming industry for a long time because it provides a cost-efficient, real-time reflection effect. We assigned the cube map to the Skybox in Unity, used it as our environment lighting, and let it reflect onto Merry. We also applied post-processing on the camera to finalize the ambient occlusion, depth of field, and color grading.
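The kind of per-bone motion Dynamic Bone adds to the hoodie can be understood as a damped spring: each frame, a bone tip is pulled back toward its rest pose by a stiffness term while damping bleeds off its velocity. The sketch below is a generic 1D damped spring under that assumption, not Dynamic Bone's actual implementation; the parameter names merely echo its inspector values.

```python
# Toy damped spring standing in for one secondary-motion bone:
# `stiffness` pulls the tip toward its rest position, `damping`
# removes energy so the wobble dies out over the frames.

def step(pos, vel, rest, stiffness=0.1, damping=0.2, dt=1.0):
    """Advance one frame of a 1D damped spring (semi-implicit Euler)."""
    accel = stiffness * (rest - pos)        # restoring force toward rest
    vel = (vel + accel * dt) * (1.0 - damping)
    return pos + vel * dt, vel

pos, vel = 1.0, 0.0   # hoodie ear displaced by a head turn
for frame in range(60):
    pos, vel = step(pos, vel, rest=0.0)

print(round(pos, 4))  # after 60 frames the tip has settled near rest
```

Raising `stiffness` makes the hoodie snap back faster; raising `damping` kills the wobble sooner, which is exactly the tuning trade-off described above.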
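To make the cube-map idea concrete, here is how a lookup into those six square textures works: the axis of a 3D direction with the largest magnitude selects the face, and the two remaining coordinates become UVs on it. This is a generic sketch; the exact u/v sign conventions vary between engines, so treat the orientation details as assumptions.

```python
# Cube-map face selection: a direction vector picks one of six faces
# (+X/-X/+Y/-Y/+Z/-Z) by its dominant axis, and the other two
# components are remapped to [0, 1] texture coordinates on that face.

def sample_face(direction):
    """Return (face, u, v) for a nonzero 3D direction vector."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                       # left/right faces
        face, major = ("+X" if x > 0 else "-X"), ax
        uc, vc = (-z if x > 0 else z), y
    elif ay >= az:                                  # top/bottom faces
        face, major = ("+Y" if y > 0 else "-Y"), ay
        uc, vc = x, (-z if y > 0 else z)
    else:                                           # front/back faces
        face, major = ("+Z" if z > 0 else "-Z"), az
        uc, vc = (x if z > 0 else -x), y
    return face, 0.5 * (uc / major + 1.0), 0.5 * (vc / major + 1.0)

print(sample_face((0.0, 1.0, 0.0)))   # straight up lands on the +Y face
```

Because the lookup is just a comparison and a divide, reflections from a baked cube map stay cheap at runtime, which is the cost efficiency mentioned above.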
Evaluating With HyprFace SDK
Now everything is ready to integrate the HyprFace SDK with Merry's 3D model! As you know, Merry was developed mainly to show how facial-expression tracking can be used, so we needed to test how natural and smooth Merry's facial expressions were. Seated in front of the camera, we tested the real-time animation closely to confirm that there were no errors, such as jitter or malfunctions. In some cases, we manually controlled certain blendshape values to check every possible expression combination.
The evaluation process, of course, included our team's final feedback as well. One piece of feedback was that a smaller nose would make Merry cuter, and this is reflected in the final version of Merry. A quick takeaway: when you need to refine something in the 3D model at the final stage, you can simply create a new blendshape and adjust the neutral face. We created the smaller-nose blendshape and applied it to the neutral face at a value of 100%.
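The late-stage fix described above works because driving a corrective blendshape at 100% is the same as adding its full per-vertex delta to the neutral mesh. Here is that idea in miniature, using a made-up two-vertex "nose" for illustration:

```python
# Folding a corrective blendshape into the neutral mesh:
# new_neutral = neutral + weight * delta, applied per vertex.

def bake_into_neutral(neutral, delta, weight=1.0):
    """Return the neutral mesh with a corrective delta baked in."""
    return [tuple(n + weight * d for n, d in zip(vert, dv))
            for vert, dv in zip(neutral, delta)]

nose_neutral = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0)]          # toy geometry
smaller_nose = [(0.0, 0.0, -0.2), (-0.02, 0.0, -0.2)]      # pull nose in

print(bake_into_neutral(nose_neutral, smaller_nose))
```

At weight 1.0 (the 100% mentioned above) the correction becomes the new neutral, and the existing expression shapes continue to work as deltas on top of it.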
Once we were all happy with the result, we shipped the final version!
Looking back, it was quite challenging, as only four weeks were given to this project. Still, Yoon managed to complete it successfully, and we are so proud to work with such a talented and motivated artist!
Now he is planning to build a human character to demo in the near future! Are you as excited as we are? In the next blog post, we will introduce a comparison of blendshape-based and joint-based rigging, using the human-character-building process as an example, so stay tuned!
We are BinaryVR, aiming for seamless interaction between AI and people's daily lives in the computer-vision field. We develop the world's top-quality facial motion-capture solutions, HyprFace and BinaryFace, keeping constant evolution as our core value.
Don’t forget to give us your 👏 !









