Animation, 3D Rigging, Mixed Reality and more Unity

In recent weeks, I have been working a lot on augmenting my MR research project by modifying the animation tracks it contains. I will record many of my conclusions, my progress and my understanding on this page.

Firstly, I was briefly exposed to the vast field of Computer Animation. I was in possession of a Humanoid Avatar, which was rigged and animated according to some motion capture data. I first had to understand some of the mechanics of Unity’s animation system, which has changed considerably in recent versions. In Unity, to animate a GameObject, one must first attach an Animator component to it. The Animator component references an Animator Controller. This Controller is a whole different beast altogether and can be wired up using the dedicated Animator window. In short, it contains multiple Layers, and each Layer contains a State Machine that determines which animation plays and when. Each state holds an Animation Clip, which can either be composed in Unity’s Animation window or imported from an external source.
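To make this concrete, here is a minimal sketch of driving such a state machine from a script. The state name “Walk” and the trigger “StartWalking” are placeholders of mine; they would have to match whatever is wired up in the Animator window.

```csharp
using UnityEngine;

public class AvatarAnimationDriver : MonoBehaviour
{
    private Animator animator;

    void Awake()
    {
        // The Animator component attached to this GameObject.
        animator = GetComponent<Animator>();
    }

    // Jump directly to a state on the base layer (layer 0).
    public void PlayWalk()
    {
        animator.Play("Walk", 0);
    }

    // Or fire a trigger parameter and let the wired-up transitions take over.
    public void TriggerWalk()
    {
        animator.SetTrigger("StartWalking");
    }
}
```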

Animation Clips can be produced in the Animation window through pose rigging: one simply rigs a series of poses in succession and Unity interpolates between them to produce the clip. Of course, there is a lot to see and do at this stage. One can control the interpolation curves to make the animation smoother or more cartoonish, control the speed of the animation, make it loop, and so on. All of these elements are accessible through C# code, where I found that pausing / continuing an animation is possible simply by tweaking its speed. Adjusting the speed comes in handy in other cases too, for example when an animation clip plays back too fast.
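Here is a small sketch of that pause / continue trick. The class and method names are my own, but `Animator.speed` is the actual Unity property being tweaked:

```csharp
using UnityEngine;

public class AnimationPauser : MonoBehaviour
{
    private Animator animator;
    private float savedSpeed = 1f;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    public void Pause()
    {
        savedSpeed = animator.speed; // remember the current playback speed
        animator.speed = 0f;         // freeze the animation in place
    }

    public void Continue()
    {
        animator.speed = savedSpeed; // resume from where we left off
    }

    // Also handy when a clip simply plays back too fast.
    public void SlowDown()
    {
        animator.speed = 0.5f;
    }
}
```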

As a final part of experiencing Unity’s animation environment, I also played around with the Timeline, a tool that allows the user to sequence animations and activate / deactivate objects one after another. This allowed me to build quite a cool little application in MR, and it would have made things much easier had I known about it early on: in the beginning I devoted hours to synchronizing different Animators manually, while this tool does it better and automatically.
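At runtime a Timeline is played through a PlayableDirector component. A minimal sketch of controlling one from code (the Inspector wiring is just one way to do it, but `Play`, `Pause` and `time` belong to the real PlayableDirector API):

```csharp
using UnityEngine;
using UnityEngine.Playables;

public class TimelineController : MonoBehaviour
{
    // Drag the GameObject holding the Timeline's PlayableDirector here.
    [SerializeField] private PlayableDirector director;

    public void PlayTimeline()
    {
        director.Play();
    }

    public void PauseTimeline()
    {
        director.Pause();
    }

    public void RestartTimeline()
    {
        director.time = 0; // rewind to the beginning
        director.Play();
    }
}
```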

In terms of MR, I got a lot of practice linking the controllers to code and giving the user control over their Mixed Reality experience. I implemented an application in which the user can rotate, shrink and transport humanoid avatars with their controllers while the avatars perform some animation clip. The user could also pause / continue that animation.
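As a rough, SDK-agnostic sketch of the idea, Unity’s XR input API can read the controller and drive an avatar’s Transform. The mapping below (thumbstick X rotates, thumbstick Y scales) is an assumption for illustration, not necessarily how my project binds things:

```csharp
using UnityEngine;
using UnityEngine.XR;

public class AvatarManipulator : MonoBehaviour
{
    [SerializeField] private Transform avatar;

    void Update()
    {
        InputDevice rightHand =
            InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        if (rightHand.TryGetFeatureValue(
                CommonUsages.primary2DAxis, out Vector2 stick))
        {
            // Rotate the avatar around its vertical axis.
            avatar.Rotate(0f, stick.x * 90f * Time.deltaTime, 0f);

            // Shrink or grow the avatar, within sane bounds.
            float scale = Mathf.Clamp(
                avatar.localScale.x + stick.y * Time.deltaTime, 0.1f, 2f);
            avatar.localScale = Vector3.one * scale;
        }
    }
}
```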

For the humanoid avatars, I spent quite a bit of time thinking and experimenting with different models. I first chose a very rudimentary model from the Unity Asset Store and then upgraded to the “Man In A Suit” model. Lastly, I started using UMA 2 (Unity Multipurpose Avatar), which is very customizable and well-rigged, but it was somewhat buggy and gave me trouble more than once. I realized that the Bone Builder tool is the way to go to integrate UMA with the Timeline. Important fact: UMA AVATARS NEED A FLOOR underneath them, or they will fall to infinity, because by default they carry a Rigidbody (I just realized this).
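If a floor is unwanted, one alternative workaround (my own idea, not official UMA guidance) is simply to switch off gravity on the avatar’s Rigidbody:

```csharp
using UnityEngine;

public class StopAvatarFalling : MonoBehaviour
{
    void Start()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        if (rb != null)
        {
            rb.useGravity = false; // the avatar no longer falls to infinity
        }
    }
}
```

In my case I kept the floor instead, which leads into the next problem.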

And speaking of floors, I also spent a lot of time thinking about how to make this floor invisible, because I certainly didn’t want it to show up in my MR application. The solution I used in the end was something called a stencil shader: a type of shader that can hide parts of a mesh from being rendered on the screen. I will compile my efforts to understand more about shaders in another post.

Lastly, I also had to synthesize an animation clip of my own for a rigged hands model. Producing this kind of animation may be doable in Unity, but I used Blender instead. Again, I had to rig each pose individually, which was quite time-consuming, but I got to see how Blender works and a bit of its mechanics.