MOAC is coming...

How does MOAC AI work?

I

MOAC reads the input video and creates a customized facial model of the target character in the video.

II

MOAC then derives blendshapes from the video and generates 3D animation based on them.

III

Using a custom workflow, a camera tracking solution is created to place the character’s face at the proper position and rotation.

IV

Finally, MOAC outputs an ARKit-compatible, blendshape-based animation FBX file.
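To make that output concrete, here is a rough Python sketch of what the exported animation encodes: per-frame weights for the 52 standard ARKit blendshapes plus the head transform recovered by the camera-tracking step. The names used (ARKIT_BLENDSHAPES, FacialAnimationFrame) are illustrative only, not MOAC's actual API or file layout.

from dataclasses import dataclass, field

# A few of the 52 standard ARKit blendshape names; the full set covers
# brows, eyes, jaw, mouth, cheeks, nose and tongue.
ARKIT_BLENDSHAPES = [
    "browInnerUp", "eyeBlinkLeft", "eyeBlinkRight",
    "jawOpen", "mouthSmileLeft", "mouthSmileRight",
    # ... remaining ARKit shape names ...
]

@dataclass
class FacialAnimationFrame:
    """One frame of the baked facial animation (illustrative structure)."""
    time: float                                   # seconds into the clip
    weights: dict = field(default_factory=dict)   # blendshape name -> weight in 0..1
    head_translation: tuple = (0.0, 0.0, 0.0)     # from the camera-tracking step
    head_rotation: tuple = (0.0, 0.0, 0.0)        # Euler angles, degrees

# Example: a frame where the character smiles with the jaw slightly open.
frame = FacialAnimationFrame(
    time=0.5,
    weights={"jawOpen": 0.2, "mouthSmileLeft": 0.6, "mouthSmileRight": 0.6},
)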

What Makes MOAC Unique?


Face Tracking

MOAC gives you tracking output in the form of a facial model, and animation in the form of ARKit-compatible blendshapes.



Animatable FBX Output

The ARKit-compatible, blendshape-based animation FBX file gives artists full control in post-processing.
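As a rough illustration of that control, the snippet below keys a single ARKit blendshape channel on a rig imported from the FBX inside Maya. It is a minimal sketch, assuming the import created a blendShape node with ARKit-named targets; the node name MOAC_blendShape is hypothetical, and the call only works inside Maya's Python interpreter.

# Runs inside Maya's Python interpreter; the node name below is hypothetical.
import maya.cmds as cmds

def key_blendshape(node, shape, frame, weight):
    """Keyframe one ARKit blendshape channel on the imported animation."""
    cmds.setKeyframe(f"{node}.{shape}", time=frame, value=weight)

# Example: soften a jawOpen pop at frame 120 during cleanup.
key_blendshape("MOAC_blendShape", "jawOpen", frame=120, weight=0.35)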


Various Facial Effects

With the animation FBX file, potential applications include beauty shots, dramatic de-aging, alterations in facial structure, face replacement, enhanced facial performance, digital lipsync, and driving animation for 3D characters such as UE’s MetaHuman.
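As one example of working with those curves (our own sketch, not a built-in MOAC feature), a simple post step for exaggerated lipsync or an enhanced performance is to scale selected blendshape channels before retargeting them to a character rig such as a MetaHuman:

def amplify_channels(frames, channels, gain, clamp=(0.0, 1.0)):
    """Scale selected blendshape curves, e.g. to exaggerate a lipsync performance.

    frames is a list of {blendshape_name: weight} dicts, one per animation frame.
    """
    lo, hi = clamp
    for weights in frames:
        for name in channels:
            if name in weights:
                weights[name] = min(hi, max(lo, weights[name] * gain))
    return frames

# Example: push the mouth shapes 30% hotter before retargeting to a MetaHuman rig.
mouth_shapes = ["jawOpen", "mouthSmileLeft", "mouthSmileRight", "mouthPucker"]
frames = amplify_channels([{"jawOpen": 0.2, "mouthSmileLeft": 0.6}], mouth_shapes, gain=1.3)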

What we provide:

 

  • Animatable 3D FBX file with an accurate, personalized facial model
  • ARKit-compatible, blendshape-based animation
  • Standalone application and plugins for most major DCCs

Sign up below and be the first to try MOAC AI!

We'd love to hear from you. Leave us any comments, questions or feedback about the software, the output, our team or anything else. Or join the MOAC Team!
If you are a professional at a VFX facility, please leave your work email and a comment so that we can prioritize your request.