Ideally, a facial setup should allow the animator to build all those expressions that are required for the character to perform believably. In achieving this goal, it is helpful to take a look at the real world.
The human face can create a nearly unlimited number of expressions. These expressions result from contracting one or more muscles (or even individual parts of muscles) to varying degrees. Since the face contains only a finite number of muscles, it is logical to assume that all these expressions arise from a relatively limited set of basic movements.
Even if it is not imperative for a facial setup to be directly based on such a set of movements, it should be able to produce all these movements regardless of how it is implemented. In a Blend Shape or BCS setup, these basic movements can be used as target shapes.
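To make this concrete, here is a minimal sketch of basic movements used as additive target shapes. This is not the BCS's or Maya's actual API; all names and the tiny one-dimensional "meshes" are illustrative only. Each basic movement is stored as a per-vertex delta from the neutral mesh, and an expression is the neutral mesh plus the weighted sum of those deltas.

```python
# Minimal additive blend-shape mix. Each basic movement (target shape)
# is a per-vertex delta from the neutral mesh; an expression is the
# neutral mesh plus the weighted sum of the deltas. All names and the
# tiny 1-D "meshes" are illustrative only.

def blend(neutral, targets, weights):
    """neutral: list of vertex values; targets: dict name -> delta list;
    weights: dict name -> intensity in [0, 1]."""
    result = list(neutral)
    for name, delta in targets.items():
        w = weights.get(name, 0.0)
        for i, d in enumerate(delta):
            result[i] += w * d
    return result

neutral = [0.0, 0.0, 0.0]
targets = {
    "browRaiser": [0.0, 1.0, 0.0],    # an AU1-like basic movement
    "browLowerer": [0.0, -0.5, 0.0],  # an AU4-like basic movement
}
print(blend(neutral, targets, {"browRaiser": 0.5}))  # [0.0, 0.5, 0.0]
```

Because each movement only contributes its own delta, any number of basic movements can be dialed in independently and summed, which is exactly how a plain Blend Shape setup mixes its targets.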
However, before discussing how to implement this in a BCS setup, let's first take a closer look at how exactly complex facial movements can be decomposed into such basic movements.
After years of scientific research into facial movement, aimed at finding a reliable and precise way to describe the appearance changes in the face caused by muscular action, Paul Ekman, Ph.D., Wallace V. Friesen, Ph.D., and Joseph C. Hager, Ph.D. created FACS, the Facial Action Coding System.
Even though FACS was developed for scoring facial expressions, creating it required separating and exactly defining every individual movement the face can make. The goal was to find the smallest set of movements that could still precisely describe even the most complex facial expressions.
The result was a set of approximately 60 so-called Action Units, about 30 of which are relevant for facial deformation (the others describe head and eye positions or whether part of the face isn’t visible etc.). Since one of the aims was to allow psychological interpretation of facial movement using FACS, these AUs are able to capture even the finest nuances.
The FACS Manual describes all AUs in detail and explains the method of interpreting complex facial expressions as a mix of AUs. Even though scoring is not of concern for creating a facial setup, it shows that all facial expressions can be created as a combination of AUs in different intensities.
Since facial expressions are just a mix of AUs, a facial setup that can produce and mix all AUs should be able to produce all facial expressions. The question is how to build such a setup. Let's first note the basic requirements. It should be able to:

- produce each of the relevant AUs as an individual movement,
- control the intensity of each AU precisely, and
- mix AUs in any combination so that the result looks natural.
The mentioned precise control can be achieved using a Blend Shape approach, but then the last requirement, which is a crucial factor, cannot easily be met. Its significance is already demonstrated by how the co-occurrence of AUs is treated in FACS.
The manual places great emphasis on training prospective scorers to detect AUs and their intensities when they occur in combination with each other. It states that combinations of AUs may create appearance changes that are different from the sum of the appearance changes of the individual AUs. Some combinations may even produce distinctively new appearance changes.
This leads to problems when trying to create a FACS-based facial setup using a Blend Shape approach. Even though skilled modelers may be able to tweak the shapes of individual AUs so that combinations like 1+4 look acceptable (possibly at the cost of some expressiveness in those individual AUs), many combinations involving, for example, AU23 or AU18 can barely be made to mix well with even one other AU, let alone in combinations such as 17+18+23, especially at higher intensities.
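One common remedy, sketched below under assumed names, is to sculpt the desired combined shape and store only its difference from the plain additive mix as a corrective delta for that combination. This illustrates the general corrective-shape technique, not the BCS's actual internals.

```python
# Corrective-shape extraction (general technique, hypothetical names):
# given the neutral mesh, the deltas of two individual AUs, and a
# sculpted version of how their combination should actually look,
# keep only the per-vertex difference between the sculpted result
# and the naive additive sum.

def corrective_delta(neutral, delta_a, delta_b, sculpted_combo):
    """Per-vertex correction = sculpted - (neutral + delta_a + delta_b)."""
    return [s - (n + a + b)
            for n, a, b, s in zip(neutral, delta_a, delta_b, sculpted_combo)]

# A 1-D example: the naive sum moves a vertex to 1.5, but the sculpted
# combination wants it at 1.2, so the stored correction is about -0.3.
print(corrective_delta([0.0], [1.0], [0.5], [1.2]))
```

The stored correction is then blended in only when both AUs are active, so each AU's individual shape remains untouched when it fires alone.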
In a very simplified view, the BCS is just a Blend Shape system with the addition that you can control precisely how it behaves when targets co-occur, i.e. when they are combined.
But controlling how AUs are mixed involves a bit more than just "correcting" a bad mix. You have to ensure that each participating AU keeps its unique quality and influences all its different mixes in a distinct and consistent way. This does not mean an AU produces the same appearance changes in all its mixes, but rather that the modification to its contribution should behave logically.
The algorithm that calculates how much each correction contributes to the deformation weights the modifications to an AU's contribution so that the AU stays largely unmodified at low intensities of mixes that require modification. The closer the mix comes to the point of 100% correction, the more the correction takes over. If the modification to an AU is consistent across its various mixes, as explained above, this weighting method produces natural-looking deformation, which is especially noticeable during animation.
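The behavior described above can be sketched as follows. The BCS's exact formula is not given here; driving a correction by the product of the participating intensities is one common way, assumed for illustration, to get this falloff.

```python
# Drive a correction by the product of the participating AU intensities
# (one common scheme, assumed here for illustration). At low intensities
# the product is tiny, so each AU stays nearly unmodified; as both
# intensities approach 1.0 (the point of 100% correction), the
# correction fully takes over.

def correction_weight(intensity_a, intensity_b):
    return intensity_a * intensity_b

print(correction_weight(0.25, 0.25))  # 0.0625 -> correction barely felt
print(correction_weight(1.0, 1.0))    # 1.0 -> correction fully applied
```

Because the weight grows slower than either intensity at the low end, each AU's own shape dominates early in the mix, which matches the natural-looking ramp-in described above.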
On the next pages, you'll find some ideas and pointers on what to pay particular attention to when setting up a face with the BCS.