Learning Nonlinear Soft-Tissue Dynamics for Interactive Avatars

Dan Casas, Miguel Otaduy

We present a novel method to enrich existing vertex-based human body models by adding soft-tissue dynamics. Our model learns to predict per-vertex 3D offsets, referred to as dynamic blendshapes, that reproduce nonlinear mesh deformation effects as a function of pose information. This enables the synthesis of realistic 3D mesh animations, including soft-tissue effects, using just skeletal motion. At the core of our method is a neural network regressor trained on high-quality 4D scans from which we extract pose, shape, and soft-tissue information. Our regressor uses a novel nonlinear subspace, which we build using an autoencoder, to efficiently compress soft-tissue dynamics information. Once trained, our method can be plugged into existing vertex-based skinning methods with little computational overhead (<10 ms), enabling real-time nonlinear dynamics. We qualitatively and quantitatively evaluate our method, and show compelling animations with soft-tissue effects, created using publicly available motion capture datasets.
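
The pipeline the abstract describes could look roughly like the following sketch: an autoencoder that learns a compact nonlinear subspace of per-vertex soft-tissue offsets, and a regressor that maps skeletal pose features to a code in that subspace, decoded at runtime into offsets added on top of the skinned mesh. The layer sizes, latent dimension, and choice of pose features (joint rotations plus velocities) are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

NUM_VERTS = 6890      # e.g., an SMPL-resolution body mesh (assumption)
LATENT_DIM = 50       # size of the nonlinear soft-tissue subspace (assumption)
POSE_DIM = 72 * 2     # joint rotations + their velocities (assumption)

class OffsetAutoencoder(nn.Module):
    """Learns a compact nonlinear subspace of per-vertex 3D offsets."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(NUM_VERTS * 3, 512), nn.Tanh(),
            nn.Linear(512, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.Tanh(),
            nn.Linear(512, NUM_VERTS * 3),
        )

    def forward(self, offsets):
        return self.decoder(self.encoder(offsets))

class DynamicsRegressor(nn.Module):
    """Maps skeletal pose features to a latent soft-tissue code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM, 256), nn.Tanh(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, pose_features):
        return self.net(pose_features)

# Runtime use: regress a latent code from the current pose features and
# decode it into per-vertex offsets ("dynamic blendshapes") that are
# added on top of the output of a standard vertex-based skinning method.
ae = OffsetAutoencoder()
reg = DynamicsRegressor()
pose = torch.randn(1, POSE_DIM)                 # stand-in pose features
skinned_verts = torch.zeros(1, NUM_VERTS * 3)   # output of the skinning step
dynamic_offsets = ae.decoder(reg(pose))
final_verts = skinned_verts + dynamic_offsets
```

Because the decoder and regressor are small feed-forward networks, the per-frame cost is a handful of matrix multiplies, which is consistent with the sub-10 ms overhead reported above.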
