
TADA! Text to Animatable Digital Avatars

3DV, 2024

Tingting Liao1*,  Hongwei Yi2*, Yuliang Xiu2, Jiaxiang Tang3, Yangyi Huang4, Justus Thies2, Michael J. Black2

1Mohamed bin Zayed University of Artificial Intelligence 2Max Planck Institute for Intelligent Systems, Tübingen, Germany

3Peking University   4State Key Lab of CAD & CG, Zhejiang University

* denotes equal contribution


TL;DR: Text-to-3D generation of animatable, expressive avatars.

Abstract

We introduce TADA, a simple-yet-effective approach that takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures that can be animated and rendered with traditional graphics pipelines. Existing text-based character generation methods are limited in terms of geometry and texture quality, and cannot be realistically animated due to inconsistent alignment between the geometry and the texture, particularly in the face region. To overcome these limitations, TADA leverages the synergy of a 2D diffusion model and an animatable parametric body model. Specifically, we derive an optimizable high-resolution body model from SMPL-X with 3D displacements and a texture map, and use hierarchical rendering with score distillation sampling (SDS) to create high-quality, detailed, holistic 3D avatars from text. To ensure alignment between the geometry and texture, we render normals and RGB images of the generated character and exploit their latent embeddings in the SDS training process. We further introduce various expression parameters to deform the generated character during training, ensuring that the semantics of our generated character remain consistent with the original SMPL-X model, resulting in an animatable character. Comprehensive evaluations demonstrate that TADA significantly surpasses existing approaches on both qualitative and quantitative measures. TADA enables the creation of large-scale digital character assets that are ready for animation and rendering, while also being easily editable through natural language. The code will be public for research purposes.
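The core optimization signal described above, score distillation sampling, can be sketched in a few lines. This is a minimal, self-contained illustration of the SDS gradient (not the paper's implementation): `fake_denoiser` is a hypothetical stand-in for a frozen 2D diffusion model's noise predictor, and the weighting `w(t) = 1 - alpha_bar(t)` is one common choice. The key property is that the gradient bypasses backpropagation through the diffusion model and is pushed directly back to the rendered latent (and from there to the mesh displacements and texture).

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_denoiser(noisy_latent, t, text_embedding):
    # Hypothetical stand-in for a frozen text-conditioned diffusion
    # model's noise prediction (the real system would use a pretrained
    # 2D diffusion model applied to the rendered normal/RGB latents).
    return 0.1 * noisy_latent + 0.01 * text_embedding

def sds_gradient(latent, text_embedding, t, alpha_bar):
    """Score distillation sampling gradient: w(t) * (eps_hat - eps).

    `latent` is the (encoded) rendering of the current avatar; no
    gradient flows through the denoiser itself.
    """
    eps = rng.standard_normal(latent.shape)                  # sampled noise
    noisy = np.sqrt(alpha_bar) * latent + np.sqrt(1.0 - alpha_bar) * eps
    eps_hat = fake_denoiser(noisy, t, text_embedding)        # model's guess
    w = 1.0 - alpha_bar                                      # timestep weight
    return w * (eps_hat - eps)

# One optimization step on a toy latent.
latent = rng.standard_normal((4, 64, 64))
text = rng.standard_normal((4, 64, 64))
g = sds_gradient(latent, text, t=500, alpha_bar=0.5)
latent -= 0.01 * g
```

In the actual pipeline this gradient would flow through a differentiable renderer into the SMPL-X displacements, expression parameters, and texture map rather than into a free-floating latent.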

Illustration Video



Results Gallery

Iconic Characters (53 examples)


Customized Characters (18 examples)


Head Characters (7 examples)


Paper


Code

The code is released!

Our generated mesh examples can be downloaded from this page, and the animation code is released as well!
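Because the generated avatars stay aligned with SMPL-X, they can be posed with standard linear blend skinning, exactly as a traditional graphics pipeline would. The sketch below is a generic LBS illustration under stated assumptions (toy vertices, two joints, precomputed 4x4 joint transforms); the real SMPL-X model additionally applies pose-dependent corrective blend shapes.

```python
import numpy as np

def linear_blend_skinning(verts, weights, joint_transforms):
    """Pose vertices by blending per-joint rigid transforms.

    verts:            (V, 3) rest-pose vertex positions
    weights:          (V, J) skinning weights, rows sum to 1
    joint_transforms: (J, 4, 4) world-space joint transforms
    """
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    # Per-vertex blended transform: sum_j w[v, j] * T[j]  -> (V, 4, 4)
    blended = np.einsum('vj,jab->vab', weights, joint_transforms)
    posed = np.einsum('vab,vb->va', blended, homo)
    return posed[:, :3]

# Toy example: two vertices, two joints; joint 1 translates by +1 in x.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.0, 1.0]])
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
posed = linear_blend_skinning(verts, weights, T)
# Vertex 0 stays put; vertex 1 moves from x=1 to x=2.
```

The same routine applies unchanged to the released meshes once their SMPL-X skinning weights and a pose sequence are loaded.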


Acknowledgement & Disclosure

Acknowledgement. We thank Zhen Liu and Weiyang Liu for fruitful discussions, Haofan Wang and Xu Tang for technical support, and Benjamin Pelkofer for IT support. Hongwei Yi is supported in part by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B. Yuliang Xiu is funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE). Jiaxiang Tang is supported by the National Natural Science Foundation of China (Grant Nos. 61632003, 61375022, 61403005). Yangyi Huang is supported by the National Natural Science Foundation of China (Grant Nos. 62273302, 62036009, 61936006).
Disclosure. MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a consultant for Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.

Citation


@article{liao2023tada,
  title={TADA! Text to Animatable Digital Avatars},
  author={Liao, Tingting and Yi, Hongwei and Xiu, Yuliang and Tang, Jiaxiang and Huang, Yangyi and Thies, Justus and Black, Michael J},
  journal={arXiv},
  month={Aug},
  year={2023}
}

Contact

For questions, please contact tada@tue.mpg.de

For commercial licensing, please contact ps-licensing@tue.mpg.de

 
© 2023 Max-Planck-Gesellschaft | Imprint | Privacy Policy | License