MDT-A2G : Exploring Masked Diffusion Transformers for Co-Speech Gesture Generation

ACM MM 2024

1Fudan University, 2Tencent, 3Zhejiang University, 4Vivo
Teaser Image

Comparison between DSG+ and our MDT-A2G-B with respect to training steps/time on a single A100 GPU. Compared to DSG+, MDT-A2G-B exhibits faster training convergence and superior performance, demonstrating the effectiveness of the proposed method.

Abstract

Recent advancements in the field of Diffusion Transformers have substantially improved the generation of high-quality 2D images, 3D videos, and 3D shapes. However, the effectiveness of the Transformer architecture in the domain of co-speech gesture generation remains relatively unexplored, as prior methodologies have predominantly employed convolutional neural networks (CNNs) or only a few Transformer layers. In an attempt to bridge this research gap, we introduce a novel Masked Diffusion Transformer for co-speech gesture generation, referred to as MDT-A2G, which directly implements the denoising process on gesture sequences. To enhance the contextual reasoning capability of temporally aligned speech-driven gestures, we incorporate a novel Masked Diffusion Transformer. This model employs a mask modeling scheme specifically designed to strengthen temporal relation learning among sequential gestures, thereby expediting the learning process and leading to coherent and realistic motions. Apart from audio, our MDT-A2G model also integrates multi-modal information, encompassing text, emotion, and identity. Furthermore, we propose an efficient inference strategy that diminishes the denoising computation by leveraging previously calculated results, thereby achieving a speedup with negligible performance degradation. Experimental results demonstrate that MDT-A2G excels in gesture generation, boasting a learning speed that is over 6× faster than traditional diffusion transformers and an inference speed that is 5.7× faster than the standard diffusion model.
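The two ideas highlighted in the abstract — masking gesture frames so the model must recover them from temporal context, and reusing previously computed denoising results at inference time — can be sketched roughly as follows. This is a minimal, stdlib-only Python sketch; the function names, the mask ratio, the reuse interval, and the toy update rule are illustrative assumptions, not the paper's actual implementation:

```python
import random

def mask_gesture_frames(frames, mask_ratio=0.5, mask_token="[MASK]", seed=0):
    """Randomly replace a fraction of gesture frames with a mask token,
    forcing the model to infer them from temporal context
    (hypothetical helper illustrating the mask-modeling idea)."""
    rng = random.Random(seed)
    num_masked = int(len(frames) * mask_ratio)
    masked_idx = set(rng.sample(range(len(frames)), num_masked))
    return [mask_token if i in masked_idx else f for i, f in enumerate(frames)]

def denoise_with_reuse(x, timesteps, predict_noise, reuse_every=2):
    """Run a reverse-diffusion loop, recomputing the network prediction only
    every `reuse_every` steps and reusing the cached one otherwise
    (a simplified stand-in for the paper's efficient inference strategy)."""
    cached = None
    for i, t in enumerate(timesteps):
        if cached is None or i % reuse_every == 0:
            cached = predict_noise(x, t)  # expensive network call
        # Toy update using the (possibly reused) prediction.
        x = [xi - 0.1 * ci for xi, ci in zip(x, cached)]
    return x
```

With `reuse_every=2`, the expensive prediction runs on only half of the timesteps, which is the source of the inference speedup the abstract reports (the actual caching scheme in the paper may differ).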

Framework

Framework Image

Demo

BibTeX

@inproceedings{MDT-A2G,
  author    = {Xiaofeng Mao and Zhengkai Jiang and Qilin Wang and Chencan Fu and Jiangning Zhang and Jiafu Wu and Yabiao Wang and Chengjie Wang and Mingmin Chi},
  title     = {MDT-A2G: Exploring Masked Diffusion Transformers for Co-Speech Gesture Generation},
  booktitle = {Proceedings of the ACM International Conference on Multimedia (ACM MM)},
  year      = {2024},
}