TalkingPose: Efficient Face and Gesture Animation with Feedback-guided Diffusion Model

Alireza Javanmardi1, Pragati Jaiswal1,2, Tewodros Amberbir Habtegebrial2, Christen Millerdurai1, Shaoxiang Wang1,2, Alain Pagani1, Didier Stricker1,2
1German Research Center for Artificial Intelligence (DFKI) 2RPTU Kaiserslautern-Landau

TL;DR: Diffusion-based human face and gesture animation with GPU memory requirements comparable to those of image diffusion models.

Abstract

Recent advancements in diffusion models have significantly improved the realism and generalizability of character-driven animation, enabling the synthesis of high-quality motion from a single RGB image and a set of driving poses. Nevertheless, generating temporally coherent long-form content remains challenging. Existing approaches are constrained by computational and memory limitations: they are typically trained on short video segments, so they perform well only over limited frame lengths, which hinders extended coherent generation. To address these constraints, we propose TalkingPose, a diffusion-based framework designed for producing long-form, temporally consistent human upper-body animations. TalkingPose leverages driving frames to precisely capture expressive facial and hand movements and transfers them seamlessly to a target actor through a Stable Diffusion backbone. To ensure continuous motion and enhance temporal coherence, we introduce a feedback-driven mechanism built on image-based diffusion models. Notably, this mechanism incurs no additional computational cost and requires no secondary training stage, enabling the generation of animations of unlimited duration. Additionally, we introduce a comprehensive, large-scale dataset that serves as a new benchmark for human upper-body animation.
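
The abstract describes the feedback mechanism only at a high level. As a rough sketch of one plausible reading, assuming the feedback acts as a proportional correction that pulls each newly generated frame latent toward the latent of the preceding frame, the update could be written as

\[
\hat{z}_k = z_k + \beta\,\bigl(z_{k-1} - z_k\bigr), \qquad 0 \le \beta \le 1,
\]

where \(\beta\) is the proportional gain shown in the pipeline figure below. The symbols \(z_k\), \(z_{k-1}\), and the exact placement of this correction within sampling are assumptions, not details stated above.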

Model Pipeline

[Figure: TalkingPose model pipeline, illustrating human animation with GPU memory on par with an image diffusion model.]
TalkingPose Pipeline. Training: Appearance Encoder (CLIP + ReferenceNet) extracts source appearance features, while the Motion Encoder provides motion cues to the U-Net. Inference: A single RGB source image and a driving-pose condition are used within DDIM sampling to predict a latent representation, which is refined through a feedback loop with proportional gain (β).
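
To make the inference-time loop concrete, the snippet below is a minimal, self-contained PyTorch sketch of one way the closed-loop control could be wired around DDIM sampling. Everything here (TinyUNet, ddim_step, generate_frame, and the point at which the β-correction is applied) is illustrative and assumed rather than the released TalkingPose code; the tiny U-Net stands in for the appearance- and motion-conditioned denoiser so that the example runs without pretrained weights.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Placeholder for the appearance- and motion-conditioned denoising U-Net."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, z, t, cond):
        # A real model would embed the timestep t and inject appearance/motion
        # features via cross-attention; here the condition is simply added.
        return self.net(z + cond)

@torch.no_grad()
def ddim_step(unet, z_t, t, cond, alphas_cumprod):
    """One deterministic DDIM update from timestep t to t-1."""
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    eps = unet(z_t, t, cond)
    z0_pred = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_prev.sqrt() * z0_pred + (1 - a_prev).sqrt() * eps

@torch.no_grad()
def generate_frame(unet, cond, prev_latent, beta, alphas_cumprod, steps=20):
    """Sample one frame latent, then apply the proportional feedback that pulls
    it toward the latent of the previously generated frame."""
    z = torch.randn_like(prev_latent)
    for t in reversed(range(steps)):
        z = ddim_step(unet, z, t, cond, alphas_cumprod)
    return z + beta * (prev_latent - z)

# Toy usage: animate three frames from dummy driving-pose conditions.
unet = TinyUNet()
alphas_cumprod = torch.linspace(0.999, 0.01, 20)   # toy noise schedule
prev = torch.zeros(1, 4, 32, 32)                   # latent of the previous frame
for pose_cond in torch.randn(3, 1, 4, 32, 32):
    prev = generate_frame(unet, pose_cond, prev, beta=0.2, alphas_cumprod=alphas_cumprod)

Because each frame is corrected toward its predecessor and then becomes the reference for the next one, the loop is closed across frames without any extra network, consistent with the abstract's claim that the mechanism adds no additional computational cost.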

Effectiveness of Closed-loop Control Mechanism

TJE and PSNR vs. Model Complexity

[Figures: parameter analysis for the TJE metric; parameter analysis for PSNR.]

BibTeX

@article{javanmardi2025talkingpose,
  author    = {Javanmardi, Alireza and Jaiswal, Pragati and Habtegebrial, Tewodros Amberbir
               and Millerdurai, Christen and Wang, Shaoxiang and Pagani, Alain and Stricker, Didier},
  title     = {TalkingPose: Efficient Face and Gesture Animation with Feedback-guided Diffusion Model},
  journal   = {To appear},
  year      = {2025},
}

Acknowledgements

This work has been partially funded by the EU projects CORTEX2 (GA No. 101070192) and LUMINOUS (GA No. 101135724).