Facial Animation with GANs: Enhancing Temporal Coherence in Emotion Synthesis

  • Diego Addan Gonçalves UFPR
  • Eduardo Todt UFPR

Abstract


Generative Adversarial Networks (GANs) have made significant advances in generating high-quality facial animations, transforming static images into dynamic expressions. However, a persistent challenge in facial animation is ensuring temporal coherence: smooth and consistent transitions between frames. Its absence leads to visual artifacts and unnatural motion, hindering the realism of animated facial expressions. In this paper, we introduce a novel method to improve temporal coherence in GAN-based facial animation. By incorporating a specialized temporal consistency module and a recurrent loss function, our approach reduces abrupt transitions and enhances the fluidity of facial expression synthesis. We present both quantitative and qualitative evaluations of our method, which reduces temporal artifacts by 58% (TCM) and improves PSNR by 3.3 dB over baselines. We validate our approach on three facial expression datasets, demonstrating perceptual gains in temporal coherence that are essential for robust emotion-driven character animation pipelines.
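
The abstract describes a recurrent loss that penalizes abrupt frame-to-frame transitions in the generated sequence. The sketch below is a hedged illustration under assumptions, not the authors' implementation: it assumes a PyTorch training setup in which the generator emits a (batch, time, channels, height, width) tensor of frames, and the names temporal_consistency_loss, recurrent_generator_loss, and lambda_tc are hypothetical.

    # Minimal sketch (assumed setup, not the paper's code): a temporal
    # consistency term combined with a per-sequence adversarial loss.
    import torch
    import torch.nn.functional as F

    def temporal_consistency_loss(frames: torch.Tensor) -> torch.Tensor:
        """Penalize abrupt changes between consecutive generated frames.

        frames: tensor of shape (batch, time, channels, height, width).
        """
        # First-order temporal difference between neighbouring frames.
        diff = frames[:, 1:] - frames[:, :-1]
        return diff.abs().mean()

    def recurrent_generator_loss(fake_frames: torch.Tensor,
                                 disc_scores: torch.Tensor,
                                 lambda_tc: float = 10.0) -> torch.Tensor:
        """Adversarial loss over the sequence plus the temporal term.

        disc_scores: discriminator logits for the generated frames.
        lambda_tc: hypothetical weight balancing the two objectives.
        """
        # Non-saturating GAN generator loss, averaged over all time steps.
        adv = F.binary_cross_entropy_with_logits(
            disc_scores, torch.ones_like(disc_scores))
        return adv + lambda_tc * temporal_consistency_loss(fake_frames)

In a training loop of this kind, the temporal term would be added to the generator objective for each sequence so that neighbouring frames along an emotion trajectory stay close, which is one plausible way to realize the smoothing effect the abstract reports.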
Keywords: Facial Animation, GANs, Temporal Coherence, Deep Learning, Video Synthesis, Recurrent Loss
Published
30/09/2025
GONÇALVES, Diego Addan; TODT, Eduardo. Facial Animation with GANs: Enhancing Temporal Coherence in Emotion Synthesis. In: CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 38., 2025, Salvador/BA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 1-6.