Key Publications (3)
*Impact Factors are shown only for papers published within the last six years, as of 2026.
1. preprint | green · Citations: 0 · 2026
Deep Learning Based Facial Retargeting Using Local Patches
Yeonsoo Choi, Inyup Lee, Sihun Cha, Seonghyeon Kim, Sunjin Jung, Junyong Noh
arXiv (Cornell University)
In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While retargeting facial motion between models of similar shapes has been very successful, challenges arise when the retargeting is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion to preserve the semantics of the original facial motions after the retargeting. To achieve this, we propose a local patch-based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from the source video frame. These patches are processed through the Reenactment Module to generate correspondingly re-enacted target local patches. The Weight Estimation Module calculates the animation parameters for the target character at every frame for the creation of a complete facial animation sequence. Extensive experiments demonstrate that our method can successfully transfer the semantic meaning of source facial expressions to stylized characters with considerable variations in facial feature proportion. (A minimal code sketch of this three-module pipeline follows this entry.)
https://doi.org/10.48550/arxiv.2601.08429
Topics: Retargeting · Computer facial animation · Animation · Feature (linguistics) · Stylized fact · Face hallucination · Feature extraction · Facial expression · Face (sociological concept) · Computer animation
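The abstract above names a three-stage pipeline. The sketch below shows how such a pipeline could be wired together; the patch locations, patch count, network sizes, and blendshape count are placeholder assumptions made for illustration and are not taken from the paper.

```python
# Minimal, runnable sketch of the three-module pipeline named in the abstract.
# Patch locations, network sizes, and the blendshape count are assumptions
# for illustration; they do not reproduce the paper's architecture.
import torch
import torch.nn as nn

PATCH = 32            # assumed patch resolution
NUM_PATCHES = 4       # assumed: e.g. two eyes, mouth, brows
NUM_BLENDSHAPES = 50  # assumed size of the target character's rig

class PatchExtraction(nn.Module):
    """Automatic Patch Extraction Module: crops local patches from a source
    frame (hard-coded crop origins stand in for landmark detection)."""
    def __init__(self, origins):
        super().__init__()
        self.origins = origins  # list of (top, left) crop origins

    def forward(self, frame):  # frame: (B, 3, H, W)
        patches = [frame[:, :, t:t + PATCH, l:l + PATCH] for t, l in self.origins]
        return torch.stack(patches, dim=1)  # (B, P, 3, PATCH, PATCH)

class Reenactment(nn.Module):
    """Reenactment Module: maps each source patch to the corresponding
    re-enacted patch of the stylized target character."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, patches):  # (B, P, 3, h, w)
        b, p = patches.shape[:2]
        out = self.net(patches.flatten(0, 1))
        return out.view(b, p, *out.shape[1:])

class WeightEstimation(nn.Module):
    """Weight Estimation Module: regresses per-frame animation parameters
    (modeled here as blendshape weights in [0, 1]) from target patches."""
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(
            nn.Flatten(1),
            nn.Linear(NUM_PATCHES * 3 * PATCH * PATCH, 256), nn.ReLU(),
            nn.Linear(256, NUM_BLENDSHAPES), nn.Sigmoid())

    def forward(self, patches):
        return self.head(patches)  # (B, NUM_BLENDSHAPES)

extract = PatchExtraction([(0, 0), (0, 32), (32, 0), (32, 32)])
reenact, estimate = Reenactment(), WeightEstimation()
frame = torch.rand(1, 3, 64, 64)             # stand-in for one video frame
weights = estimate(reenact(extract(frame)))  # one keyframe of rig weights
print(weights.shape)                         # torch.Size([1, 50])
```

In this toy setup each video frame yields one vector of rig weights, so running it over all frames produces the complete facial animation sequence the abstract describes.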
2. article | hybrid · Citations: 1 · 2024
Deep Learning-Based Facial Retargeting Using Local Patches
Yeonsoo Choi, Inyup Lee, Sihun Cha, Seonghyeon Kim, Sunjin Jung, Junyong Noh
Computer Graphics Forum · IF 2.9 (2024)
In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While retargeting facial motion between models of similar shapes has been very successful, challenges arise when the retargeting is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion to preserve the semantics of the original facial motions after the retargeting. To achieve this, we propose a local patch-based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from the source video frame. These patches are processed through the Reenactment Module to generate correspondingly re-enacted target local patches. The Weight Estimation Module calculates the animation parameters for the target character at every frame for the creation of a complete facial animation sequence. Extensive experiments demonstrate that our method can successfully transfer the semantic meaning of source facial expressions to stylized characters with considerable variations in facial feature proportion.
https://doi.org/10.1111/cgf.15263
Topics: Retargeting · Computer science · Artificial intelligence · Computer vision · Computer graphics (images) · Seam carving · Deep learning · Image (mathematics)
3. article | hybrid · Citations: 4 · 2024
Speed-Aware Audio-Driven Speech Animation using Adaptive Windows
Sunjin Jung, Yeongho Seol, Kwanggyoon Seo, Hyeonseo Na, Seonghyeon Kim, Vanessa Tan, Junyong Noh
ACM Transactions on Graphics · IF 9.5 (2024)
We present a novel method that can generate realistic speech animations of a 3D face from audio using multiple adaptive windows. In contrast to previous studies that use a fixed-size audio window, our method accepts an adaptive audio window as input, reflecting the audio speaking rate to use consistent phonemic information. Our system consists of three parts. First, the speaking rate is estimated from the input audio using a neural network trained in a self-supervised manner. Second, the appropriate window size that encloses the audio features is predicted adaptively based on the estimated speaking rate. Another key element lies in the use of multiple audio windows of different sizes as input to the animation generator: a small window to concentrate on detailed information and a large window to consider broad phonemic information near the center frame. Finally, the speech animation is generated from the multiple adaptive audio windows. Our method can generate realistic speech animations from in-the-wild audio at any speaking rate, e.g., fast raps, slow songs, as well as normal speech. We demonstrate via extensive quantitative and qualitative evaluations, including a user study, that our method outperforms state-of-the-art approaches. (A minimal code sketch of the adaptive-window idea follows this entry.)
https://doi.org/10.1145/3691341
Topics: Computer science · Animation · Computer graphics (images) · Speech recognition
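The core mechanism here is a window size that adapts to the speaking rate, plus a generator that reads a small and a large window around the same center frame. Below is a minimal sketch of that idea under stated assumptions: the speaking-rate estimator is replaced by a constant, and the window rule, feature dimensions, and GRU generator are illustrative choices, not the paper's networks.

```python
# Minimal, runnable sketch of the adaptive-window idea from the abstract.
# The window rule, feature sizes, and GRU generator are illustrative
# assumptions; the speaking-rate estimator is replaced by a constant.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT = 28    # assumed audio feature dimension per animation frame
VERTS = 300  # assumed size of the per-frame animation output

def adaptive_window(rate, base=16):
    """Window length in frames: shrinks for fast speech, grows for slow
    speech, so each window spans comparable phonemic content."""
    return max(4, round(base / max(rate, 1e-3)))

def crop(audio, center, size):
    """Zero-padded crop of `size` frames centered on `center`."""
    half = size // 2
    padded = F.pad(audio, (0, 0, half, half))        # pad the time axis
    return padded[center:center + size].unsqueeze(0)  # (1, size, FEAT)

class Generator(nn.Module):
    """Reads two windows centered on the same frame: a small one for
    detail and a large one for broad phonemic context."""
    def __init__(self):
        super().__init__()
        self.small = nn.GRU(FEAT, 64, batch_first=True)
        self.large = nn.GRU(FEAT, 64, batch_first=True)
        self.head = nn.Linear(128, VERTS)

    def forward(self, small_win, large_win):
        _, hs = self.small(small_win)  # final hidden state: (1, B, 64)
        _, hl = self.large(large_win)
        return self.head(torch.cat([hs[-1], hl[-1]], dim=-1))

audio = torch.rand(120, FEAT)  # 4 s of per-frame audio features at 30 fps
rate = 1.6                     # stand-in for the estimated speaking rate
size = adaptive_window(rate)   # smaller window for faster-than-normal speech
gen = Generator()
frame = gen(crop(audio, 60, size), crop(audio, 60, 2 * size))
print(size, frame.shape)       # 10 torch.Size([1, 300])
```

At inference time this would be repeated for every center frame, re-estimating the rate and window size as the speech speeds up or slows down.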