Enhancing Implicit Neural Representations With Transfer Learning
E. W. M. Lee, Min Soo Kim, Chaoning Zhang, Sung‐Ho Bae
IEEE Access (IF 3.6, gold open access)
Abstract

Implicit Neural Representation (INR) has emerged as a powerful tool for encapsulating high-dimensional data within neural network parameters, providing continuous and differentiable representations. However, INR faces challenges such as prioritizing low-frequency components over high-frequency details (known as the spectral bias problem) and slow convergence during training. To mitigate these issues, we explore the application of transfer learning for INR models. We observe that transfer learning not only accelerates model convergence but also improves learning efficiency and enhances the representation of high-frequency details. Furthermore, we find that source images exhibiting higher edge density and contrast, along with reduced homogeneity, significantly enhance the learning performance on subsequent target images. This study provides new insights into the application of transfer learning in INR models and highlights its potential to enhance image reconstruction quality.
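As a minimal illustration of the transfer-learning setup the abstract describes, the sketch below pretrains a coordinate MLP (an INR) on a source image and then fine-tunes the same weights on a target image instead of starting from random initialization. The plain ReLU architecture, the `fit` helper, the hyperparameters, and the random placeholder tensors are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class INR(nn.Module):
    """Coordinate MLP mapping (x, y) in [-1, 1]^2 to an RGB value.
    A generic INR sketch; not the paper's exact architecture."""
    def __init__(self, hidden=256, depth=4):
        super().__init__()
        layers, in_dim = [], 2
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, 3))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

def fit(model, coords, pixels, steps=1000, lr=1e-4):
    """Regress the INR onto one image's (coordinate, pixel) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - pixels) ** 2).mean()
        loss.backward()
        opt.step()
    return model

# Random placeholder tensors standing in for N sampled pixels of real
# source and target images (coords in [-1, 1), RGB in [0, 1)).
N = 4096
source_coords, source_pixels = torch.rand(N, 2) * 2 - 1, torch.rand(N, 3)
target_coords, target_pixels = torch.rand(N, 2) * 2 - 1, torch.rand(N, 3)

model = INR()
fit(model, source_coords, source_pixels)   # pretrain on the source image
fit(model, target_coords, target_pixels)   # fine-tune: transfer-learned init
```

Per the abstract's claim, the second `fit` call should converge faster and recover high-frequency detail better than fitting the target from scratch, particularly when the source image has high edge density and contrast.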

Keywords
Computer science · Transfer of learning · Artificial intelligence · Artificial neural network
Type
article
IF / Citations
3.6 / 1
Publication year
2025