Implicit Neural Representation (INR) has emerged as a powerful tool for encoding high-dimensional data in neural network parameters, providing continuous and differentiable representations. However, INR faces challenges such as a tendency to fit low-frequency components before high-frequency details (the spectral bias problem) and slow convergence during training. To mitigate these issues, we explore the application of transfer learning to INR models. We observe that transfer learning not only accelerates model convergence but also improves learning efficiency and enhances the representation of high-frequency details. Furthermore, we find that source images with higher edge density and contrast, and lower homogeneity, significantly improve learning performance on subsequent target images. This study provides new insights into the application of transfer learning to INR models and highlights its potential to improve image reconstruction quality.
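As a minimal sketch of the transfer-learning setup described above (not the paper's actual implementation; the architecture, hyperparameters, and all function names here are illustrative assumptions), the PyTorch snippet below fits a small coordinate MLP with a SIREN-style sinusoidal activation to a source image, then reuses those weights as the initialization when fitting a target image, alongside a from-scratch baseline:

```python
# Hypothetical sketch: transfer learning for an image INR in PyTorch.
# A coordinate MLP is fit to a source image; its weights then initialize
# the model that is fine-tuned on a target image.
import torch
import torch.nn as nn

def coordinate_grid(h, w):
    """Pixel coordinates normalized to [-1, 1], shape (h*w, 2)."""
    ys = torch.linspace(-1.0, 1.0, h)
    xs = torch.linspace(-1.0, 1.0, w)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gy, gx], dim=-1).reshape(-1, 2)

class Sine(nn.Module):
    """SIREN-style sinusoidal activation, which helps counter spectral bias."""
    def forward(self, x):
        return torch.sin(30.0 * x)

def make_inr(hidden=256, layers=3):
    """Coordinate MLP mapping (y, x) -> grayscale intensity."""
    mods, in_dim = [], 2
    for _ in range(layers):
        mods += [nn.Linear(in_dim, hidden), Sine()]
        in_dim = hidden
    mods.append(nn.Linear(in_dim, 1))
    return nn.Sequential(*mods)

def fit(model, coords, pixels, steps=500, lr=1e-4):
    """Fit the INR to one image by plain MSE regression on all pixels."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), pixels)
        loss.backward()
        opt.step()
    return loss.item()

if __name__ == "__main__":
    h = w = 64
    coords = coordinate_grid(h, w)
    # Placeholder tensors; in practice these would be real source/target images.
    source = torch.rand(h * w, 1)
    target = torch.rand(h * w, 1)

    # 1) Pre-train the INR on the source image.
    pretrained = make_inr()
    fit(pretrained, coords, source)

    # 2) Transfer: copy the pre-trained weights, then fine-tune on the target.
    transferred = make_inr()
    transferred.load_state_dict(pretrained.state_dict())
    loss_transfer = fit(transferred, coords, target)

    # Baseline: identical architecture trained from scratch on the target.
    scratch = make_inr()
    loss_scratch = fit(scratch, coords, target)
    print(f"target loss  transfer: {loss_transfer:.5f}  scratch: {loss_scratch:.5f}")
```

The transfer here is simply weight initialization: the target model shares the source model's architecture so `load_state_dict` applies directly, and the claimed benefits (faster convergence, better high-frequency detail) would show up as the fine-tuned model reaching a lower loss than the from-scratch baseline in the same number of steps.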