We propose a data augmentation technique that uses a diffusion-based generative deep learning model to address data scarcity in skin disease diagnosis research. Specifically, we extend the Stable Diffusion model, a Latent Diffusion Model (LDM), to generate high-quality synthetic images. To mitigate the detail loss typical of existing diffusion models, we incorporate lesion-area masks and improve the encoder and decoder structures of the LDM. Multi-level embeddings from a CLIP-based image encoder capture detailed representations, ranging from fine textures to overall shapes. Additionally, we employ pre-trained segmentation and inpainting models to generate normal skin regions, and we apply interpolation techniques to synthesize images with gradually varying visual characteristics. While this approach has limitations for clinical use, it enhances data diversity and can serve as reference material. To validate our method, we conducted classification experiments on seven skin diseases using datasets that combine synthetic and real images. The results show improved classification performance, demonstrating the effectiveness of the proposed technique in addressing medical data scarcity and enhancing diagnostic accuracy.
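The interpolation step described above can be sketched as follows. This is a minimal illustration only: the function name, the use of simple linear interpolation, and the toy latent vectors are assumptions, since the abstract does not specify the interpolation scheme applied to the LDM latents.

```python
import numpy as np

def interpolate_latents(z_a, z_b, num_steps):
    """Linearly blend two latent vectors in `num_steps` even increments.

    Decoding each intermediate latent with the LDM decoder would yield
    synthetic images whose visual characteristics vary gradually from
    the appearance encoded by z_a to that encoded by z_b.
    """
    alphas = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - a) * z_a + a * z_b for a in alphas]

# Toy latents standing in for encoder outputs (hypothetical values).
z_lesion = np.zeros(4)   # latent of a lesion image
z_normal = np.ones(4)    # latent of a normal-skin image
steps = interpolate_latents(z_lesion, z_normal, num_steps=5)
```

In practice each element of `steps` would be passed through the LDM decoder; the endpoints reproduce the source latents, and the interior points give the gradual transition described in the abstract.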