Deep learning has enabled applications in medical diagnosis, education, and research. However, obtaining large-scale, high-quality data remains challenging due to privacy regulations and the scarcity of rare-disease data. Recent approaches therefore use deep generative image models to create synthetic data, increasing the diversity and quality of datasets available for medical applications. This study proposes an improved diffusion-based model for high-quality image generation across diverse domains. Inspired by the 8-channel VAE of Mefusion, we modified the VAE structure in Stable Diffusion to reduce artifacts. To address the loss of fine detail in the Latent Diffusion Model's compression process, we introduced multilevel embeddings and adapter layers. These additions improved synthetic-data quality in the dermatology domain. Using the HAM10000 dataset, we generated synthetic images for seven skin-disease conditions and conducted classification experiments to evaluate their utility. Classification accuracy using synthetic data alone was comparable to that obtained with the original data, and training on both synthetic and original data improved accuracy from 87% to 90%. These results confirm that synthetic data generated by our diffusion model is effective for training dermatology classifiers, and visual and quantitative evaluations further highlight its potential for medical applications.