Purpose: Out-of-Distribution (OOD) detection plays a crucial role in semiconductor manufacturing for identifying defects and ensuring quality control. However, collecting diverse real-world defect samples is expensive and challenging, which limits model performance. This study improves OOD detection by using a Deep Convolutional Generative Adversarial Network (DCGAN) to generate synthetic defect images, which are incorporated into the Outlier Exposure (OE) framework as an auxiliary dataset to enhance model generalization.

Methods: Using the WM-811K wafer dataset, we generate synthetic defect images via DCGAN and integrate them into the OE framework. The proposed approach is compared with Maximum Softmax Probability (MSP) and a standard OE model to evaluate its effectiveness. Performance is measured using FPR95, AUROC, and AUPR. Additionally, we investigate the impact of the number of DCGAN training epochs on image quality and detection performance.

Results: Experimental results indicate that OE with DCGAN significantly outperforms the baseline methods. Incorporating DCGAN-generated data reduces FPR95 while increasing AUROC and AUPR, demonstrating improved OOD detection capability. The study also highlights how optimizing DCGAN training enhances synthetic data quality and overall model performance.

Conclusion: This study confirms that DCGAN-generated defect images effectively mitigate data scarcity issues in semiconductor OOD detection. Future research should explore alternative generative models (e.g., StyleGAN, VQ-VAE) and address class imbalance challenges to further enhance robustness and reliability.
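The two scoring ideas the abstract contrasts can be made concrete. Below is a minimal NumPy sketch (not the authors' implementation) of the MSP score and of the standard OE penalty, which pushes the model's softmax toward the uniform distribution on auxiliary outliers — here, the role played by DCGAN-generated defect images. The function names and the use of raw logits are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_score(logits):
    # Maximum Softmax Probability: higher score = more in-distribution.
    return softmax(logits).max(axis=1)

def oe_penalty(outlier_logits):
    # Outlier Exposure term: cross-entropy between the softmax on
    # auxiliary outliers (e.g., DCGAN samples) and the uniform distribution.
    # Equals logsumexp(z) - mean(z); its minimum, log(K), is reached
    # when the softmax output is exactly uniform over the K classes.
    z = outlier_logits
    m = z.max(axis=1)
    lse = np.log(np.exp(z - m[:, None]).sum(axis=1)) + m
    return float(np.mean(lse - z.mean(axis=1)))
```

During training the OE penalty would be added, scaled by a weight, to the usual cross-entropy on in-distribution wafers; at test time MSP over the softmax outputs serves as the OOD score against which FPR95, AUROC, and AUPR are computed.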