Over the past few years, the Generative Adversarial Network (GAN) has seen significant growth as a generative model across various applications. However, training instability remains a challenge for GANs. To mitigate this problem, this paper proposes a novel GAN model that employs two parallelized generators. The proposed methodology feeds three sets of data to the discriminator and updates the networks using the average of the loss values. Experimental results show that the proposed model exhibits an ideal convergence graph and reduces the loss by about 40%. The results also show an improvement in the quality of the generated data, with the model remaining stable throughout training.
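To make the averaged-loss idea concrete, the following is a minimal, purely illustrative sketch. The abstract does not specify the architecture, so the toy logistic `discriminator`, the stand-in batches `fake1` and `fake2` for the two parallel generators, and all shapes are assumptions; the only element taken from the text is that the discriminator receives three sets of data (one real, one from each generator) and is updated with the average of the three losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # Toy discriminator: sigmoid of a linear score (illustrative only).
    return 1.0 / (1.0 + np.exp(-x @ w))

def bce(p, label):
    # Binary cross-entropy against a constant label (1 = real, 0 = fake).
    eps = 1e-12
    return -np.mean(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

w = rng.normal(size=4)            # discriminator parameters (toy)
real = rng.normal(size=(8, 4))    # batch of real samples
fake1 = rng.normal(size=(8, 4))   # stand-in for generator 1's output
fake2 = rng.normal(size=(8, 4))   # stand-in for generator 2's output

# Three sets of data are scored by the discriminator, and the update
# signal is the average of the three resulting loss values.
loss_real = bce(discriminator(real, w), 1.0)
loss_fake1 = bce(discriminator(fake1, w), 0.0)
loss_fake2 = bce(discriminator(fake2, w), 0.0)
d_loss = (loss_real + loss_fake1 + loss_fake2) / 3.0
```

In a full implementation, `d_loss` would be backpropagated through the discriminator, and each generator would receive its own adversarial loss from its corresponding fake batch.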