Automatic melody generation has recently attracted significant attention at the intersection of music creation and artificial intelligence. However, existing models often fall short in emotional expression, structural coherence, and diversity. To address these issues, we propose a melody generation model that combines Recurrent Neural Networks (RNNs) and Variational Autoencoders (VAEs), augmented with an emotional consistency loss and a generative adversarial loss. The VAE enhances melodic diversity, while the RNN captures both long- and short-term dependencies for better structural coherence. The emotional consistency loss keeps generated melodies aligned with the target emotion, and the adversarial loss improves their naturalness and fluency. Experimental results show that our model outperforms baseline models such as Music Transformer, MuseNet, and DeepBach in fluency, creativity, emotional expression, and harmony. The generated melodies are more expressive and innovative, offering a new approach to melody generation with improved emotional expression and diversity and laying a foundation for advancing automatic music creation technology.
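The abstract does not state the exact training objective, but one plausible reading is that the three components combine additively. A minimal sketch, assuming the standard VAE evidence lower bound plus the two auxiliary terms weighted by hypothetical scalar coefficients $\lambda_{\text{emo}}$ and $\lambda_{\text{adv}}$, might take the form:
\[
\mathcal{L}_{\text{total}}
= \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[-\log p_\theta(x \mid z)\right]
+ \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)}_{\text{VAE objective}}
+ \lambda_{\text{emo}} \, \mathcal{L}_{\text{emo}}
+ \lambda_{\text{adv}} \, \mathcal{L}_{\text{adv}},
\]
where $\mathcal{L}_{\text{emo}}$ would penalize disagreement between the target emotion and the emotion predicted for the generated melody, and $\mathcal{L}_{\text{adv}}$ would be the generator's loss against a discriminator trained to distinguish generated melodies from real ones. The specific forms of these terms and the weights $\beta$, $\lambda_{\text{emo}}$, and $\lambda_{\text{adv}}$ are assumptions for illustration, not details given in the abstract.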