AI-driven music composition: Melody generation using Recurrent Neural Networks and Variational Autoencoders
Han Zhao, Sung‐Wook Min, Jianwei Fang, Shanshan Bian
Alexandria Engineering Journal
Abstract

Automatic melody generation has recently gained significant attention in music creation and artificial intelligence. However, existing models often lack accuracy in emotional expression, coherence, and diversity. To address these issues, we propose a melody generation model based on Recurrent Neural Networks (RNN) and Variational Autoencoders (VAE), integrating emotional consistency loss and generative adversarial loss. This approach enhances melody diversity via VAE and captures long- and short-term dependencies using RNNs for better structural coherence. Emotional consistency loss helps maintain target emotions during generation, while generative adversarial loss improves naturalness and fluency. Experimental results show that our model outperforms traditional models like Music Transformer, MuseNet, and DeepBach in fluency, creativity, emotional expression, and harmony. The generated melodies are more expressive and innovative, providing a new method and perspective in melody generation, improving emotional expression and diversity, and laying a foundation for advancing automatic music creation technology.
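The abstract describes a training objective built from four parts: the VAE's reconstruction and KL terms, an emotional consistency loss, and a generative adversarial loss. A minimal sketch of how such a composite objective could be wired together is below; the function names, weights (`beta`, `lam_e`, `lam_g`), and latent size are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # VAE reparameterization trick: z = mu + sigma * eps,
    # giving a differentiable sample of the latent melody code.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

def total_loss(recon, kl, emotion, adversarial,
               beta=1.0, lam_e=0.5, lam_g=0.1):
    # Weighted sum of the four terms named in the abstract;
    # the weights here are placeholders, not values from the paper.
    return recon + beta * kl + lam_e * emotion + lam_g * adversarial

# Example: an 8-dimensional latent code at the standard-normal prior.
mu = np.zeros(8)
log_var = np.zeros(8)
z = reparameterize(mu, log_var)          # sampled latent melody code
kl = kl_divergence(mu, log_var)          # 0.0 when posterior == prior
loss = total_loss(recon=1.0, kl=kl, emotion=0.2, adversarial=0.5)
```

In a full model, `recon` would come from an RNN decoder predicting melody tokens from `z`, `emotion` from a classifier comparing generated and target emotion labels, and `adversarial` from a discriminator, per the roles the abstract assigns to each term.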

Keywords
Composition (language) · Artificial neural network · Recurrent neural network · Computer science · Artificial intelligence · Speech recognition · Art · Literature
Type
article
IF / Citations
- / 6
Publication year
2025
