Predicted State-Based Hierarchical Reinforcement Learning for Long-Term Decision Making in Urban Dynamic Scenarios
Seongmin Heo, Jeong hwan Jeon
Abstract

Generating optimal trajectories in dynamic environments is crucial for advanced autonomous driving. Analyzing multi-level processes individually can obscure the interdependencies between levels, resulting in suboptimal trajectories. Furthermore, short-term planning often fails to anticipate dynamic road conditions, thereby limiting hazard identification. This leads to steering or speed control errors due to high computational demands, which ultimately compromise smooth driving. To address these challenges, this study proposes a robust framework that integrates multi-level modules to generate optimal trajectories and to execute predicted state-based long-term planning. In particular, we employ Hierarchical Reinforcement Learning (HRL): the upper level makes high-level driving decisions, and the generated trajectory serves as an objective function for the lower-level motion planner, which is executed by a low-level controller. Additionally, the framework incorporates dynamic state prediction of surrounding vehicles, enabling long-term planning based on predicted state vectors. To evaluate the proposed framework, various scenarios were simulated using the CARLA autonomous driving simulator. Results show that the framework significantly outperforms baseline models in trajectory smoothness, computational efficiency, hazard avoidance, adaptability, and learning performance. These improvements demonstrate its effectiveness in dynamic multi-lane environments for autonomous driving.
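The abstract describes a two-level structure: a high-level policy makes driving decisions from predicted states of surrounding vehicles, and the resulting trajectory serves as the objective function for the low-level motion planner and controller. A minimal sketch of that flow is shown below; all function names, the constant-velocity prediction, and the gap-based decision rule are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a predicted-state hierarchical planning loop.
# All names and heuristics here are assumptions for illustration only.

def predict_states(vehicles, horizon, dt=0.1):
    """Constant-velocity rollout of surrounding vehicles (a common baseline)."""
    return [
        [(x + vx * dt * k, y + vy * dt * k) for (x, y, vx, vy) in vehicles]
        for k in range(1, horizon + 1)
    ]

def high_level_decision(ego_y, predicted, lane_width=3.5):
    """Choose 'keep' or 'change_left' from predicted gaps in the ego lane."""
    for step in predicted:
        for (x, y) in step:
            if abs(y - ego_y) < lane_width / 2 and 0 < x < 15:
                return "change_left"
    return "keep"

def reference_trajectory(ego_y, decision, horizon,
                         lane_width=3.5, dt=0.1, speed=10.0):
    """Trajectory implied by the decision; the low-level planner's objective."""
    target_y = ego_y + (lane_width if decision == "change_left" else 0.0)
    return [
        (speed * dt * k, ego_y + (target_y - ego_y) * min(1.0, k / horizon))
        for k in range(1, horizon + 1)
    ]

def tracking_cost(actual, reference):
    """Objective the low-level controller minimizes: sum of squared errors."""
    return sum((ax - rx) ** 2 + (ay - ry) ** 2
               for (ax, ay), (rx, ry) in zip(actual, reference))

# One slow vehicle ahead in the ego lane: (x, y, vx, vy).
vehicles = [(10.0, 0.0, 2.0, 0.0)]
predicted = predict_states(vehicles, horizon=20)
decision = high_level_decision(0.0, predicted)
ref = reference_trajectory(0.0, decision, horizon=20)
print(decision)  # the predicted blockage triggers a lane change
```

In the paper's framework the high-level decision comes from a learned HRL policy rather than the fixed rule above, but the data flow (predicted states → decision → reference trajectory → low-level tracking objective) follows the same shape.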

Keywords
Reinforcement learning · Term (time) · Computer science · State (computer science) · Artificial intelligence · Machine learning · Algorithm
Type
article
IF / Citations
- / 0
Publication year
2025