Generating optimal trajectories in dynamic environments is crucial for advanced autonomous driving. Treating the levels of a multi-level planning process in isolation can obscure the interdependencies between them, resulting in suboptimal trajectories. Furthermore, short-term planning often fails to anticipate dynamic road conditions, limiting hazard identification, while high computational demands can induce steering or speed-control errors that compromise smooth driving. To address these challenges, this study proposes a robust framework that integrates multi-level modules to generate optimal trajectories and to perform long-term planning based on predicted states. In particular, we employ Hierarchical Reinforcement Learning (HRL): the upper level makes high-level driving decisions, and the resulting trajectory serves as the objective function for the lower-level motion planner, whose output is executed by a low-level controller. Additionally, the framework incorporates dynamic state prediction of surrounding vehicles, enabling long-term planning over predicted state vectors. To evaluate the proposed framework, various scenarios were simulated in the CARLA autonomous driving simulator. Results show that the framework significantly outperforms baseline models in trajectory smoothness, computational efficiency, hazard avoidance, adaptability, and learning performance, demonstrating its effectiveness in dynamic multi-lane environments for autonomous driving.
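The hierarchical decomposition described above can be illustrated with a minimal sketch: an upper level selects a high-level maneuver from predicted gaps around the ego vehicle, a lower-level planner turns that decision into a short reference trajectory, and a low-level controller tracks it. All function names, the three-lane assumption, the gap-based heuristic, and the proportional controller are illustrative assumptions, not the paper's actual method.

```python
from dataclasses import dataclass

@dataclass
class EgoState:
    lane: int      # current lane index (0..2 in this sketch)
    speed: float   # longitudinal speed [m/s]

def upper_level_policy(state, predicted_gaps):
    """High-level decision (illustrative): choose the adjacent or current
    lane with the largest predicted headway to the nearest vehicle ahead."""
    candidates = {state.lane - 1, state.lane, state.lane + 1}
    candidates = {l for l in candidates if 0 <= l <= 2}  # assume 3 lanes
    return max(candidates, key=lambda l: predicted_gaps.get(l, 0.0))

def lower_level_planner(state, target_lane, horizon=5):
    """Motion planner (illustrative): linear lateral interpolation from the
    current lane toward the target lane over a short horizon."""
    return [state.lane + (target_lane - state.lane) * (k + 1) / horizon
            for k in range(horizon)]

def low_level_controller(current_lateral, reference):
    """Low-level controller (illustrative): proportional steering command
    toward the next point of the reference trajectory."""
    k_p = 0.5  # hypothetical gain
    return k_p * (reference - current_lateral)

# Usage: predicted gaps favor lane 2, so the upper level selects it and
# the planner emits a smooth lateral reference toward that lane.
ego = EgoState(lane=1, speed=20.0)
gaps = {0: 10.0, 1: 25.0, 2: 60.0}   # hypothetical predicted headways [m]
target = upper_level_policy(ego, gaps)
traj = lower_level_planner(ego, target)
steer = low_level_controller(ego.lane, traj[0])
```

The sketch only shows the information flow between levels; in the actual framework the upper-level decision and the planner's cost are learned with HRL rather than hand-coded heuristics.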