Although dynamic programming (DP) can achieve a globally optimal solution, it requires full travel information, such as the complete velocity profile. To address this limitation, we decompose the optimal control problem into stage and terminal costs, enabling optimization without full travel information. Model predictive control (MPC) minimizes the stage cost over a short horizon when short-term predicted information is available, while a terminal cost penalizes the state at the end of the horizon through a value function. Exploiting the similar driving patterns of repeated routes, we approximate a near-optimal value function through reinforcement learning (RL). Compared with conventional MPC, which relies solely on short-term information, the proposed RL-MPC reduces fuel consumption by 5.53% while satisfying the final state of charge (SOC) constraint. These results highlight the potential of RL-MPC to achieve energy-efficient performance in real-world driving scenarios without the need for full travel information.
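The cost decomposition described above (short-horizon stage costs closed by a learned terminal value) can be sketched on a toy SOC-management problem. Everything below is a hypothetical illustration, not the paper's vehicle model: the discretized SOC dynamics, the cost numbers, and the use of exact backward induction as a stand-in for the RL value-function approximation.

```python
import itertools

# Toy stand-in for the paper's setting; all states, dynamics, and
# costs are hypothetical illustrations, not the paper's model.
N_S, T, H = 11, 12, 3            # SOC levels, trip length, MPC horizon
ACTIONS = (-1, 0, 1)             # deplete battery, hold, charge battery
TARGET = 5                       # required final SOC level

def step(s, a):                  # deterministic, clipped SOC dynamics
    return min(N_S - 1, max(0, s + a))

def stage_cost(s, a):            # "fuel" per step: charging burns the most
    return {-1: 0.0, 0: 1.0, 1: 2.5}[a]

def terminal(s):                 # penalty for missing the final-SOC target
    return 0.0 if s >= TARGET else 100.0 * (TARGET - s)

# Time-indexed value function V[t][s], computed here by exact backward
# induction (DP). The paper instead *approximates* V with RL trained on
# repeated drives over the same route.
V = [[0.0] * N_S for _ in range(T + 1)]
V[T] = [terminal(s) for s in range(N_S)]
for t in reversed(range(T)):
    for s in range(N_S):
        V[t][s] = min(stage_cost(s, a) + V[t + 1][step(s, a)]
                      for a in ACTIONS)

def mpc_action(t, s, use_value):
    """Enumerate H-step action sequences; optionally add V as terminal cost."""
    best_cost, best_a = float("inf"), 0
    h = min(H, T - t)
    for seq in itertools.product(ACTIONS, repeat=h):
        cost, x = 0.0, s
        for a in seq:
            cost += stage_cost(x, a)
            x = step(x, a)
        if use_value:
            cost += V[t + h][x]   # learned terminal value closes the horizon
        elif t + h == T:
            cost += terminal(x)   # myopic MPC only sees the target late
        if cost < best_cost:
            best_cost, best_a = cost, seq[0]
    return best_a

def rollout(use_value, s0=5):
    s, total = s0, 0.0
    for t in range(T):
        a = mpc_action(t, s, use_value)
        total += stage_cost(s, a)
        s = step(s, a)
    return total + terminal(s)

print(rollout(use_value=True))   # matches the DP optimum V[0][5]
print(rollout(use_value=False))  # myopic MPC misses the final-SOC target
```

Because the terminal value here is exact, the short-horizon MPC reproduces the DP-optimal closed-loop cost, which is precisely the decomposition the abstract relies on; with an RL-approximated value function the result is only near-optimal.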