This study introduces an adaptive motion planning algorithm for multi-joint robots operating in unstructured environments. We integrate reinforcement learning with predictive environmental modeling, enabling robots to dynamically re-plan trajectories in response to unexpected obstacles and terrain variations. Our results show that the proposed planner achieves significant improvements in success rate and energy efficiency over traditional sampling-based methods. Its generalization to unseen configurations and its potential for real-time execution make it a viable candidate for disaster response, exploration, and industrial manipulation tasks.
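The re-planning loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: a breadth-first grid search stands in for the learned RL policy, and the `predict` callback stands in for the predictive environmental model; all names (`plan_path`, `execute_with_replanning`, `predict`) are hypothetical.

```python
from collections import deque

def plan_path(start, goal, blocked, n=6):
    """Breadth-first search on an n-by-n grid; a stand-in for the
    learned planner (the paper's actual policy is an RL model)."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < n and 0 <= nxt[1] < n
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # goal unreachable with current obstacle knowledge

def execute_with_replanning(start, goal, predict, max_steps=100):
    """Adaptive loop: follow the current plan, and re-plan whenever
    the predictive model flags an obstacle on the remaining path."""
    blocked, pos, trace = set(), start, [start]
    for _ in range(max_steps):
        if pos == goal:
            break
        path = plan_path(pos, goal, blocked)
        if path is None:
            break  # no feasible route given known obstacles
        newly = predict(pos) - blocked  # query predictive model at pos
        if newly & set(path[1:]):
            blocked |= newly  # predicted obstacle ahead: re-plan
            continue
        pos = path[1]  # take one step along the current plan
        trace.append(pos)
    return trace

# Usage: an obstacle at (3, 0) is predicted only once the robot nears
# it, forcing a detour mid-execution.
def predict(pos):
    return {(3, 0)} if pos == (2, 0) else set()

trace = execute_with_replanning((0, 0), (5, 0), predict)
```

The key design point mirrored from the abstract is that planning and execution are interleaved: the model is queried at every step, so new obstacle predictions trigger an immediate re-plan instead of invalidating the whole mission.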