We propose an adaptive policy transfer framework that enables real-world bimanual manipulation with reinforcement learning. Our method introduces a simulation-to-reality (sim2real) transfer technique that combines domain randomization with real-time policy adaptation. We train a dual-arm robot system in simulation and successfully transfer the learned policies to real-world manipulation tasks involving dynamic object interactions. Experimental results demonstrate improved task success rates and stability over baseline methods. This work contributes a robust strategy for deploying reinforcement-learned policies on complex, real-world collaborative robot systems.
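To make the domain-randomization component concrete, the sketch below shows the usual pattern: resample simulator parameters each training episode so the policy cannot overfit to one fixed dynamics model. This is an illustrative sketch only, not the authors' implementation; the parameter names, ranges, and the `run_episode` callback are all hypothetical stand-ins.

```python
import random

# Hypothetical physical-parameter ranges for a dual-arm simulation.
# (Illustrative values; not taken from the paper.)
PARAM_RANGES = {
    "object_mass_kg": (0.1, 1.0),
    "table_friction": (0.4, 1.2),
    "joint_damping": (0.05, 0.3),
    "actuator_delay_s": (0.0, 0.03),
}

def sample_domain():
    """Draw one randomized set of simulator parameters."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

def train(num_episodes, run_episode):
    """Domain-randomized training loop.

    `run_episode(params)` stands in for one simulated rollout plus a
    policy update under the sampled parameters.
    """
    for _ in range(num_episodes):
        run_episode(sample_domain())
```

In practice each sampled dictionary would be pushed into the simulator before the rollout; the real-time adaptation component described in the abstract would then refine the resulting policy on the physical robot.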