Quadrupedal robots hold promise for navigating cluttered environments with resilience akin to that of their animal counterparts. However, their floating-base configuration makes them susceptible to real-world uncertainties, posing substantial challenges for locomotion control. Deep reinforcement learning has emerged as a viable approach to developing robust locomotion controllers, but methods relying solely on proprioception often sacrifice collision-free locomotion, since they require front-foot contact to detect stairs and adapt the gait. Incorporating exteroception, meanwhile, requires an accurately modeled terrain map accumulated from exteroceptive sensor observations over time. This work proposes a novel method for fusing proprioception and exteroception through a resilient multi-modal reinforcement learning framework. The resulting controller demonstrates agile locomotion on a quadrupedal robot across diverse real-world courses, including rough terrain, steep slopes, and high-rise stairs, while remaining robust in out-of-distribution situations.
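The multi-modal fusion idea can be sketched as a policy that encodes each sensing modality separately before a shared action head. The following is a minimal NumPy illustration with hypothetical dimensions and untrained random weights standing in for learned parameters; it is not the architecture proposed in this work:

```python
import numpy as np

# Hypothetical dimensions, chosen for illustration only.
PROPRIO_DIM = 48       # e.g. joint positions/velocities, base orientation
EXTERO_DIM = 11 * 11   # e.g. a flattened local height-map patch
ACTION_DIM = 12        # e.g. target joint positions for a 12-DoF quadruped

rng = np.random.default_rng(0)

def linear_layer(in_dim, out_dim):
    """Random weights standing in for a trained layer."""
    return rng.normal(0.0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

# Separate encoder per modality, then a shared policy head.
W_p, b_p = linear_layer(PROPRIO_DIM, 64)
W_e, b_e = linear_layer(EXTERO_DIM, 64)
W_pi, b_pi = linear_layer(128, ACTION_DIM)

def policy(proprio, extero):
    """Encode each modality, fuse by concatenation, map to actions."""
    z_p = np.tanh(proprio @ W_p + b_p)       # proprioceptive latent
    z_e = np.tanh(extero @ W_e + b_e)        # exteroceptive latent
    z = np.concatenate([z_p, z_e])           # late fusion of modalities
    return np.tanh(z @ W_pi + b_pi)          # bounded action vector

action = policy(rng.normal(size=PROPRIO_DIM), rng.normal(size=EXTERO_DIM))
print(action.shape)  # (12,)
```

Separating the encoders lets each modality be corrupted or dropped independently during training, which is one common way such controllers are made resilient to noisy or missing exteroception.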