Reliable navigation of autonomous vessels depends critically on robust situational awareness, particularly object detection, which in turn requires accurate, 360-degree perception of the surrounding environment. However, most existing datasets lack the multi-view data needed for such full environmental coverage; the absence of large-scale, multi-view image datasets designed specifically for shipborne maritime situational awareness remains a significant challenge. To address this gap, we introduce the Multi-View Maritime Vision (MV2) dataset, comprising 159,386 visible-light images captured from six distinct viewpoints around a vessel and together providing complete 360-degree coverage of the surrounding scene, in direct support of maritime situational-awareness applications. The dataset includes object bounding boxes as well as semantic, instance, and panoptic segmentation labels, and spans a wide range of environmental conditions, supporting diverse computer-vision tasks. In addition, we benchmark state-of-the-art object-detection and panoptic-segmentation models on MV2, demonstrating its value for advancing maritime-autonomy research. The dataset is available at https://sites.google.com/view/multi-view-maritime-vision.