Video instance segmentation (VIS) is crucial for robotic perception, but state-of-the-art transformer-based methods demand computational resources that are prohibitive for edge deployment. We propose Edge-VIS, the first knowledge distillation framework specifically designed for video instance segmentation. It addresses the previously unexplored challenge of simultaneously transferring sophisticated spatiotemporal reasoning, spatial segmentation fidelity, and temporal instance-tracking consistency from a transformer-based teacher (GenVIS-Swin-L, 1,229 M parameters) to a lightweight CNN-based student (modified YOLOv8n-seg, 2.7 M parameters) for real-time edge deployment in robotic systems. Our multilevel distillation strategy combines feature-level, response-level, and attention-based distillation to preserve both segmentation quality and temporal stability. On the challenging Occluded Video Instance Segmentation (OVIS) dataset, which emphasizes severe occlusions, Edge-VIS achieves 46.0% mAP@0.5 while requiring only 14 GFLOPs (a 36.4% reduction) and 2.7 M parameters (a 15.6% reduction from the baseline YOLOv8n-seg). Relative to the teacher model, Edge-VIS retains 66.5% of its accuracy while being 455x more parameter-efficient and achieving 10.4x faster inference (52 vs. 5 FPS). Deployment on an NVIDIA Jetson AGX Orin demonstrates real-time performance at 250 FPS with only 29 W power consumption, confirming practical feasibility for battery-powered robotic systems.
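The abstract names the three distillation levels but not their formulation. Below is a minimal PyTorch sketch of one plausible way such terms could be combined, assuming the student's feature and attention maps have already been shape-aligned to the teacher's (e.g., by a 1x1-conv adapter, omitted here); the temperature `tau` and the weights `w_feat`, `w_resp`, `w_attn` are hypothetical hyperparameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def multilevel_distillation_loss(
    s_feat, t_feat,      # student/teacher intermediate feature maps, same shape
    s_logits, t_logits,  # student/teacher class logits, same shape
    s_attn, t_attn,      # student/teacher spatial attention maps, same shape
    tau=4.0,                              # softening temperature (hypothetical)
    w_feat=1.0, w_resp=1.0, w_attn=0.5,   # term weights (hypothetical)
):
    # Feature-level distillation: match intermediate representations directly.
    loss_feat = F.mse_loss(s_feat, t_feat)

    # Response-level distillation: KL divergence between temperature-softened
    # class distributions (classic Hinton-style knowledge distillation).
    loss_resp = F.kl_div(
        F.log_softmax(s_logits / tau, dim=-1),
        F.softmax(t_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau ** 2

    # Attention-based distillation: match L2-normalized spatial attention maps
    # so the student attends to the same regions as the teacher.
    s_a = F.normalize(s_attn.flatten(1), dim=1)
    t_a = F.normalize(t_attn.flatten(1), dim=1)
    loss_attn = F.mse_loss(s_a, t_a)

    return w_feat * loss_feat + w_resp * loss_resp + w_attn * loss_attn

# Usage with illustrative shapes (batch of 2 frames):
s_f, t_f = torch.randn(2, 256, 32, 32), torch.randn(2, 256, 32, 32)
s_l, t_l = torch.randn(2, 10, 40), torch.randn(2, 10, 40)
s_a, t_a = torch.rand(2, 1, 32, 32), torch.rand(2, 1, 32, 32)
loss = multilevel_distillation_loss(s_f, t_f, s_l, t_l, s_a, t_a)
```

The `tau ** 2` factor follows standard practice in response-based distillation: it keeps the gradient magnitude of the softened KL term roughly constant as the temperature varies.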