TransDSSL: Transformer Based Depth Estimation via Self-Supervised Learning
Daechan Han, Jeongmin Shin, Namil Kim, Soonmin Hwang, Yukyung Choi
IF 5.3 · IEEE Robotics and Automation Letters
Abstract

Recently, transformers have been widely adopted for various computer vision tasks and show promising results due to their ability to encode long-range spatial dependencies in an image effectively. However, very few studies have been conducted on adopting transformers for self-supervised depth estimation. When replacing the CNN architecture with a transformer in self-supervised learning of depth, we encounter several problems, such as a multi-scale photometric loss function that becomes problematic when used with transformers, and an insufficient ability to capture local details. In this letter, we propose an attention-based decoder module, Pixel-Wise Skip Attention (PWSA), to enhance fine details in feature maps while keeping the global context from transformers. In addition, we propose utilizing a self-distillation loss together with a single-scale photometric loss to alleviate the instability of transformer training by providing correct training signals. We demonstrate that the proposed model makes accurate predictions on large objects and thin structures that require global context and local details. Our model achieves state-of-the-art performance among self-supervised monocular depth estimation methods on the KITTI and DDAD benchmarks.
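The abstract does not specify PWSA's internal structure, so the following is only a rough illustrative sketch of the idea it describes: a per-pixel attention gate that re-weights the fine-detail skip feature before fusing it with the global-context decoder feature. The function names, tensor shapes, and the 1×1-convolution formulation here are assumptions for illustration, not the paper's actual definition.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pwsa_fuse(decoder_feat, skip_feat, w_attn, w_fuse):
    """Illustrative pixel-wise skip-attention fusion (hypothetical sketch).

    decoder_feat: (C, H, W) upsampled transformer feature (global context)
    skip_feat:    (C, H, W) encoder skip feature (local detail)
    w_attn:       (C, 2C) weights of a 1x1 conv producing the per-pixel gate
    w_fuse:       (C, 2C) weights of a 1x1 conv fusing the gated features
    """
    cat = np.concatenate([decoder_feat, skip_feat], axis=0)   # (2C, H, W)
    gate = sigmoid(np.einsum('oc,chw->ohw', w_attn, cat))     # per-pixel gate in (0, 1)
    gated = gate * skip_feat                                  # re-weight local details pixel-wise
    cat2 = np.concatenate([decoder_feat, gated], axis=0)      # (2C, H, W)
    return np.einsum('oc,chw->ohw', w_fuse, cat2)             # (C, H, W) fused feature

# Tiny shape check with random features and weights
rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
d = rng.standard_normal((C, H, W))
s = rng.standard_normal((C, H, W))
wa = rng.standard_normal((C, 2 * C)) * 0.1
wf = rng.standard_normal((C, 2 * C)) * 0.1
out = pwsa_fuse(d, s, wa, wf)
print(out.shape)  # (8, 4, 4)
```

The gate lets each pixel decide how much of the local skip detail to keep, which matches the abstract's stated goal of enhancing fine details without discarding the transformer's global context.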

Keywords
Computer science, Transformer, Artificial intelligence, Monocular, Pixel, Machine learning, Pattern recognition (psychology), Computer vision, Engineering, Voltage
Type
article
IF / Citations
5.3 / 36
Publication Year
2022
