Enhancing Analogical Reasoning in the Abstraction and Reasoning Corpus via Model-Based RL
Jihwan Lee, Woochang Sim, Sejin Kim, Sundong Kim
arXiv (Cornell University)
Abstract

This paper demonstrates that model-based reinforcement learning (model-based RL) is a suitable approach for the task of analogical reasoning. We hypothesize that model-based RL can solve analogical reasoning tasks more efficiently through the creation of internal models. To test this, we compared DreamerV3, a model-based RL method, with Proximal Policy Optimization, a model-free RL method, on the Abstraction and Reasoning Corpus (ARC) tasks. Our results indicate that model-based RL not only outperforms model-free RL in learning and generalizing from single tasks but also shows significant advantages in reasoning across similar tasks.
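The hypothesis the abstract tests — that an agent which builds an internal model of its environment can learn more efficiently by reusing that model — can be sketched with a classic tabular Dyna-Q loop on a toy task. This is purely illustrative and is not the paper's method: the chain environment, hyperparameters, and the choice of Dyna-Q (rather than DreamerV3/PPO on ARC) are all assumptions made for the sketch.

```python
import random

# Toy illustration of model-based RL (Dyna-Q style, NOT the paper's
# DreamerV3/PPO setup): the agent records a learned internal model of a
# tiny deterministic chain task and replays simulated transitions from it
# to speed up value learning beyond what real experience alone provides.
N = 5  # states 0..4; reaching state 4 solves the task

def step(s, a):
    """Deterministic transition: action 0 moves left, action 1 moves right."""
    ns = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    done = ns == N - 1
    return ns, (1.0 if done else 0.0), done

def dyna_q(planning_steps, episodes=30, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]
    model = {}  # learned internal model: (s, a) -> (s', r, done)
    for _ in range(episodes):
        s = 0
        for _ in range(200):  # step cap keeps every episode bounded
            greedy = 0 if Q[s][0] > Q[s][1] else 1
            explore = rng.random() < eps or Q[s][0] == Q[s][1]
            a = rng.randrange(2) if explore else greedy
            ns, r, done = step(s, a)
            # real-experience Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[ns]) * (not done) - Q[s][a])
            model[(s, a)] = (ns, r, done)
            # "planning": extra updates from transitions simulated by the model
            for _ in range(planning_steps):
                (ps, pa), (pns, pr, pdone) = rng.choice(list(model.items()))
                Q[ps][pa] += alpha * (pr + gamma * max(Q[pns]) * (not pdone)
                                      - Q[ps][pa])
            s = ns
            if done:
                break
    return Q, model

Q, model = dyna_q(planning_steps=10)
policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(N - 1)]
print(policy)  # the greedy policy should move right in every state
```

Setting `planning_steps=0` reduces this to plain model-free Q-learning; the planning loop is the model-based ingredient that the abstract's comparison isolates at much larger scale.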

Keywords
Abstraction, Analogical reasoning, Computer science, Model-based reasoning, Deductive reasoning, Reasoning system, Artificial intelligence, Automated reasoning, Opportunistic reasoning, Analytic reasoning
Type
preprint
IF / Citations
- / 0
Year published
2024
