Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching
Simon A. Aytes, Jinheon Baek, Sung Ju Hwang
Abstract

Recent advances in large language models (LLMs) have enabled strong reasoning capabilities through Chain-of-Thought (CoT) prompting, which elicits step-by-step problem solving, but often at the cost of excessive verbosity in intermediate outputs, leading to increased computational overhead. We propose Sketch-of-Thought (SoT), a prompting framework that integrates cognitively inspired reasoning paradigms with linguistic constraints to reduce token usage while preserving reasoning accuracy. SoT is designed as a flexible, modular approach and is instantiated with three paradigms (Conceptual Chaining, Chunked Symbolism, and Expert Lexicons), each tailored to distinct reasoning tasks and selected dynamically at test time by a lightweight routing model. Across 18 reasoning datasets spanning multiple domains, languages, and modalities, SoT achieves token reductions of up to 84% with minimal accuracy loss. In tasks such as mathematical and multi-hop reasoning, it even improves accuracy while shortening outputs.
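The abstract describes test-time routing: a lightweight model picks one of the three sketching paradigms per query and the corresponding instruction constrains the reasoning trace. A minimal sketch of that control flow, assuming a trivial keyword heuristic in place of the paper's actual routing model (the paradigm names are from the paper; the router logic and prompt strings here are purely illustrative):

```python
# Hypothetical sketch of SoT-style paradigm routing. Only the three
# paradigm names come from the paper; the keyword router and prompt
# wording below are illustrative stand-ins, not the authors' code.

PARADIGM_PROMPTS = {
    "conceptual_chaining": "Reason as short concept links (A -> B -> C); no full sentences.",
    "chunked_symbolism": "Reason in compact symbolic/equation chunks; minimal prose.",
    "expert_lexicons": "Reason using terse domain shorthand and abbreviations.",
}

def route_paradigm(question: str) -> str:
    """Stand-in for the paper's lightweight routing model: maps a query
    to one of the three paradigms with a crude keyword heuristic."""
    q = question.lower()
    if any(tok in q for tok in ("+", "*", "=", "how many", "calculate")):
        return "chunked_symbolism"          # arithmetic-style queries
    if any(tok in q for tok in ("diagnos", "dosage", "voltage")):
        return "expert_lexicons"            # specialist-domain queries
    return "conceptual_chaining"            # default: multi-hop / commonsense

def build_prompt(question: str) -> str:
    """Prepend the selected paradigm's sketching instruction to the query."""
    paradigm = route_paradigm(question)
    return f"[{paradigm}] {PARADIGM_PROMPTS[paradigm]}\nQ: {question}"

print(build_prompt("Calculate 12 * 7 + 5"))
```

In the paper the router is a trained model and the paradigm prompts carry exemplars, but the shape is the same: route once per query, then condition generation on the chosen constraint.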

Keywords
Key (lock) · Field (mathematics) · Matching (statistics) · Perspective (graphical) · Case-based reasoning
Type
article
IF / Citations
- / 1
Publication year
2025