article · gold · cited by 1 · 2025
Context and Layers in Harmony: A Unified Strategy for Mitigating LLM Hallucinations
Sangyeon Yu, Gyunyeop Kim, Sangwoo Kang
IF 2.2 · Mathematics
Abstract

Large language models, despite their strong performance, frequently produce hallucinated content due to excessive reliance on pre-trained knowledge while insufficiently integrating newly provided context. We introduce LACD, a technique that dynamically rebalances probability distributions across layers, ensuring critical context is not overshadowed. By emphasizing new prompt information, LACD alleviates lower-layer dominance and mitigates hallucinations. On the HotPotQA dataset, LACD outperforms basic context injection baselines by approximately 2.2% in exact match (EM) and matches or exceeds advanced methods such as DoLa and CAD. LACD also demonstrates robust gains on SQuAD, underscoring its capacity to reduce hallucinations while improving factual consistency. Overall, these findings highlight the importance of carefully integrating newly provided context with pre-trained knowledge to achieve more reliable language generation.
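The abstract describes LACD as dynamically rebalancing token probability distributions across layers so that newly provided context is not overshadowed by lower-layer (pre-trained) knowledge. The paper's implementation details are not given on this page; the NumPy sketch below only illustrates the general family of layer-contrastive decoding this describes (in the spirit of DoLa-style methods): the final layer's logits are contrasted against an earlier layer's, amplifying tokens whose support grows once context is integrated. The function name, the `alpha` weight, and the toy logits are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def layer_contrastive_decode(final_logits, early_logits, alpha=1.0):
    """Illustrative sketch (not the paper's exact method): contrast the
    final layer's logits against an earlier layer's. Tokens whose score
    rises between the early and final layers (often context-driven) are
    amplified; tokens already dominant in lower layers (parametric
    knowledge) are damped."""
    adjusted = (1.0 + alpha) * np.asarray(final_logits, dtype=float) \
               - alpha * np.asarray(early_logits, dtype=float)
    return softmax(adjusted)

# Toy 3-token vocabulary. Token 0 is favored by lower-layer (pre-trained)
# knowledge; token 1 gains support only in the final layer, after the new
# context has been integrated.
early = np.array([3.0, 0.0, 1.0])
final = np.array([2.6, 2.5, 1.0])

base = softmax(final)                                  # plain decoding
adj = layer_contrastive_decode(final, early, alpha=1.0)  # rebalanced
print(base.argmax(), adj.argmax())
```

With plain decoding the lower-layer-favored token 0 still wins, while the contrastive adjustment shifts the argmax to the context-supported token 1 — the qualitative effect the abstract attributes to mitigating lower-layer dominance.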

Keywords
Harmony (color) · Context (archaeology) · Psychology · Computer science · Cognitive science · Geography · Art
Type
article
IF / Citations
2.2 / 1
Publication year
2025