CPR: Mitigating Large Language Model Hallucinations with Curative Prompt Refinement
Jung‐Woo Shim, Yeong-Joon Ju, Ji Hoon Park, Seong-Whan Lee
Abstract

Recent advancements in large language models (LLMs) highlight their fluency in generating responses to diverse prompts. However, these models sometimes generate plausible yet incorrect "hallucinated" facts, undermining trust. A frequent but often overlooked cause of such errors is the use of poorly structured or vague prompts by users, leading LLMs to base responses on assumed rather than actual intentions. To mitigate hallucinations induced by these ill-formed prompts, we introduce Curative Prompt Refinement (CPR), a plug-and-play framework for curative prompt refinement that 1) cleans ill-formed prompts, and 2) generates additional informative task descriptions to align the user's intention with the prompt using a fine-tuned small language model. When applied to language models, we discover that CPR significantly increases the quality of generation while also mitigating hallucination. Empirical studies show that prompts refined with CPR achieve over a 90% win rate against the original prompts, without any external knowledge.
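The two-stage pipeline the abstract describes (clean the ill-formed prompt, then generate an informative task description with a fine-tuned small language model) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fine-tuned model is replaced by hypothetical stub functions, and the cleaning step uses a trivial whitespace/punctuation heuristic as a stand-in.

```python
def clean_prompt(prompt: str) -> str:
    # Stage 1 (stand-in heuristic): normalize an ill-formed prompt by
    # collapsing stray whitespace and ensuring terminal punctuation.
    # In CPR this stage is performed by a fine-tuned small language model.
    cleaned = " ".join(prompt.split())
    if cleaned and not cleaned.endswith(("?", ".", "!")):
        cleaned += "?"
    return cleaned

def generate_task_description(prompt: str) -> str:
    # Stage 2 (stand-in template): CPR generates an additional informative
    # task description so the model answers the user's actual intention;
    # here a fixed instruction template plays that role.
    return f"Answer the following question accurately and concisely: {prompt}"

def cpr_refine(prompt: str) -> str:
    # Full (sketched) pipeline: clean the prompt, then prepend the
    # generated task description before sending it to the target LLM.
    return generate_task_description(clean_prompt(prompt))

refined = cpr_refine("  what  is   the capital of france ")
```

The refined prompt, rather than the raw user input, is what would be passed to the target LLM for generation.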

Keywords
Computer science · Natural language processing
Type
article
IF / Citations
- / 1
Publication year
2024