Judging Against the Reference: Uncovering Knowledge-Driven Failures in LLM-Judges on QA Evaluation
Dongryeol Lee, Yerin Hwang, Tae Jin Kang, Minwoo Lee, Younhyung Chae, Kyomin Jung
arXiv.org
Abstract

While large language models (LLMs) are increasingly used as automatic judges for question answering (QA) and other reference-conditioned evaluation tasks, little is known about their ability to adhere to a provided reference. We identify a critical failure mode of such reference-based LLM QA evaluation: when the provided reference conflicts with the judge model's parametric knowledge, the resulting scores become unreliable, substantially degrading evaluation fidelity. To study this phenomenon systematically, we introduce a controlled swapped-reference QA framework that induces reference-belief conflicts. Specifically, we replace the reference answer with an incorrect entity and construct diverse pairings of original and swapped references with correspondingly aligned candidate answers. Surprisingly, grading reliability drops sharply under swapped references across a broad set of judge models. We empirically show that this vulnerability is driven by judges' over-reliance on parametric knowledge, leading judges to disregard the given reference under conflict. Finally, we find that this failure persists under common prompt-based mitigation strategies, highlighting a fundamental limitation of LLM-as-a-judge evaluation and motivating reference-based protocols that enforce stronger adherence to the provided reference.
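The swapped-reference construction described above can be sketched as follows. This is an illustrative Python sketch only; the class and function names (`QAItem`, `build_pairings`) and the pairing structure are assumptions for exposition, not the authors' actual implementation.

```python
# Sketch of a swapped-reference QA pairing: replace the gold reference
# with an incorrect entity, then pair each reference variant with a
# candidate aligned to either answer. A reference-faithful judge should
# grade strictly against the reference it is shown, even when that
# reference conflicts with the model's parametric knowledge.
from dataclasses import dataclass

@dataclass
class QAItem:
    question: str
    reference: str   # original (correct) reference answer
    swapped: str     # incorrect entity substituted for the reference

def build_pairings(item: QAItem):
    """Yield the four judge inputs: {original, swapped} reference x
    {original, swapped} candidate, with the reference-faithful label."""
    pairings = []
    for ref in (item.reference, item.swapped):
        for candidate in (item.reference, item.swapped):
            # Faithful label depends only on the shown reference,
            # not on which answer is factually true.
            expected = "correct" if candidate == ref else "incorrect"
            pairings.append({
                "question": item.question,
                "reference": ref,
                "candidate": candidate,
                "expected": expected,
            })
    return pairings

# Hypothetical example item (not from the paper's dataset).
item = QAItem(
    question="Who wrote 'Hamlet'?",
    reference="William Shakespeare",
    swapped="Christopher Marlowe",  # deliberately incorrect entity
)
for p in build_pairings(item):
    print(p["reference"], "|", p["candidate"], "->", p["expected"])
```

Under this setup, a judge that over-relies on parametric knowledge will mislabel the swapped-reference cases: it grades the factually true candidate as correct even when it contradicts the shown reference.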

Keywords
Grading (engineering), Construct (python library), Parametric statistics, Set (abstract data type), Reliability (semiconductor), Consistency (knowledge bases), Redundancy (engineering)
Type
article
IF / Citations
- / 0
Publication year
2026