When Wording Steers the Evaluation: Framing Bias in LLM Judges
Yerin Hwang, Dongryeol Lee, Tae Jin Kang, Minwoo Lee, Kyomin Jung
arXiv (Cornell University)
Abstract

Large language models (LLMs) are known to produce varying responses depending on prompt phrasing, indicating that subtle guidance in phrasing can steer their answers. However, the impact of this framing bias on LLM-based evaluation, where models are expected to make stable and impartial judgments, remains largely underexplored. Drawing inspiration from the framing effect in psychology, we systematically investigate how deliberate prompt framing skews model judgments across four high-stakes evaluation tasks. We design symmetric prompts using predicate-positive and predicate-negative constructions and demonstrate that such framing induces significant discrepancies in model outputs. Across 14 LLM judges, we observe clear susceptibility to framing, with model families showing distinct tendencies toward agreement or rejection. These findings suggest that framing bias is a structural property of current LLM-based evaluation systems, underscoring the need for framing-aware protocols.
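The probe described in the abstract can be illustrated with a minimal sketch: for each evaluation item, build a predicate-positive prompt ("Is this response correct?") and its predicate-negative counterpart ("Is this response incorrect?"), query the same judge with both, and count logically inconsistent verdict pairs. The prompt templates, the `judge` callable, and the flip-rate metric below are illustrative assumptions, not the paper's released protocol.

```python
# Minimal sketch of a predicate-positive / predicate-negative framing probe.
# The templates, the `judge` callable, and the flip-rate metric are
# illustrative assumptions, not the authors' actual implementation.

from typing import Callable, List, Tuple


def framed_prompts(claim: str) -> Tuple[str, str]:
    """Build a symmetric prompt pair for one evaluation item.

    Positive framing asks the judge to affirm the predicate;
    negative framing asks it to affirm the negated predicate.
    """
    positive = f'Is the following response correct? Answer Yes or No.\n"{claim}"'
    negative = f'Is the following response incorrect? Answer Yes or No.\n"{claim}"'
    return positive, negative


def flip_rate(items: List[str], judge: Callable[[str], str]) -> float:
    """Fraction of items where the two framings yield inconsistent verdicts.

    A framing-robust judge answers the two framings oppositely
    (Yes/No or No/Yes); identical answers count as a framing flip.
    """
    flips = 0
    for claim in items:
        pos_prompt, neg_prompt = framed_prompts(claim)
        pos = judge(pos_prompt).strip().lower().startswith("yes")
        neg = judge(neg_prompt).strip().lower().startswith("yes")
        if pos == neg:  # consistent judges answer these oppositely
            flips += 1
    return flips / len(items)


if __name__ == "__main__":
    # Stub judge with a pure agreement tendency: it says "Yes" to both
    # "correct?" and "incorrect?", so every item registers as a flip.
    always_agree = lambda prompt: "Yes"
    demo_items = ["The capital of Australia is Canberra."]
    print(flip_rate(demo_items, always_agree))  # -> 1.0
```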

Keywords
Framing (construction) · Framing effect · Response bias · Cognitive bias
Type
preprint
IF / Citations
- / 0
Publication Year
2026