Publications
Conference
SUMMER SEMINAR
2001.07
INSIDE: LLMs’ Internal States Retain the Power of Hallucination Detection
On Large Language Models’ Hallucination with Regard to Known Facts
ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems
LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models
Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation