Publications
Conference
SUMMER SEMINAR
2001.08
Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment
PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions
RAFT: Adapting Language Model to Domain Specific RAG