Article · Gold open access · Citations: 0 · 2026

Diversified Prompting Strategy for Improving Slot Filling With Large Language Models
Mirr Shin, Youhyun Shin
IEEE Access (IF 3.6)
Abstract

We propose a diversified prompting strategy to address the challenges of slot filling with Large Language Models (LLMs), where recall often suffers from prediction omissions and precision declines due to duplicate or excessive slot assignments. Our strategy combines sub-prompt, which partitions candidate slots into smaller groups to improve recall, and multi-view prompt, which applies diverse structural prompt variations to the same utterance. Final slot predictions are selected through threshold-based majority voting, effectively balancing recall and precision. Experiments on three benchmark datasets (SNIPS, MASSIVE, and MultiWoz) with six LLMs (bloomz, falcon, llama2, llama3, qwen2, and gemma) show consistent improvements when compared with the baseline and all single-prompt methods. For example, on SNIPS, llama3-8B improves recall from 78.4 to 90.5 and F1 from 72.6 to 82.0. Additionally, we conducted experiments across various model sizes to confirm the general applicability of our methodology. These results demonstrate that the proposed diversified prompting strategy effectively restores balance among recall, precision, and F1, offering a scalable methodology for enhancing LLM-based slot filling.
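The abstract describes aggregating the slot predictions produced by the different prompt variants through threshold-based majority voting. The paper's actual implementation details are not given here, so the following is a minimal sketch under assumptions: each variant's output is modeled as a set of (slot, value) pairs, and the function name, data shapes, and the 0.5 threshold are all hypothetical.

```python
from collections import Counter

def majority_vote_slots(predictions, threshold=0.5):
    """Keep a (slot, value) pair if it appears in at least `threshold`
    fraction of the prompt variants' predictions.

    predictions: list of sets of (slot, value) pairs, one set per
    prompt variant (sub-prompts and multi-view prompts combined).
    """
    votes = Counter(pair for pred in predictions for pair in set(pred))
    min_votes = threshold * len(predictions)
    return {pair for pair, count in votes.items() if count >= min_votes}

# Three hypothetical prompt variants predicting slots for one utterance
preds = [
    {("city", "Seoul"), ("date", "Friday")},
    {("city", "Seoul"), ("date", "Friday"), ("time", "noon")},
    {("city", "Seoul")},
]
print(majority_vote_slots(preds))  # {("city", "Seoul"), ("date", "Friday")}
```

Raising the threshold trades recall for precision (fewer, higher-confidence slots survive), which matches the balancing role the voting step plays in the proposed strategy.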

Keywords
Benchmark, Baseline, Scalability, Recall, Language model, Precision and recall
Type
article
IF / Citations
3.6 / 0
Publication year
2026