We propose a diversified prompting strategy to address the challenges of slot filling with Large Language Models (LLMs), where recall often suffers from prediction omissions and precision declines due to duplicate or excessive slot assignments. Our strategy combines sub-prompting, which partitions candidate slots into smaller groups to improve recall, with multi-view prompting, which applies diverse structural prompt variations to the same utterance. Final slot predictions are selected through threshold-based majority voting, effectively balancing recall and precision. Experiments on three benchmark datasets (SNIPS, MASSIVE, and MultiWOZ) with six LLMs (bloomz, falcon, llama2, llama3, qwen2, and gemma) show consistent improvements over the baseline and all single-prompt methods. For example, on SNIPS, llama3-8B improves recall from 78.4 to 90.5 and F1 from 72.6 to 82.0. We further conducted experiments across various model sizes to confirm the general applicability of our methodology. These results demonstrate that the proposed diversified prompting strategy effectively restores the balance among recall, precision, and F1, offering a scalable methodology for enhancing LLM-based slot filling.
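The threshold-based voting step can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name `vote_slots`, the per-slot tie-breaking rule, and the default threshold of 0.5 are all assumptions for the sake of the example. Each prompt variant (a sub-prompt or a multi-view prompt) is assumed to yield a dict mapping slot names to predicted values; a (slot, value) pair is kept only if enough variants agree on it.

```python
from collections import Counter

def vote_slots(predictions, threshold=0.5):
    """Aggregate slot predictions from multiple prompt variants.

    predictions: list of dicts, one per prompt variant, each mapping
      slot name -> predicted value.
    threshold: minimum fraction of variants that must agree on a
      (slot, value) pair for it to survive (illustrative default).
    """
    # Count votes for each (slot, value) pair across all variants.
    votes = Counter()
    for pred in predictions:
        for slot, value in pred.items():
            votes[(slot, value)] += 1

    n = len(predictions)
    final = {}  # slot -> (value, vote count)
    for (slot, value), count in votes.items():
        if count / n >= threshold:
            # Keep only the most-voted value per slot, which suppresses
            # duplicate or conflicting assignments (the precision issue).
            if slot not in final or count > final[slot][1]:
                final[slot] = (value, count)
    return {slot: value for slot, (value, _) in final.items()}
```

With three variants predicting `{"city": "Paris", "date": "Friday"}`, `{"city": "Paris"}`, and `{"city": "Lyon", "date": "Friday"}`, a 0.5 threshold keeps `city=Paris` and `date=Friday` (each backed by 2 of 3 variants) and discards the minority `city=Lyon` vote.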