I-CALM: Incentivizing Confidence-Aware Abstention for LLM Hallucination Mitigation

📰 arXiv cs.AI

arXiv:2604.03904v1 (cross-listed)

Abstract: Large language models (LLMs) frequently produce confident but incorrect answers, partly because common binary scoring conventions reward answering over honestly expressing uncertainty. We study whether prompt-only interventions -- explicitly announcing reward schemes for answer-versus-abstain decisions plus humility-oriented normative principles -- can reduce hallucination risk without modifying the model. Our focus is epistemic abstention on fac…
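
An announced reward scheme of this kind implies a break-even confidence threshold: with reward R for a correct answer, penalty P for a wrong one, and 0 for abstaining, answering is worthwhile only when confidence exceeds P / (R + P). Below is a minimal sketch of such a prompt-side intervention; the reward values, function names, and prompt wording are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of an announced answer-versus-abstain reward scheme.
# All reward values and prompt wording here are illustrative assumptions,
# not taken from the I-CALM paper.

def abstention_threshold(reward_correct: float, penalty_wrong: float) -> float:
    """Confidence p* above which answering beats abstaining (abstaining pays 0).

    Expected value of answering at confidence p:
        p * reward_correct - (1 - p) * penalty_wrong
    which is positive exactly when p > penalty_wrong / (reward_correct + penalty_wrong).
    """
    return penalty_wrong / (reward_correct + penalty_wrong)

def build_prompt(question: str,
                 reward_correct: float = 1.0,
                 penalty_wrong: float = 3.0) -> str:
    """Prepend an explicit reward scheme so the model can weigh answering vs. abstaining."""
    p_star = abstention_threshold(reward_correct, penalty_wrong)
    return (
        f"Scoring: +{reward_correct:g} for a correct answer, "
        f"-{penalty_wrong:g} for an incorrect answer, 0 for 'I don't know'. "
        f"Answer only if your confidence exceeds {p_star:.0%}; "
        f"otherwise reply exactly 'I don't know'.\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    # With +1 / -3 / 0, the break-even confidence is 3 / (1 + 3) = 75%.
    print(build_prompt("In what year was the Treaty of Tordesillas signed?"))
```

With the default +1/-3/0 scheme the threshold works out to 75%, so a scorer-rational responder abstains on anything it is less than three-quarters sure of.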

Published 7 Apr 2026