Noise-Aware In-Context Learning for Hallucination Mitigation in ALLMs
📰 ArXiv cs.AI
arXiv:2604.09021v1 Announce Type: cross

Abstract: Auditory large language models (ALLMs) have demonstrated strong general capabilities in audio understanding and reasoning tasks. However, their reliability is still undermined by hallucination issues. Existing hallucination evaluation methods are formulated as binary classification tasks, which are insufficient to characterize the more complex hallucination patterns that arise in generative tasks. Moreover, current hallucination mitigation strategies …