The Persuasion Paradox: When LLM Explanations Fail to Improve Human-AI Team Performance
📰 ArXiv cs.AI
arXiv:2604.03237v1 Announce Type: cross

Abstract: While natural-language explanations from large language models (LLMs) are widely adopted to improve transparency and trust, their impact on objective human-AI team performance remains poorly understood. We identify a Persuasion Paradox: fluent explanations systematically increase user confidence and reliance on AI while failing to reliably improve, and in some cases actively undermining, task accuracy. Across three controlled human-subject studies spanning abstr