Robust Explanations for User Trust in Enterprise NLP Systems

📰 ArXiv cs.AI

arXiv:2604.12069v1 Announce Type: cross

Abstract: Robust explanations are increasingly required for user trust in enterprise NLP, yet pre-deployment validation is difficult in the common case of black-box deployment (API-only access), where representation-based explainers are infeasible and existing studies provide limited guidance on whether explanations remain stable under real user noise, especially when organizations migrate from encoder classifiers to decoder LLMs. To close this gap, we prop…

Published 15 Apr 2026