UNBOX: Unveiling Black-Box Visual Models with Natural Language

📰 ArXiv cs.AI

arXiv:2603.08639v2 Announce Type: replace-cross Abstract: Ensuring trustworthiness in open-world visual recognition requires models that are interpretable, fair, and robust to distribution shifts. Yet modern vision systems are increasingly deployed as proprietary black-box APIs, exposing only output probabilities and hiding architecture, parameters, gradients, and training data. This opacity prevents meaningful auditing, bias detection, and failure analysis. Existing explanation methods assume w…
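To make the access model concrete: the abstract describes APIs that return only output probabilities while hiding everything else. A minimal sketch of that interface, using a hypothetical `BlackBoxClassifier` stand-in (the class, its random weights, and `predict_proba` are illustrative assumptions, not part of the paper):

```python
import numpy as np

class BlackBoxClassifier:
    """Hypothetical stand-in for a proprietary vision API.

    Callers observe only output probabilities; the architecture,
    parameters (self._W), gradients, and training data stay hidden,
    which is exactly the setting the abstract describes.
    """

    def __init__(self, n_features: int = 4, n_classes: int = 3, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Hidden parameters: never exposed through the public interface.
        self._W = rng.normal(size=(n_features, n_classes))

    def predict_proba(self, x) -> np.ndarray:
        """Return class probabilities for one feature vector x."""
        logits = np.asarray(x, dtype=float) @ self._W
        e = np.exp(logits - logits.max())  # numerically stable softmax
        return e / e.sum()

api = BlackBoxClassifier()
probs = api.predict_proba([0.2, 0.5, 0.1, 0.9])
print(probs)  # a length-3 probability vector summing to 1
```

Any auditing or explanation method restricted to this interface must work from repeated queries alone, with no gradient or parameter access.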

Published 16 Apr 2026