What do your logits know? (The answer may surprise you!)
📰 ArXiv cs.AI
arXiv:2604.09885v1 Announce Type: new
Abstract: Recent work has shown that probing model internals can reveal a wealth of information not apparent from the model generations. This poses the risk of unintentional or malicious information leakage, where model users are able to learn information that the model owner assumed was inaccessible. Using vision-language models as a testbed, we present the first systematic comparison of information retained at different "representational levels" as it is …
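The core contrast in the abstract (internal representations retaining information that the output logits wash out) can be illustrated with a minimal probing sketch. Everything below is synthetic and hypothetical: the "hidden states" and "logits" are simulated arrays, not activations from a real vision-language model, and the ridge-regression probe is just one common choice of linear probe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: a binary attribute is linearly encoded in the hidden
# states, but heavily diluted by the time it reaches the logits.
n, d_hidden, d_vocab = 500, 64, 10
attr = rng.integers(0, 2, size=n)                  # hidden attribute labels
direction = rng.normal(size=d_hidden)              # encoding direction
hidden = rng.normal(size=(n, d_hidden)) + np.outer(attr - 0.5, direction)
# Logits: a down-projection of the hidden states, scaled down and noised,
# so the attribute signal is mostly lost at this level.
logits = 0.01 * (hidden @ rng.normal(size=(d_hidden, d_vocab))) \
         + rng.normal(size=(n, d_vocab))

def probe_accuracy(X, y):
    """Fit a ridge-regression linear probe and report its accuracy."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # add bias column
    w = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(Xb.shape[1]),
                        Xb.T @ (2 * y - 1))        # targets in {-1, +1}
    return ((Xb @ w > 0) == y.astype(bool)).mean()

print(f"probe on hidden states: {probe_accuracy(hidden, attr):.2f}")
print(f"probe on logits:        {probe_accuracy(logits, attr):.2f}")
```

On this toy data the hidden-state probe recovers the attribute almost perfectly while the logit probe performs near chance, mirroring the kind of gap between representational levels that the paper sets out to measure systematically.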