Seeing Through Circuits: Faithful Mechanistic Interpretability for Vision Transformers

arXiv cs.AI

arXiv:2604.14477v1 (announce type: new)

Abstract: Transparency of neural networks' internal reasoning is at the heart of interpretability research, contributing to trust, safety, and understanding of these models. The field of mechanistic interpretability has recently focused on studying task-specific computational graphs, defined by connections (edges) between model components. Such edge-based circuits have been defined in the context of large language models, yet vision-based approaches so far only co…

Published 17 Apr 2026