Supervised Learning Has a Necessary Geometric Blind Spot: Theory, Consequences, and Minimal Repair
arXiv:2604.21395v1 Announce Type: cross Abstract: We prove that empirical risk minimisation (ERM) imposes a necessary geometric constraint on learned representations: any encoder that minimises supervised loss must retain non-zero Jacobian sensitivity in directions that are label-correlated in the training data but nuisance at test time. This is not a contingent failure of current methods; it is a mathematical consequence of the supervised objective itself. We call this the geometric blind spot of supervised learning.
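The core claim can be illustrated with a toy example (a minimal sketch, not the paper's construction): when a nuisance feature is correlated with the label through the training distribution, even an exact least-squares ERM solution assigns it non-zero weight, so the model's Jacobian has non-zero sensitivity along the nuisance direction. The variable names and the specific data-generating process below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical setup: a latent "core" factor truly drives the label.
# Both observed features are noisy views of it, so the second feature
# (the nuisance) is label-correlated at training time only by proxy.
core = rng.normal(size=n)
x1 = core + 0.5 * rng.normal(size=n)  # causal feature, observed noisily
x2 = core + 0.5 * rng.normal(size=n)  # nuisance: correlated via training distribution
y = core                              # target

X = np.column_stack([x1, x2])

# ERM here is plain least squares; w is the exact minimiser.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# For a linear model the Jacobian is just w, so sensitivity along the
# nuisance direction e2 = (0, 1) is |w[1]| -- non-zero, because leaning
# on the correlated nuisance strictly lowers training loss.
nuisance_dir = np.array([0.0, 1.0])
print("nuisance sensitivity:", abs(w @ nuisance_dir))
```

If the nuisance decorrelates from the label at test time, this non-zero sensitivity is exactly what makes the ERM solution fragile under the shift the abstract describes.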