Loop Corrections to the Training and Generalization Errors of Random Feature Models
📰 ArXiv cs.AI
arXiv:2604.12827v1 Announce Type: cross

Abstract: We investigate random feature models in which neural networks sampled from a prescribed initialization ensemble are frozen and used as random features, with only the readout weights optimized. Adopting a statistical-physics viewpoint, we study the training, test, and generalization errors beyond the mean-kernel approximation. Since the predictor is a nonlinear functional of the induced random kernel, the ensemble-averaged errors depend not only o…
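The setup the abstract describes can be sketched in a few lines: sample a network's first-layer weights from an initialization ensemble, freeze them as random features, and fit only the linear readout. This is an illustrative sketch, not the paper's method; the architecture, activation, data model, and ridge parameter below are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: input dim d, number of random features p, samples n
d, p, n = 20, 200, 100

# First-layer weights sampled from a (Gaussian) initialization ensemble,
# then frozen: they are never trained.
W = rng.normal(size=(p, d)) / np.sqrt(d)

def features(X):
    """Frozen nonlinear random features phi(x) = tanh(W x)."""
    return np.tanh(X @ W.T)

# Synthetic data from an assumed linear teacher (for illustration only)
X_train = rng.normal(size=(n, d))
y_train = X_train @ rng.normal(size=d) / np.sqrt(d)

# Only the readout weights a are optimized, here by ridge regression
lam = 1e-3
Phi = features(X_train)                       # (n, p) feature matrix
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y_train)

# Training error of the fitted predictor
train_err = np.mean((Phi @ a - y_train) ** 2)
```

Because the predictor `Phi @ a` depends nonlinearly on the random kernel induced by `W`, averaging the training or test error over the ensemble of initializations is exactly the kind of computation the abstract says goes beyond the mean-kernel approximation.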