Universality Patterns in the Training of Neural Networks

Presented at Microsoft Research Lab - India, Bengaluru, India, 2019

This work proposes and demonstrates a surprising pattern in the training of neural networks: there is a one-to-one relation between the values of any pair of losses (such as cross-entropy, mean squared error, \(0/1\) error, etc.) evaluated for the model at any point of a training run. This pattern is universal in the sense that the one-to-one relationship is identical across architectures (such as VGG, ResNet, DenseNet), algorithms (SGD and SGD with momentum), and training loss functions (cross-entropy and mean squared error).

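One way to get a feel for this pattern is to log several losses at every point of a training run and inspect whether each pair traces out a single curve. The sketch below is not the paper's code; it is a minimal illustration on a hypothetical synthetic binary-classification task, training a logistic model with plain SGD and recording cross-entropy, mean squared error, and \(0/1\) error at each step.

```python
# Minimal sketch (assumed toy setup, not the paper's experiments):
# log cross-entropy, MSE, and 0/1 error along one SGD run and print pairs,
# so that the claimed one-to-one relation between losses can be eyeballed.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic binary classification data.
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lr = 0.1
history = []  # (cross_entropy, mse, zero_one_error) at each step

for step in range(500):
    p = sigmoid(X @ w)
    # Losses evaluated on the full training set at this point of the run.
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    mse = np.mean((p - y) ** 2)
    err01 = np.mean((p > 0.5) != y)
    history.append((ce, mse, err01))
    # Full-batch gradient step on the cross-entropy training objective.
    grad = X.T @ (p - y) / n
    w -= lr * grad

hist = np.array(history)
# If the one-to-one relation holds, the (CE, MSE) and (CE, 0/1) pairs below
# should lie on a single curve as training progresses.
for i in range(0, 500, 100):
    print(f"step {i:3d}  CE={hist[i,0]:.4f}  MSE={hist[i,1]:.4f}  0/1={hist[i,2]:.4f}")
```

Repeating such a run with a different architecture or training loss and overlaying the resulting curves is the kind of comparison the universality claim refers to.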
This work was submitted to the Deep Phenomena workshop at ICML 2019. The PDF of the work is available here.
