## Scaling laws of optimization algorithms for Deep Learning - the Graphon perspective

Research project, University of Washington, Seattle, WA, USA

Stochastic optimization algorithms are used to train large (Deep) Neural Networks ((D)NNs). The non-linear training dynamics of two-layer NNs can be described as a mean-field interacting particle system in which the “particles” are the neurons of the single hidden layer. Wasserstein gradient flows often arise from such mean-field interactions among exchangeable particles; this relies on the observation that the permutation symmetry of the neurons in the hidden layer allows the problem to be viewed as an optimization problem over probability measures. Going beyond the two-layer setting, multi-layer NNs can be viewed as large computational graphs and can therefore possess different groups of symmetries. This body of work aims to describe analytical scaling limits of stochastic optimization algorithms as the size of the network grows. We build on the existing theory of exchangeable arrays, graphons (analytical limits of dense graphs), the general theory of gradient flows on metric spaces, and insights from propagation of chaos to characterize this scaling limit. We obtain a generalized notion of the McKean–Vlasov equation on graphons for which the phenomenon of *propagation of chaos* holds. In the asymptotically zero-noise case, this limit is a gradient flow on the metric space of graphons.
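
As a sketch of the objects involved (the notation here is illustrative, not taken from this work): for a two-layer network with mean-field scaling, the empirical measure of the hidden-layer parameters evolves, in the large-width limit, as a Wasserstein gradient flow of the risk functional; a graphon-based analogue replaces the single measure by a family of label-indexed particles coupled through a graphon $W$:

```latex
% Two-layer NN in mean-field scaling; \theta_i are the hidden neurons:
f_N(x) = \frac{1}{N}\sum_{i=1}^{N} \sigma(x;\theta_i), \qquad
\mu^N_t = \frac{1}{N}\sum_{i=1}^{N} \delta_{\theta_i(t)}
\;\xrightarrow[N\to\infty]{}\; \mu_t .

% Limiting dynamics: Wasserstein gradient flow of the risk F over
% probability measures (delta F / delta mu is the first variation):
\partial_t \mu_t
  = \nabla_\theta \cdot \Big( \mu_t \,\nabla_\theta
    \frac{\delta F}{\delta \mu}(\mu_t) \Big).

% Illustrative graphon McKean--Vlasov analogue: particles carry labels
% u \in [0,1] and interact through a graphon W(u,v), with b a drift
% determined by the architecture and B^u independent Brownian motions:
d\theta^u_t
  = -\int_0^1 W(u,v)\, \nabla_\theta\, b\big(\theta^u_t,\mu^v_t\big)\,dv\,dt
    + \sigma\, dB^u_t , \qquad \mu^v_t = \mathrm{Law}(\theta^v_t).
```

In the zero-noise regime ($\sigma \to 0$), such label-indexed dynamics can be read as a gradient flow with respect to a metric on graphon space, matching the limit described above.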