Gradient flows on graphons
Research project, University of Washington, Seattle, WA, USA
Understanding scaling limits of gradient flow processes on large unlabeled graphs. The problem is motivated by the optimization of permutation-invariant risk functions of (single-layer and deep) neural networks. The theoretical framework builds on the theory of gradient flows on the Wasserstein space, which has been used to derive scaling limits of (stochastic) gradient descent ((S)GD) processes for single hidden layer neural networks. Related questions concern the qualitative nature of the stochasticity in the SGD process (sub-Gaussian versus heavy-tailed noise).
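For concreteness, here is a minimal sketch of the standard mean-field picture referenced above (the width N, features \sigma(x;\theta), and risk F are illustrative placeholders, not notation from this project):

\[
f_N(x) = \frac{1}{N}\sum_{i=1}^{N} \sigma(x;\theta_i),
\qquad
\mu^N = \frac{1}{N}\sum_{i=1}^{N} \delta_{\theta_i},
\qquad
R(\theta_1,\dots,\theta_N) = F(\mu^N),
\]

i.e., the risk of a width-$N$ single hidden layer network is permutation invariant in $(\theta_1,\dots,\theta_N)$ and hence a function of the empirical measure of the neurons. As $N \to \infty$, suitably rescaled (S)GD converges to the gradient flow of $F$ on the Wasserstein space $(\mathcal{P}_2(\mathbb{R}^d), W_2)$,

\[
\partial_t \mu_t = \nabla \cdot \Bigl( \mu_t \, \nabla \frac{\delta F}{\delta \mu}(\mu_t, \cdot) \Bigr),
\]

where $\delta F / \delta \mu$ denotes the first variation of $F$. The graphon setting plays the analogous role for unlabeled graphs: the empirical measure is replaced by a limiting graphon, and permutation invariance by invariance under measure-preserving maps.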