Dissertation/Thesis Abstract

Finding Critical and Gradient-Flat Points of Deep Neural Network Loss Functions
by Frye, Charles Gearhart, Ph.D., University of California, Berkeley, 2020, 90; 27957645
Abstract (Summary)

Despite the fact that the loss functions of deep neural networks are highly non-convex, gradient-based optimization algorithms converge to approximately the same performance from many random initial points. This makes neural networks easy to train, which, combined with their high representational capacity and implicit and explicit regularization strategies, leads to machine-learned algorithms of high quality with reasonable computational cost in a wide variety of domains.

One thread of work has focused on explaining this phenomenon by numerically characterizing the local curvature at critical points of the loss function, where gradients are zero. Such studies have reported that the loss functions used to train neural networks have no local minima that are much worse than global minima, backed up by arguments from random matrix theory. More recent theoretical work, however, has suggested that bad local minima do exist.
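Concretely, such a characterization amounts to diagonalizing the Hessian at a candidate critical point and reporting its index, the fraction of negative eigenvalues: zero at a local minimum, strictly between zero and one at a saddle. The sketch below (ours, for illustration only; not code from the studies cited) does this for a toy two-parameter loss with a saddle at the origin, using a finite-difference Hessian.

import numpy as np

def loss(theta):
    # toy non-convex loss with a critical point (a saddle) at the origin
    a, b = theta
    return a**2 - b**2 + 0.1 * a * b

def hessian(f, theta, eps=1e-4):
    # central finite differences; exact for this quadratic toy loss
    n = theta.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4 * eps**2)
    return H

eigs = np.linalg.eigvalsh(hessian(loss, np.zeros(2)))  # principal curvatures
index = np.mean(eigs < 0)  # fraction of descent directions
print(f"eigenvalues = {eigs}, index = {index}")  # index 0.5: a saddle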

In this dissertation, we show that one cause of this gap is that the methods used to numerically find critical points of neural network losses suffer, ironically, from a bad local minimum problem of their own. This problem is caused by gradient-flat points, where the gradient vector is in the kernel of the Hessian matrix of second partial derivatives. At these points, the loss function becomes, to second order, linear in the direction of the gradient, which violates the assumptions necessary to guarantee convergence for second-order critical-point-finding methods. We present evidence that approximately gradient-flat points are a common feature of several prototypical neural network loss functions.
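To unpack the definition (in our notation, not the dissertation's): write g = ∇L(θ) for the gradient and H for the Hessian at θ. If g lies in ker(H), then gᵀHg = 0, so the second-order expansion L(θ + t·g) ≈ L(θ) + t·‖g‖² + (t²/2)·gᵀHg is linear in t. Moreover, the auxiliary objective F(θ) = ½‖∇L(θ)‖² that gradient-norm-minimizing critical-point finders descend has gradient ∇F = H·g, which vanishes at a gradient-flat point even though ∇L does not. The one-dimensional toy below (ours, for illustration; not code from the dissertation) shows the resulting stall.

# Toy scalar loss with an exactly gradient-flat point at x = 0:
# L(x) = x**4 + x has grad(x) = 4x^3 + 1 and hess(x) = 12x^2, so at x = 0
# the gradient equals 1 (x = 0 is not a critical point) while the Hessian
# vanishes, trivially placing the gradient in the Hessian's kernel.

def grad(x):
    return 4 * x**3 + 1

def hess(x):
    return 12 * x**2

# Gradient descent on F(x) = 0.5 * grad(x)**2, whose derivative is
# hess(x) * grad(x): started at x = 0.5, the iterates creep toward the
# gradient-flat point and stall there with |grad| still near 1, never
# reaching the true critical point at x = -(1/4)**(1/3) ~ -0.63.
# (A Newton step -grad(x)/hess(x) fares no better: it diverges as
# hess(x) -> 0.)
x, lr = 0.5, 1e-2
for _ in range(10_000):
    x -= lr * hess(x) * grad(x)

print(f"x = {x:.4f}, |grad(x)| = {abs(grad(x)):.4f}")  # x ~ 0.001, |grad| ~ 1.0

In higher dimensions the Hessian need not vanish entirely; it is enough for the gradient to lie in, or near, its kernel, which is why the dissertation tracks approximate gradient-flatness rather than the exact condition.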

Indexing (document details)
Advisor: Bouchard, Kristofer E
Committee: DeWeese, Michael; Olshausen, Bruno; Hardt, Moritz
School: University of California, Berkeley
Department: Neuroscience
School Location: United States -- California
Source: DAI-B 82/4(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Computer science, Neurosciences, Applied Mathematics
Keywords: Critical points, Deep learning, Neural networks, Optimization
Publication Number: 27957645
ISBN: 9798678171931