Dissertation/Thesis Abstract

Statistical Tests for Optimization Efficiency
by Boyles, Levi Beinarauskas, M.S., University of California, Irvine, 2010, 45; 1483253
Abstract (Summary)

Learning problems, such as linear or logistic regression, are typically formulated as optimization problems. Taking the view that the data are generated by a stochastic process, we argue that one should view optimization as a statistical procedure as well. By considering the statistical properties of the update variables used during optimization (e.g., gradients), we can construct frequentist hypothesis tests to determine the reliability of these updates. Using these tests, we determine the batch size required to compute reliable parameter updates and let the batch grow as learning proceeds. This provides not only computational benefits but also avoids overfitting, by stopping when the batch size has grown to the size of the full dataset. In this work, we illustrate these ideas on L1-regularized problems, obtaining an algorithm that is roughly as efficient as stochastic gradient descent but has the additional advantages of a natural stopping criterion and fewer, more natural meta-parameters to tune.
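The abstract's core idea can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual procedure: it uses plain least squares (the thesis treats L1-regularized problems) and a crude per-coordinate z-test on the mean gradient (the thesis develops the proper frequentist tests). All function names and the synthetic data are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data (illustrative only).
N, d = 2000, 5
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=N)

def per_sample_grads(w, idx):
    """Per-sample gradients of the squared error 0.5*(x.w - y)^2."""
    r = X[idx] @ w - y[idx]          # residuals, shape (n,)
    return r[:, None] * X[idx]       # gradient per sample, shape (n, d)

def gradient_reliable(G, z=1.96):
    """Crude z-test sketch: is any coordinate of the mean gradient
    distinguishable from zero given its standard error? (Assumed
    form of the test; the thesis derives the actual tests.)"""
    n = G.shape[0]
    mean = G.mean(axis=0)
    se = G.std(axis=0, ddof=1) / np.sqrt(n) + 1e-12
    return bool(np.any(np.abs(mean) > z * se))

w = np.zeros(d)
batch, lr = 50, 0.05
for step in range(1000):
    idx = rng.choice(N, size=batch, replace=False)
    G = per_sample_grads(w, idx)
    if gradient_reliable(G):
        w -= lr * G.mean(axis=0)     # update is statistically reliable
    elif batch < N:
        batch = min(2 * batch, N)    # noisy update: grow the batch
    else:
        break                        # full-data gradient indistinguishable
                                     # from noise: natural stopping point
```

Note how the stopping criterion falls out for free: once the batch is the full dataset and the gradient still fails the test, further updates would fit noise, so the loop halts.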

Indexing (document details)
Advisor: Welling, Max
Committee: Ihler, Alexander; Ramanan, Deva
School: University of California, Irvine
Department: Computer Science - M.S.
School Location: United States -- California
Source: MAI 49/02M, Masters Abstracts International
Subjects: Computer science
Keywords: Large scale learning, Machine learning, Optimization
Publication Number: 1483253
ISBN: 978-1-124-35420-0
Copyright © 2019 ProQuest LLC. All rights reserved.