Dissertation/Thesis Abstract

Bad Optimizations Make Good Learning
by Chen, Ziqi, M.S., University of California, Santa Cruz, 2012, 40; 1508167
Abstract (Summary)

This thesis reports on experiments aimed at explaining why machine learning algorithms using the greedy stochastic gradient descent (SGD) algorithm sometimes generalize better than algorithms using other optimization techniques. We propose two hypotheses, namely the "canyon effect" and "classification insensitivity", and illustrate them with two data sources. On these data sources, SGD generalizes more accurately than SVMperf, which performs more intensive optimization, over a wide variety of choices of the regularization parameter. Finally, we report on some similar, but predictably less dramatic, effects on natural data.
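For readers unfamiliar with the setup being compared, the following is a minimal, hypothetical sketch of the kind of SGD learner the abstract contrasts with SVMperf: a single pass of SGD over an L2-regularized hinge loss (the linear SVM objective), with a Pegasos-style decreasing step size. The synthetic data, step-size schedule, and value of the regularization parameter lambda are illustrative assumptions only, not the thesis's actual experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data with labels in {-1, +1}; stands in for the thesis's data sources.
    n, d = 1000, 20
    X = rng.normal(size=(n, d))
    true_w = rng.normal(size=d)
    y = np.sign(X @ true_w + 0.1 * rng.normal(size=n))

    lam = 0.01       # regularization parameter (varied widely in the experiments)
    w = np.zeros(d)

    for t, i in enumerate(rng.permutation(n), start=1):
        eta = 1.0 / (lam * t)          # Pegasos-style decreasing step size
        margin = y[i] * (w @ X[i])
        # Subgradient step on lam/2 * ||w||^2 + max(0, 1 - y_i * w.x_i)
        if margin < 1:
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:
            w = (1 - eta * lam) * w

    train_err = np.mean(np.sign(X @ w) != y)
    print(f"training error after one SGD pass: {train_err:.3f}")

SVMperf, by contrast, solves the same regularized objective to high precision with a cutting-plane method; the thesis's claim is that the cruder SGD optimization can nonetheless yield better generalization.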

Indexing (document details)
Advisor: Helmbold, David P.
Committee: Long, Philip M., Warmuth, Manfred K.
School: University of California, Santa Cruz
Department: Computer Science
School Location: United States -- California
Source: MAI 50/05M, Masters Abstracts International
Source Type: DISSERTATION
Subjects: Computer science
Keywords: Classifications, Learning, Optimizations, Stochastic gradient descent
Publication Number: 1508167
ISBN: 9781267262004