Dissertation/Thesis Abstract

Duality and Data Dependence in Boosting
by Telgarsky, Matus, Ph.D., University of California, San Diego, 2013; 194 pages; Publication Number 3587597
Abstract

Boosting algorithms produce accurate predictors for complex phenomena by welding together collections of simple predictors. In the classical method AdaBoost, as well as its immediate variants, the welding points are determined by convex optimization; unlike typical applications of convex optimization in machine learning, however, the AdaBoost scheme eschews the usual regularization and constraints used to control numerical and statistical properties.
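For concreteness, here is a minimal sketch of the classical AdaBoost scheme the abstract refers to, using exhaustive threshold stumps as the simple predictors. This is illustrative only; the function names and the stump learner are assumptions, not code from the dissertation. Note the absence of any regularization term or constraint on the combined weights, which is the point the abstract emphasizes.

```python
import numpy as np

def adaboost(X, y, n_rounds=50):
    """Sketch of AdaBoost with threshold stumps (illustrative, not the thesis's code).
    X: (m, d) feature matrix; y: (m,) labels in {-1, +1}."""
    m, d = X.shape
    w = np.full(m, 1.0 / m)                 # example weights (dual variables)
    ensemble = []                           # list of (alpha, feature, threshold, sign)
    for _ in range(n_rounds):
        best = None
        # pick the stump with the smallest weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= thr, 1, -1)
                    err = w @ (pred != y)
                    if best is None or err < best[0]:
                        best = (err, j, thr, s, pred)
        err, j, thr, s, pred = best
        err = max(err, 1e-12)               # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)   # unregularized line search
        ensemble.append((alpha, j, thr, s))
        w *= np.exp(-alpha * y * pred)      # multiplicative weight update
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= thr, 1, -1)
                for a, j, thr, s in ensemble)
    return np.sign(score)
```

The weight vector w is renormalized each round purely for numerical convenience; no cap is ever placed on the coordinates of the combined predictor.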

On the other hand, the data and simple predictors impose rigid structure on the behavior of AdaBoost variants, and moreover convex duality provides a lens to resolve this rigidity. This structure is fundamental to the properties of these methods, and in particular leads to numerical and statistical convergence rates.
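One standard instance of this duality (a textbook Fenchel-duality computation, not quoted from the dissertation): writing A for the m-by-n margin matrix with entries A_{ij} = y_i h_j(x_i) over m examples and n simple predictors, the unregularized exponential-loss problem solved by AdaBoost pairs with an entropy maximization over example weights that decorrelate every simple predictor from the labels.

```latex
% Primal: unconstrained exponential-loss minimization over combination
% weights \lambda of the simple predictors, with A_{ij} = y_i h_j(x_i).
\[
  \inf_{\lambda \in \mathbb{R}^n} \; \sum_{i=1}^m \exp\bigl(-(A\lambda)_i\bigr).
\]
% Fenchel dual: maximum-entropy choice of nonnegative example weights q
% under the decorrelation constraint A^\top q = 0.
\[
  \sup_{q \ge 0,\; A^\top q = 0} \; \sum_{i=1}^m \bigl(q_i - q_i \ln q_i\bigr).
\]
```

The constraint A^\top q = 0 is exactly the statement that no simple predictor retains an edge under the weighting q, which is how the data and the predictor class impose the rigid structure the abstract describes.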

Indexing (document details)
Advisor: Dasgupta, Sanjoy
Committee: Chaudhuri, Kamalika; Fitzsimmons, Patrick; Freund, Yoav; Gill, Philip; Schapire, Robert
School: University of California, San Diego
Department: Computer Science and Engineering
School Location: United States -- California
Source: DAI-B 74/11(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Statistics, Computer science
Keywords: Boosting, Convex analysis, Data dependence, Duality, Machine learning
Publication Number: 3587597
ISBN: 9781303249082