Dissertation/Thesis Abstract

Feature Selection by Iterative Reweighting: An Exploration of Algorithms for Linear Models and Random Forests
by Jaiantilal, Abhishek, Ph.D., University of Colorado at Boulder, 2013, 164; 3592307
Abstract (Summary)

In many areas of machine learning and data science, the available data are represented as vectors of feature values. Some of these features are useful for prediction, but others are spurious or redundant. Feature selection is commonly used to identify which features are worth keeping, and features are typically selected in an all-or-none fashion for inclusion in a model. We describe an alternative approach that has received little attention in the literature: determining the relative importance of features via continuous weights, and performing multiple iterations of model training that iteratively reweight features so that the least useful features eventually obtain a weight of zero. We explore feature selection by iterative reweighting for two classes of popular machine learning models: L1-penalized linear models and Random Forests.
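To make the idea concrete, the following is a minimal sketch of one iterative-reweighting loop for an L1-penalized linear model, using scikit-learn's Lasso. The feature-weighted penalty is simulated by rescaling the design matrix; the weight-update rule, the regularization strength, and the normalization are illustrative assumptions, not the dissertation's exact algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

def iteratively_reweighted_lasso(X, y, alpha=0.1, n_iter=5, eps=1e-6):
    """Illustrative iterative reweighting: features deemed unimportant
    receive ever-smaller weights until they drop out entirely."""
    n, d = X.shape
    w = np.ones(d)                     # continuous importance weights
    for _ in range(n_iter):
        # A feature-weighted L1 penalty is equivalent to an ordinary
        # Lasso fit on features rescaled by their importance weights.
        fit = Lasso(alpha=alpha, max_iter=10_000).fit(X * w, y)
        beta = fit.coef_ * w           # coefficients on the original scale
        # Re-derive weights from the fitted coefficients; the least
        # useful features drift toward (and eventually reach) zero.
        w = np.abs(beta) / (np.abs(beta).max() + eps)
    return beta, w
```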

Recent studies have shown that incorporating importance weights into L1 models improves predictive performance in a single iteration of training. In Chapter 3, we advance the state of the art by developing an alternative method for estimating feature importance based on subsampling. Extending the approach to multiple iterations of training, using the importance weights from iteration n to bias the training on iteration n + 1, seems promising, but past studies found no benefit from iterative reweighting. In Chapter 4, using our improved estimates of feature importance, we obtain a significant 7.48% reduction in error rate over standard L1-penalized algorithms, and nearly as large an improvement over alternative feature selection algorithms such as the Adaptive Lasso, the Bootstrap Lasso, and MSA-LASSO.
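The abstract does not spell out the subsampling-based importance estimate of Chapter 3; the sketch below shows one plausible reading, a stability-selection-style score that counts how often each feature survives an L1 fit on random subsamples. The function name, subsample fraction, and nonzero threshold are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import Lasso

def subsample_importance(X, y, alpha=0.1, n_subsamples=50, frac=0.5, seed=0):
    """Importance of feature j = fraction of subsamples on which an
    L1 model assigns it a nonzero coefficient (a stability-style score)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    counts = np.zeros(d)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        beta = Lasso(alpha=alpha, max_iter=10_000).fit(X[idx], y[idx]).coef_
        counts += np.abs(beta) > 1e-8
    return counts / n_subsamples      # weights in [0, 1] for reweighted training
```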

In Chapter 5, we consider iterative reweighting in the context of Random Forests and contrast it with a more standard backward-elimination technique, which trains models with the full complement of features and iteratively removes the least important feature. Alongside this contrast, we also compare several measures of importance, including our own proposal based on evaluating models constructed with and without each candidate feature. We show that our importance measure yields both higher accuracy and greater sparsity than importance measures obtained without retraining models (including those proposed by Breiman and Strobl), though at a greater computational cost.
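As a rough illustration of an importance measure based on retraining with and without each candidate feature, the sketch below scores each feature by the drop in cross-validated Random Forest accuracy when that feature is removed. The cross-validation setup and hyperparameters are assumptions, not the dissertation's protocol; a backward-elimination loop would simply drop the lowest-scoring feature and repeat.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def retrain_importance(X, y, n_estimators=100, cv=5, seed=0):
    """Importance of feature j = loss in CV accuracy when the forest
    is retrained without feature j (leave-one-feature-out retraining)."""
    d = X.shape[1]

    def cv_accuracy(cols):
        rf = RandomForestClassifier(n_estimators=n_estimators, random_state=seed)
        return cross_val_score(rf, X[:, cols], y, cv=cv).mean()

    full = cv_accuracy(list(range(d)))
    return np.array([full - cv_accuracy([k for k in range(d) if k != j])
                     for j in range(d)])
```

Note that this requires d + 1 forest trainings per round, which is the greater computational cost the abstract alludes to, in exchange for higher accuracy and sparsity than retraining-free measures.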

Indexing (document details)
Advisor: Mozer, Michael C.
Committee: Clauset, Aaron; Dukic, Vanja; Lv, Qin; Sankaranarayanan, Sriram
School: University of Colorado at Boulder
Department: Computer Science
School Location: United States -- Colorado
Source: DAI-B 74/12(E), Dissertation Abstracts International
Subjects: Statistics, Computer science
Keywords: Feature importance, Feature selection, Iterative reweighting, L1/L2 linear models, Machine learning, Random forests
Publication Number: 3592307
ISBN: 9781303333088