Dissertation/Thesis Abstract

Latent Variable Modeling for Networks and Text: Algorithms, Models and Evaluation Techniques
by Foulds, James Richard, Ph.D., University of California, Irvine, 2014, 287; 3631094
Abstract (Summary)

In the era of the internet, we are connected to an overwhelming abundance of information. As more facets of our lives become digitized, there is a growing need for automatic tools to help us find the content we care about. To tackle the problem of information overload, a standard machine learning approach is to perform dimensionality reduction, transforming complicated high-dimensional data into a manageable, low-dimensional form. Probabilistic latent variable models provide a powerful and elegant framework for performing this transformation in a principled way. This thesis makes several advances for modeling two of the most ubiquitous types of online information: networks and text data.

Our first contribution is to develop a model for social networks as they vary over time. The model recovers latent feature representations of each individual, and tracks these representations as they change dynamically. We also show how to use text information to interpret these latent features.
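To make the idea of a latent feature representation concrete, the sketch below shows a generic latent feature network model in Python/NumPy: each person has a binary feature vector, and the probability of a link depends on how those features interact through a weight matrix. This is only an illustration of the general model class, not the thesis's dynamic model, which additionally lets the feature vectors evolve over time; the features and weights here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_probability(z_i, z_j, W):
    """Probability of a link between individuals i and j, given their
    binary latent feature vectors and a feature-interaction weight matrix W."""
    return sigmoid(z_i @ W @ z_j)

# Toy example: 2 people, 3 hypothetical latent features (e.g. shared hobbies, workplaces).
Z = np.array([[1, 0, 1],
              [1, 1, 0]])
W = np.array([[ 2.0, -0.5,  0.0],
              [-0.5,  1.0,  0.3],
              [ 0.0,  0.3,  1.5]])

print(edge_probability(Z[0], Z[1], W))  # link is more likely when shared features interact positively
```

In a dynamic version of such a model, a separate feature matrix Z_t is inferred for each time step and tied together across time, which is what allows the representations to be tracked as they change.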

Continuing the theme of modeling networks and text data, we next build a model of citation networks. The model finds influential scientific articles and the influence relationships between them, potentially opening the door to automated exploratory tools for scientists.

The increasing prevalence of web-scale data sets provides both an opportunity and a challenge. With more data we can fit more accurate models, as long as our learning algorithms are up to the task. To meet this challenge, we present an algorithm for learning latent Dirichlet allocation topic models quickly, accurately, and at scale. The algorithm leverages stochastic techniques as well as the collapsed representation of the model. We use it to build a topic model on 4.6 million articles from the open encyclopedia Wikipedia in a matter of hours, and on a corpus of 1,740 machine learning articles from the NIPS conference in seconds.
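The abstract does not spell out the algorithm itself, so as background the sketch below shows a standard collapsed Gibbs sampler for LDA, which illustrates what the "collapsed representation" means: the document-topic and topic-word distributions are integrated out, and inference works directly with count matrices. The thesis's algorithm combines this collapsed representation with stochastic updates to reach Wikipedia scale; the code here is only a conventional baseline sketch, not that algorithm.

```python
import numpy as np

def collapsed_gibbs_lda(docs, V, K, alpha=0.1, beta=0.01, n_iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for LDA.

    docs: list of lists of word ids (each in 0..V-1).
    Returns the topic-word count matrix and per-document topic counts."""
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), K))    # topic counts per document
    n_kw = np.zeros((K, V))            # word counts per topic
    n_k = np.zeros(K)                  # total words assigned to each topic
    z = [rng.integers(K, size=len(d)) for d in docs]  # random initial topic assignments

    # Initialise the count matrices from the random assignments.
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the current assignment from the counts ...
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # ... and resample it from the collapsed conditional distribution.
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return n_kw, n_dk
```

Because this sampler sweeps over every word of every document in every iteration, it does not scale to millions of documents on its own; stochastic methods that update on small subsets of the data are one way to close that gap.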

Finally, evaluating the predictive performance of topic models is an important yet computationally difficult task. We develop one algorithm for comparing topic models, and another for measuring the progress of learning algorithms for these models. The latter method achieves better estimates than previous algorithms, in many cases with an order of magnitude less computational effort.
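To see why this evaluation is computationally difficult, consider the simplest Monte Carlo estimate of a held-out document's probability under a trained topic model: sample document-topic proportions from the Dirichlet prior and average the resulting likelihoods. The sketch below implements this basic importance sampler as a baseline; it is not one of the thesis's estimators. Its variance grows quickly with document length, which is exactly the kind of inefficiency that better evaluation methods aim to overcome.

```python
import numpy as np

def heldout_loglik_importance(doc, phi, alpha, n_samples=1000, seed=0):
    """Crude Monte Carlo estimate of log p(w_d | phi, alpha) for one held-out document,
    sampling document-topic proportions theta from the Dirichlet prior.

    doc: array of word ids; phi: K x V topic-word matrix (rows sum to 1);
    alpha: length-K array of Dirichlet concentration parameters."""
    rng = np.random.default_rng(seed)
    thetas = rng.dirichlet(alpha, size=n_samples)  # S x K proposals drawn from the prior
    # Per-sample log-likelihood: sum_n log sum_k theta_k * phi[k, w_n]
    word_probs = thetas @ phi[:, doc]              # S x len(doc)
    log_liks = np.log(word_probs).sum(axis=1)      # one value per sample
    # Log of the Monte Carlo average, computed stably.
    return np.logaddexp.reduce(log_liks) - np.log(n_samples)
```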

Indexing (document details)
Advisor: Smyth, Padhraic
Committee: Ihler, Alexander, Steyvers, Mark
School: University of California, Irvine
Department: Computer Science
School Location: United States -- California
Source: DAI-B 75/11(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Statistics, Computer Engineering, Computer science
Keywords: Latent Dirichlet allocation, Latent variable models, Machine learning, Social networks, Text mining
Publication Number: 3631094
ISBN: 9781321093810