Dissertation/Thesis Abstract

Randomized Fast Solvers for Linear and Nonlinear Problems in Data Science
by Wu, Huiwen, Ph.D., University of California, Irvine, 2019, 151; 27665431
Abstract (Summary)

We construct a preconditioner for solving linear least squares problems, which are among the simplest and most widely used problems arising in data fitting, image processing, and high-dimensional data analysis. Existing methods for solving least squares problems either have a large computational cost or depend strongly on the condition number of the matrix. Recently, there has been a surge of interest in developing randomized algorithms for solving least squares problems for the sake of efficiency and scalability. We construct a new preconditioner equipped with a sampling procedure to reduce computational complexity, and apply Gauss-Seidel iterations to capture the high-frequency component of the solution, which reduces the dependence of the performance on the condition number. Experimental studies comparing against the Conjugate Gradient method (CG) are presented on six simulations, including a dense Gaussian matrix, a `semi-Gaussian' matrix, a sparse random matrix, a `UDV' matrix, a graph Laplacian matrix, and a non-negative constrained problem, to show the effectiveness of the proposed preconditioner.
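The general sketch-and-precondition idea described above can be illustrated with a minimal sketch: sample a subset of rows, factor the sampled submatrix, and use the resulting triangular factor as a right preconditioner for CG on the normal equations. This is a generic randomized-preconditioning construction (in the spirit of methods such as Blendenpik), not the thesis's exact preconditioner; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def sketch_preconditioner(A, sample_frac=0.5, seed=None):
    """Build a right preconditioner R from a uniformly sampled row subset of A.

    A generic sketch-and-precondition construction, not the thesis's
    exact sampling procedure.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    s = max(n + 1, int(sample_frac * m))          # keep the sampled system overdetermined
    rows = rng.choice(m, size=s, replace=False)
    _, R = np.linalg.qr(A[rows], mode="reduced")  # A_sampled = Q R, with R of size n x n
    return R

def pcg_normal_equations(A, b, R, tol=1e-10, maxit=500):
    """CG on the right-preconditioned normal equations:
    R^{-T} A^T A R^{-1} y = R^{-T} A^T b, then x = R^{-1} y."""
    solve = lambda v: np.linalg.solve(R, v)       # apply R^{-1}
    solveT = lambda v: np.linalg.solve(R.T, v)    # apply R^{-T}
    n = A.shape[1]
    y = np.zeros(n)
    r = solveT(A.T @ b)                           # residual at y = 0
    p, rs = r.copy(), r @ r
    for _ in range(maxit):
        Ap = solveT(A.T @ (A @ solve(p)))         # preconditioned normal-equation matvec
        alpha = rs / (p @ Ap)
        y += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return solve(y)                               # map back: x = R^{-1} y
```

With a good sketch, the preconditioned matrix $A R^{-1}$ has a condition number close to 1, so CG converges in a few iterations regardless of how ill-conditioned $A$ itself is.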

A general scheme for solving unconstrained convex and smooth minimization problems $ \min_{x \in \mathcal V} f(x)$ is developed in this thesis. The scheme performs gradient descent on each subspace of a stable space decomposition of $\mathcal V$. Under the assumptions of Lipschitz continuity of the gradient on $\mathcal V$ and its subspaces, and convexity or strong convexity on $\mathcal V$, we prove linear convergence for strongly convex objective functions under both non-uniform and uniform sampling. For non-uniform sampling the rate depends on the expected condition number, and for uniform sampling on the supremum of the condition numbers. Moreover, we show sublinear convergence for convex functions. In numerical examples on Nesterov's worst-case function and on linear regression, our scheme outperforms the randomized coordinate descent method. We conclude that our scheme generalizes gradient descent methods, randomized (block) coordinate descent methods, and the full approximation scheme.
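The simplest instance of such a scheme, where the space decomposition is a partition of coordinates into blocks, can be sketched as follows. Each iteration samples one block, either uniformly or with probability proportional to its block Lipschitz constant $L_i$ (importance sampling), and takes a gradient step of size $1/L_i$ restricted to that block. This is a minimal illustration under those assumptions, not the thesis's full scheme; the function names are hypothetical.

```python
import numpy as np

def randomized_subspace_gd(grad, L_blocks, blocks, x0,
                           iters=3000, uniform=False, seed=None):
    """Randomized subspace (block-coordinate) gradient descent.

    grad     : callable returning the full gradient at x
    L_blocks : Lipschitz constant of the gradient restricted to each block
    blocks   : list of index arrays partitioning the coordinates
    uniform  : uniform sampling if True, else importance sampling ~ L_i
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    L = np.asarray(L_blocks, dtype=float)
    probs = np.full(len(blocks), 1.0 / len(blocks)) if uniform else L / L.sum()
    for _ in range(iters):
        i = rng.choice(len(blocks), p=probs)  # sample a subspace
        idx = blocks[i]
        g = grad(x)
        x[idx] -= g[idx] / L[i]               # gradient step on that subspace only
    return x
```

For example, for the least squares objective $f(x) = \tfrac12\|Ax-b\|^2$, the gradient is $A^\top(Ax-b)$ and the block Lipschitz constant $L_i$ is the squared spectral norm of the corresponding column block of $A$.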

Indexing (document details)
Advisor: Chen, Long
Committee: Zhao, Hongkai; Xin, Jack
School: University of California, Irvine
Department: Mathematics - Ph.D.
School Location: United States -- California
Source: DAI-B 81/10(E), Dissertation Abstracts International
Subjects: Mathematics
Keywords: Convex optimization, Importance sampling, Randomized fast solver, Subspace decomposition
Publication Number: 27665431
ISBN: 9798607316822
Copyright © 2020 ProQuest LLC. All rights reserved.