Dissertation/Thesis Abstract

Trust-Region Methods for Unconstrained Optimization Problems
by Rezapour, Mostafa, Ph.D., Washington State University, 2020, 232 pp.; publication no. 27997808
Abstract (Summary)

We present trust-region methods for the general unconstrained minimization problem. Trust-region algorithms iteratively minimize a model of the objective function within a trust region and update the size of that region until a first-order stationary point of the objective function is found. The radius of the trust region is updated based on the agreement between the model and the objective function at the new trial point. The efficiency of a trust-region algorithm therefore depends significantly on the size of the trust region, the agreement between the model and the objective function, and the reduction in the model value at each step. The trust-region size is particularly important for large-scale problems, because constructing and minimizing the model at each step requires gradient and Hessian information for the objective function: if the trust region is too small or too large, more models must be constructed and minimized, which is computationally expensive.
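For orientation, the mechanism described above can be stated in the standard textbook notation (given here in LaTeX; the symbols are the usual convention, not taken from the abstract itself): at the iterate x_k one approximately solves the subproblem

    \min_{s \in \mathbb{R}^n} \; m_k(s) = f(x_k) + \nabla f(x_k)^\top s + \tfrac{1}{2}\, s^\top B_k s \quad \text{subject to} \quad \|s\| \le \Delta_k,

where B_k is the Hessian of f at x_k (or an approximation to it) and \Delta_k is the trust-region radius, and one measures the agreement at the trial step s_k by the ratio

    \rho_k = \frac{f(x_k) - f(x_k + s_k)}{m_k(0) - m_k(s_k)}.

The radius \Delta_k is enlarged when \rho_k is close to 1 and reduced when \rho_k is small or negative.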

We propose two adaptive trust-region algorithms that explore beyond the trust region when its boundary prevents the algorithm from accepting a more beneficial point. This occurs when there is very good agreement between the model and the objective function on the trust-region boundary and a step outside the trust region can be found with a smaller model value while maintaining good agreement between the model and the objective function.
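The abstract does not specify how these extension steps are computed, so the following Python sketch is purely illustrative and is not the dissertation's algorithm: it wraps a hypothetical boundary-extension heuristic around a generic textbook trust-region loop with a Cauchy-point step, and every threshold (0.9, 0.75, 0.25, 0.1) and the factor-2 extension are made-up placeholder values.

import numpy as np

def cauchy_step(g, B, delta):
    # Steepest-descent (Cauchy-point) step restricted to the trust region.
    t = delta / np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg > 0:
        t = min(t, (g @ g) / gBg)   # unconstrained minimizer along -g
    return -t * g

def trust_region(f, grad, hess, x, delta=1.0, tol=1e-8, max_iter=200):
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        s = cauchy_step(g, B, delta)
        red = -(g @ s + 0.5 * s @ B @ s)          # model reduction m(0) - m(s)
        rho = (f(x) - f(x + s)) / red             # agreement ratio
        # Hypothetical extension: the step lies on the boundary and agreement
        # is very good, so probe a longer step in the same direction.
        if rho > 0.9 and np.isclose(np.linalg.norm(s), delta):
            s2 = 2.0 * s
            red2 = -(g @ s2 + 0.5 * s2 @ B @ s2)
            if red2 > red:                         # smaller model value outside
                rho2 = (f(x) - f(x + s2)) / red2
                if rho2 > 0.9:                     # agreement still good
                    s, rho = s2, rho2
        if rho > 0.75:
            delta *= 2.0                           # good agreement: grow region
        elif rho < 0.25:
            delta *= 0.25                          # poor agreement: shrink region
        if rho > 0.1:
            x = x + s                              # accept the trial point
    return x

The Cauchy-point step is chosen only to keep the sketch short; any standard subproblem solver (dogleg, Steihaug-Toint CG) could replace it. The point of the sketch is where the extension fires: exactly when the accepted step would otherwise be pinned to the boundary despite near-perfect agreement.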

We also take a different approach to derivative-free unconstrained optimization problems, in which the objective function is possibly nonsmooth. We conduct an exploratory study using deep neural networks and their well-known capability as universal function approximators. We propose and investigate two derivative-free trust-region methods for solving unconstrained minimization problems, employing artificial neural networks to construct a model within the trust region. We also directly estimate the minimizer of the objective function, without explicitly constructing a model function, by means of a parent-child neural network. This approach may offer improved practical performance when the objective function is extremely noisy or stochastic. We provide a framework for future work in this area.
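The abstract gives no architectures, sampling schemes, or training details, and the parent-child construction in particular is left undescribed, so it is not sketched here. The following is a minimal sketch of the surrogate-model variant only, under loud assumptions: scikit-learn's MLPRegressor, uniform sampling in the trust-region ball, and minimizing the fitted network over a fresh sample cloud are all stand-ins chosen for brevity, not the author's choices.

import numpy as np
from sklearn.neural_network import MLPRegressor

def nn_model_step(f, x, delta, n_samples=200, seed=0):
    # One derivative-free step: fit a small neural network to objective
    # samples drawn inside the trust region, then return the candidate that
    # minimizes the fitted model. Only function values of f are used.
    rng = np.random.default_rng(seed)
    n = x.size
    u = rng.normal(size=(n_samples, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)        # random directions
    r = delta * rng.uniform(size=(n_samples, 1)) ** (1.0 / n)
    X = x + r * u                                        # uniform in the ball
    y = np.array([f(p) for p in X])
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
    # Crude surrogate "subproblem solve": evaluate the network on a fresh,
    # denser cloud of points never passed to f; a real method would minimize
    # the network directly.
    v = rng.normal(size=(5 * n_samples, n))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    cand = x + delta * rng.uniform(size=(5 * n_samples, 1)) ** (1.0 / n) * v
    return cand[np.argmin(model.predict(cand))]

An outer loop would wrap this step in the usual ratio test, accepting or rejecting the returned candidate and updating delta exactly as in the generic loop sketched earlier.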

Indexing (document details)
Advisor: Asaki, Thomas J.
Committee: Dasgupta, Nairanjana; Dong, Hongbo
School: Washington State University
Department: Mathematics
School Location: United States -- Washington
Source: DAI-B 82/5(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Applied Mathematics, Theoretical Mathematics
Keywords: Deep learning, Derivative-free optimization, Nonlinear optimization, Trust-region methods, Universal Approximation Theorem
Publication Number: 27997808
ISBN: 9798698541950
Copyright © 2021 ProQuest LLC. All rights reserved.