We present trust-region methods for the general unconstrained minimization problem. A trust-region algorithm iteratively minimizes a model of the objective function within a trust region and updates the size of that region until it reaches a first-order stationary point of the objective function. The radius of the trust region is updated based on the agreement between the model and the objective function at each new trial point. The efficiency of a trust-region algorithm therefore depends significantly on the size of the trust region, the agreement between the model and the objective function, and the reduction in model value achieved at each step. The trust-region radius is especially important for large-scale problems, because constructing and minimizing the model at each step requires gradient and Hessian information for the objective function: if the region is too small or too large, more models must be constructed and minimized, which is computationally expensive.
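As context for the iteration described above, the following is a minimal sketch of one classical trust-region step in Python. The quadratic model, the Cauchy-point step, and the specific thresholds (0.1, 0.25, 0.75) are textbook choices assumed for illustration, not the dissertation's algorithms.

```python
# Minimal sketch of one classical trust-region iteration, assuming a
# quadratic model and the Cauchy-point step. All names and thresholds
# are illustrative, not the dissertation's notation.
import numpy as np

def trust_region_step(f, grad, hess, x, radius,
                      eta=0.1, shrink=0.25, grow=2.0, max_radius=10.0):
    g, B = grad(x), hess(x)
    # Cauchy point: minimize the quadratic model along -g within the region.
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g)**3 / (radius * gBg))
    p = -(tau * radius / np.linalg.norm(g)) * g
    # Agreement ratio: actual reduction over predicted (model) reduction.
    pred = -(g @ p + 0.5 * (p @ B @ p))
    rho = (f(x) - f(x + p)) / pred
    if rho < 0.25:                       # poor agreement: shrink the region
        radius *= shrink
    elif rho > 0.75 and np.isclose(np.linalg.norm(p), radius):
        radius = min(grow * radius, max_radius)  # good agreement on the boundary: grow
    x_new = x + p if rho > eta else x    # accept only if reduction is sufficient
    return x_new, radius
```

Repeating this step from a starting point yields the usual trust-region loop; note how the radius update depends only on the agreement ratio and whether the step reached the boundary.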
We propose two adaptive trust-region algorithms that explore beyond the trust region when the boundary of the region prevents the algorithm from accepting a more beneficial point. This occurs when the model and the objective function agree very well on the trust-region boundary and a step outside the region achieves a smaller model value while maintaining good agreement between the model and the objective function.
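One hypothetical way to realize this boundary-exploration idea is sketched below. The agreement threshold, the geometric stretching of the step, and the stopping tests are our illustrative assumptions and are not the actual criteria of the two proposed algorithms.

```python
# Hypothetical sketch of the "explore beyond the boundary" idea: when the
# accepted step lands on the boundary and model/function agreement is very
# good there, probe points outside the region along the same direction while
# the model keeps decreasing and agreement stays good. All thresholds are
# illustrative assumptions, not the dissertation's criteria.
import numpy as np

def explore_beyond(f, model, x, p, radius, rho,
                   rho_good=0.9, stretch=1.5, max_probes=3):
    # model(z) evaluates the local model at the point z (absolute coordinates).
    if rho < rho_good or not np.isclose(np.linalg.norm(p), radius):
        return x + p                        # ordinary accepted step
    best = x + p
    for _ in range(max_probes):
        trial = x + stretch * (best - x)    # step past the current boundary
        pred = model(x) - model(trial)      # model reduction for the longer step
        if pred <= 0:
            break                           # model no longer predicts descent
        rho_t = (f(x) - f(trial)) / pred    # agreement at the exterior point
        if rho_t < rho_good:
            break                           # agreement degrades: stop exploring
        best = trial
    return best
```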
We also take a different approach to derivative-free unconstrained optimization problems, in which the objective function may be nonsmooth. In an exploratory study, we use deep neural networks, exploiting their well-known capability as universal function approximators. We propose and investigate two derivative-free trust-region methods for solving unconstrained minimization problems, in which artificial neural networks construct the model within the trust region. Using a parent-child neural network, we directly estimate the minimizer of the objective function without explicitly constructing a model function. This approach may offer improved practical performance when the objective function is extremely noisy or stochastic. We provide a framework for future work in this area.
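As one way to realize a neural-network model inside a trust region, the sketch below fits a small multilayer perceptron (PyTorch) to objective values sampled in the region. The sampling scheme and architecture are illustrative assumptions; this does not reproduce the parent-child construction described above.

```python
# Sketch of a neural-network trust-region model: fit a small MLP to
# derivative-free samples of f inside the region. Architecture, sample
# count, and optimizer settings are illustrative assumptions.
import torch

def fit_nn_model(f, x, radius, n_samples=200, epochs=500):
    # f maps a 1-D float tensor to a scalar; x is a 1-D float tensor.
    d = x.numel()
    # Sample points uniformly in the trust region (ball of given radius).
    u = torch.randn(n_samples, d)
    u = u / u.norm(dim=1, keepdim=True)
    r = radius * torch.rand(n_samples, 1) ** (1.0 / d)
    X = x + r * u
    y = torch.tensor([f(p) for p in X]).unsqueeze(1)
    net = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(epochs):                 # least-squares fit to the samples
        opt.zero_grad()
        loss = torch.mean((net(X) - y) ** 2)
        loss.backward()
        opt.step()
    return net                              # differentiable surrogate model
```

A trust-region loop could then minimize the returned surrogate within the region using its automatic gradients, even when the underlying objective supplies no derivatives.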
Advisor: Asaki, Thomas J.
Committee: Dasgupta, Nairanjana; Dong, Hongbo
School: Washington State University
School Location: United States -- Washington
Source: DAI-B 82/5(E), Dissertation Abstracts International
Subjects: Applied Mathematics, Theoretical Mathematics
Keywords: Deep learning, Derivative-free optimization, Nonlinear optimization, Trust-region methods, Universal Approximation Theorem