Automatic age estimation from real-world and wild face images is a challenging task and has an increasing importance due to its wide range of applications in current and future lifestyles. As a result of increasing age specific human-computer interactions, it is expected that computerized systems should be capable of estimating the age from face images and respond accordingly. Over the past decade, many research studies have been conducted on automatic age estimation from face images.
In this research, new approaches for enhancing age classification of a person from face images based on deep neural networks (DNNs) are proposed. The work shows that pre-trained CNNs, originally trained on large benchmarks for different purposes, can be retrained and fine-tuned for age estimation from unconstrained face images. Furthermore, an algorithm is developed that reduces the dimension of the output of the last convolutional layer in pre-trained CNNs to improve performance. Moreover, two new jointly fine-tuned DNN frameworks are proposed. The first framework fine-tunes two DNNs with two different feature sets based on the element-wise summation of their last hidden layer outputs, while the second framework fine-tunes two DNNs based on a new cost function. In both frameworks, the first DNN is trained on facial appearance features extracted by a model well trained for face recognition, while the second DNN is trained on features based on the superpixels' depth and their relationships.
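The element-wise summation fusion described above can be sketched as follows. This is a minimal illustrative example, not the dissertation's implementation: all names, layer sizes, and random weights are assumptions, and each branch is reduced to a single fully connected hidden layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(x, W, b):
    # One fully connected hidden layer with ReLU activation.
    return np.maximum(x @ W + b, 0.0)

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy dimensions: batch, facial-feature dim, superpixel-feature dim,
# shared hidden width, and number of age classes (all illustrative).
n, d_face, d_sp, h, classes = 4, 128, 64, 32, 8

x_face = rng.standard_normal((n, d_face))  # facial appearance features
x_sp = rng.standard_normal((n, d_sp))      # superpixel-based features

W1, b1 = rng.standard_normal((d_face, h)), np.zeros(h)
W2, b2 = rng.standard_normal((d_sp, h)), np.zeros(h)
Wc, bc = rng.standard_normal((h, classes)), np.zeros(classes)

# Element-wise summation of the two branches' last hidden layer outputs,
# followed by a shared softmax age classifier.
fused = hidden(x_face, W1, b1) + hidden(x_sp, W2, b2)
probs = softmax(fused @ Wc + bc)
```

Because the two hidden layers are summed element-wise, both branches must share the same hidden width, and gradients from the shared classifier flow into both DNNs during joint fine-tuning.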
Furthermore, a new method for selecting robust features based on the power of DNNs and the l2,1-norm is proposed. This method rests on a new cost function that combines the DNN and the l2,1-norm in one unified framework. To learn and train this unified framework, the convergence of the new objective function for solving the minimization problem is analyzed and proved. Finally, the proposed jointly fine-tuned networks and the proposed robust features are combined to improve age estimation from facial images: the facial features concatenated with their corresponding robust features are fed to the first part of both networks, and the superpixel features concatenated with their robust features are fed to the second part of the network.
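For context on the l2,1-norm used above: it is the sum of the l2 norms of a matrix's rows, and penalizing it drives entire rows of a projection matrix to zero, so the surviving nonzero rows identify robust features. A toy sketch, with an illustrative matrix rather than anything from the dissertation:

```python
import numpy as np

def l21_norm(W):
    # l2,1-norm: sum over rows of each row's l2 norm.
    return np.sqrt((W ** 2).sum(axis=1)).sum()

# Toy projection matrix: each row corresponds to one input feature.
W = np.array([[3.0, 4.0],    # row norm 5.0 -> feature kept
              [0.0, 0.0],    # row norm 0.0 -> feature dropped
              [0.6, 0.8]])   # row norm 1.0 -> feature kept

row_norms = np.sqrt((W ** 2).sum(axis=1))
selected = np.flatnonzero(row_norms > 1e-6)  # indices of surviving features
```

Here `l21_norm(W)` is 5.0 + 0.0 + 1.0 = 6.0, and features 0 and 2 survive; in the unified framework this row-sparsity penalty is added to the DNN's loss so that feature selection is learned jointly with the network.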
Experimental results on a public database show the effectiveness of the proposed methods, which achieve state-of-the-art performance.
|Advisor:||Barkana, Buket D.|
|Committee:||Faezipour, Miad, Gupta, Navarun, Rusu, Amalia, Xiong, Xingguo|
|School:||University of Bridgeport|
|Department:||Computer Science and Engineering|
|School Location:||United States -- Connecticut|
|Source:||DAI-B 79/07(E), Dissertation Abstracts International|
|Subjects:||Artificial intelligence, Computer science|
|Keywords:||Automatic age estimation, Deep neural networks, Real-world and wild face images|