In many applications, such as drone-based video surveillance, self-driving cars, and recognition under night-time and low-light conditions, the captured images and videos contain undesirable degradations such as haze, rain, snow, and noise. Furthermore, the performance of many computer vision algorithms degrades when they are presented with images containing such artifacts. Hence, it is important to develop methods that can automatically remove these artifacts. However, these are difficult problems to solve due to their inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert them into well-posed problems. In this thesis, rather than relying purely on prior-based models, we propose to combine them with data-driven models for image restoration and translation. In particular, we develop new data-driven approaches for 1) single image de-raining, 2) single image dehazing, and 3) thermal-to-visible face synthesis.
In the first part of the thesis, we develop three different methods for single image de-raining. In the first approach, we propose novel convolutional coding-based methods in which two different types of filters are learned via convolutional sparse and low-rank coding to characterize the background component and the rain-streak component separately. These pre-trained filters are then used to separate the rain component from the image.
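The convolutional coding-based separation described above can be illustrated with a minimal numpy sketch. This is not the method from the thesis: the filters, step size, and ISTA-style alternating updates below are illustrative assumptions standing in for the learned sparse and low-rank coding filters.

```python
import numpy as np

def conv2same(x, k):
    # naive "same" 2-D convolution with zero padding
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def soft(z, t):
    # soft-thresholding: proximal operator of the l1 penalty
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def csc_derain(y, d_bg, d_rain, lam=0.05, steps=30, lr=0.1):
    # decompose y ≈ d_bg * z_b + d_rain * z_r by alternating
    # proximal-gradient (ISTA-style) updates on the feature maps
    z_b = np.zeros_like(y)
    z_r = np.zeros_like(y)
    for _ in range(steps):
        resid = y - conv2same(z_b, d_bg) - conv2same(z_r, d_rain)
        # gradient step uses correlation = convolution with flipped filter
        z_b = z_b + lr * conv2same(resid, d_bg[::-1, ::-1])
        z_r = soft(z_r + lr * conv2same(resid, d_rain[::-1, ::-1]), lr * lam)
    return conv2same(z_b, d_bg), conv2same(z_r, d_rain)
```

In this toy setting a smoothing filter (e.g. a 3x3 average) plays the role of the background dictionary and an elongated vertical filter the role of the rain-streak dictionary; in the thesis both filter banks are learned from data.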
In the second approach, to ensure that the restored de-rained results are indistinguishable from their corresponding clear images, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which consists of a new refined perceptual loss function and a novel multi-scale discriminator. Finally, to deal with non-uniform rain densities, we present a novel density-aware multi-stream densely connected convolutional neural network-based algorithm that enables the network itself to automatically determine the rain-density information and then efficiently remove the corresponding rain streaks guided by the estimated rain-density label.
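The generator objective in such a conditional GAN typically combines an adversarial term, a per-pixel fidelity term, and a perceptual (feature-space) term. The numpy sketch below shows that composition only; the fixed Laplacian "feature" filter and the weights `lam_e`, `lam_p` are illustrative assumptions, standing in for a pretrained deep feature extractor and tuned hyperparameters.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def feature_map(x, k):
    # stand-in for a pretrained feature extractor (e.g. early CNN layers);
    # here a single fixed "valid" convolution, purely illustrative
    win = sliding_window_view(x, k.shape)
    return np.einsum('ijkl,kl->ij', win, k)

def generator_loss(fake, target, d_fake, lam_e=1.0, lam_p=0.5):
    # toy high-frequency filter playing the role of a perceptual feature
    k = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
    l_adv = np.mean(-np.log(d_fake + 1e-8))       # push D's score on fakes toward 1
    l_euc = np.mean((fake - target) ** 2)         # per-pixel fidelity
    l_per = np.mean((feature_map(fake, k)
                     - feature_map(target, k)) ** 2)  # perceptual (feature-space) term
    return l_adv + lam_e * l_euc + lam_p * l_per
```

The multi-scale discriminator of ID-CGAN would evaluate `d_fake` at several image resolutions; here it is abstracted to a single score.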
In the final part of the thesis, we develop an image-to-image translation method for generating high-quality visible images from polarimetric thermal faces. Since polarimetric images contain different Stokes images capturing various polarization-state information, we propose a Generative Adversarial Network-based multi-stream feature-level fusion technique to synthesize high-quality visible images from polarimetric thermal images. An application of this approach is presented in polarimetric thermal-to-visible cross-modal face recognition.
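The multi-stream feature-level fusion idea can be sketched as follows: each Stokes image passes through its own encoder stream, the per-stream features are concatenated, and a shared decoder maps the fused features to the visible-domain output. This is a minimal linear stand-in, not the GAN architecture from the thesis; the encoder/decoder shapes are illustrative assumptions.

```python
import numpy as np

def stream_features(img, w):
    # per-stream encoder: one linear map + ReLU, standing in for a CNN branch
    return np.maximum(img.reshape(-1) @ w, 0.0)

def fuse_and_decode(stokes_images, stream_weights, decoder_w):
    # one encoder stream per Stokes image (e.g. S0, S1, S2)
    feats = [stream_features(s, w) for s, w in zip(stokes_images, stream_weights)]
    fused = np.concatenate(feats)          # feature-level fusion by concatenation
    out = fused @ decoder_w                # shared decoder to the visible domain
    return out.reshape(stokes_images[0].shape)
```

In the thesis the decoder is the generator of a GAN trained against a discriminator; here it is reduced to a single linear map to keep the fusion structure visible.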
|Advisor:||Patel, Vishal M.|
|Committee:||Dana, Kristin, Meer, Peter, Najafizadeh, Laleh, Zhou, Shaohua Kevin|
|School:||Rutgers, The State University of New Jersey, School of Graduate Studies|
|Department:||Electrical and Computer Engineering|
|School Location:||United States -- New Jersey|
|Source:||DAI-B 80/11(E), Dissertation Abstracts International|
|Subjects:||Artificial intelligence, Computer science|
|Keywords:||Dehazing, Deraining, GAN|
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved