In recent years, deep learning has seen monumental growth in interest and research, and its applications have the potential to solve difficult analytical problems. This is particularly true for medical imaging, where analytical tasks can be time-consuming, tedious, and dependent on trained professionals, often incurring significant costs. One such task is segmenting vascular networks in retinal images. The state of the retinal vascular network plays an important role in ophthalmology, where its analysis is key to the early detection and diagnosis of various diseases. Segmentation is challenging primarily due to low image contrast, the variety of vessel morphologies, and potential pathologies.
To approach this task, the experiments integrate numerous techniques and designs from proven architectures into deep neural networks. Training and testing are performed on two publicly available retinal datasets, DRIVE and STARE. When evaluated on each dataset, many of the networks achieve promising results, with one proposed variation of the U-Net architecture surpassing the evaluated accuracy of its predecessors, benchmarking at 98.019% ROC AUC compared to the previous best of 97.9%.
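The abstract does not include code, but the ROC AUC metric it reports can be illustrated. The sketch below (an assumption, not the author's implementation) computes pixel-wise ROC AUC for a binary vessel mask against predicted probabilities using the rank-based Mann-Whitney U formulation, which is equivalent to the area under the ROC curve:

```python
import numpy as np

def roc_auc(y_true, y_score):
    """Pixel-wise ROC AUC via the Mann-Whitney U statistic.

    y_true:  flat binary labels (1 = vessel pixel, 0 = background).
    y_score: predicted vessel probabilities, same length.
    """
    y_true = np.asarray(y_true, dtype=bool).ravel()
    y_score = np.asarray(y_score, dtype=float).ravel()

    # Assign 1-based ranks to scores, averaging ranks over ties.
    order = np.argsort(y_score, kind="mergesort")
    sorted_scores = y_score[order]
    ranks = np.empty(len(y_score))
    i = 0
    while i < len(sorted_scores):
        j = i
        while j + 1 < len(sorted_scores) and sorted_scores[j + 1] == sorted_scores[i]:
            j += 1
        ranks[order[i:j + 1]] = 0.5 * (i + j) + 1.0  # average rank of the tie group
        i = j + 1

    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    # AUC = (sum of positive ranks - minimal possible sum) / (number of pos-neg pairs)
    return (ranks[y_true].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

In practice the metric would be computed over all pixels (often restricted to the field-of-view mask) of the segmented test images; an AUC of 0.98019 corresponds to the 98.019% figure quoted above.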
|Committee:||Aliasgari, Mehrdad; Fu, Bo|
|School:||California State University, Long Beach|
|Department:||Computer Engineering and Computer Science|
|School Location:||United States -- California|
|Source:||MAI 81/1(E), Masters Abstracts International|
|Subjects:||Computer science, Artificial intelligence|
|Keywords:||Deep learning, Machine learning, Segmentation|