Dissertation/Thesis Abstract

A Highly Accurate and Reliable Data Fusion Framework for Guiding the Visually Impaired
by Elmannai, Wafa, Ph.D., University of Bridgeport, 2018, 192; 10931368
Abstract (Summary)

According to a report by the World Health Organization, approximately 285 million people worldwide are visually impaired (VI): an estimated 39 million are blind, and 246 million have low vision. An important factor motivating this research is that 90% of VI people live in developing countries. Several systems have been designed to improve the quality of life of VI people and to support their mobility. Unfortunately, none of these systems provides a complete solution, and they are very expensive. Therefore, this work presents an intelligent framework that embeds several types of sensors in a wearable device to support the VI community. The proposed work integrates sensor-based and computer-vision-based techniques to produce an efficient and economical visual device. The designed algorithm is divided into two components: obstacle detection and collision avoidance. The system has been implemented and tested in real-time scenarios. For testing, a dataset of 30 videos, averaging 700 frames per video, was fed to the system. The proposed sequence of techniques for the real-time detection component achieved a 96.53% accuracy rate, based on a wide detection view using two camera modules and a detection range of approximately 9 meters; on a larger dataset, the accuracy rate reached 98%. However, the main contribution of this work is a novel collision avoidance approach based on image depth and fuzzy control rules. Using an x-y coordinate system, we mapped each input frame, dividing it into three areas vertically, with a further division at one-third of the frame height horizontally, in order to specify the urgency of any obstacle present in that frame.
In addition, using fuzzy logic, we were able to provide precise guidance that helps the VI user avoid obstacles ahead. The strength of this approach is that it aids VI users in avoiding 100% of all detected objects. Once the device is initialized, the VI user can confidently enter unfamiliar surroundings. The implemented device can therefore be described as accurate, reliable, user-friendly, lightweight, and affordable, facilitating the mobility of VI people without requiring any prior knowledge of the surrounding environment. Finally, our proposed approach was compared with the most efficient existing techniques and proved to outperform them.
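The frame-partitioning step described above can be illustrated with a minimal sketch. This is not the dissertation's implementation (which uses image depth and full fuzzy control rules); it only shows the grid idea: three vertical columns, with the lower third of the frame height treated here, as an assumption, as the near (urgent) zone. The function name, frame size, and bounding-box input are all hypothetical.

```python
def classify_obstacle(frame_w, frame_h, box):
    """Map an obstacle's bounding-box center onto the frame grid:
    three vertical columns (left/center/right), and the lower third
    of the frame height treated as the near zone (an assumption)."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2  # horizontal center of the obstacle
    cy = (y1 + y2) / 2  # vertical center of the obstacle

    # Three vertical areas of the frame
    if cx < frame_w / 3:
        column = "left"
    elif cx < 2 * frame_w / 3:
        column = "center"
    else:
        column = "right"

    # Horizontal split at one-third of the frame height:
    # obstacles in the lower third appear closer to the user
    near = cy > 2 * frame_h / 3

    # Crisp stand-in for the fuzzy urgency rules
    if near and column == "center":
        urgency = "high"
    elif near:
        urgency = "medium"
    else:
        urgency = "low"
    return column, urgency

# Example: an obstacle centered low in the middle of a 640x480 frame
print(classify_obstacle(640, 480, (280, 340, 360, 460)))  # -> ('center', 'high')
```

In the actual framework, the crisp thresholds sketched here are replaced by fuzzy membership functions and control rules, so urgency varies smoothly with the obstacle's position and depth rather than jumping at fixed grid lines.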

Indexing (document details)
Advisor: Elleithy, Khaled
Committee: Elleithy, Khaled, Faezipour, Miad, Guizani, Mohsen, Kongar, Elif, Xiong, Xingguo
School: University of Bridgeport
Department: Computer Science and Engineering
School Location: United States -- Connecticut
Source: DAI-B 80/01(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Computer Engineering, Biomedical engineering, Computer science
Keywords: Assistive wearable devices, Blindness, Computer vision systems, Mobility limitation, Obstacle detection and obstacle collision avoidance, Visual impairment
Publication Number: 10931368
ISBN: 9780438309593
Copyright © 2019 ProQuest LLC. All rights reserved.