Dissertation/Thesis Abstract

An Egocentric Computer Vision-Based Robotic Wheelchair
by Kutbi, Mohammed, Ph.D., Stevens Institute of Technology, 2018, 131 pages; 10979494
Abstract (Summary)

Motivated by the emerging need to improve the quality of life of elderly and disabled individuals who rely on wheelchairs for mobility and who may have limited or no hand function, we propose an egocentric computer vision-based co-robot wheelchair that enhances their mobility without hand usage. The proposed egocentric vision-based control method gives the user access to the full 360-degree range of motion directions as well as a continuous range of speeds. Compared with previous approaches to hands-free mobility, our system provides a more natural human-robot interface. Three usability studies were conducted with 37 participants. Experimental results demonstrate the usability and efficiency of the proposed design for controlling a powered wheelchair compared with alternative options. A final long-term usability study assessed the effectiveness of training subjects to operate the wheelchair.
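
The abstract does not specify the control mapping, but as a minimal sketch of how an egocentric vision signal could yield 360-degree direction and continuous speed, a 2D head-motion vector estimated from the video can be converted to a polar command. The function and all of its parameter names below are illustrative assumptions, not the dissertation's actual interface.

    import math

    def control_from_head_vector(dx, dy, max_speed=1.0, dead_zone=0.1):
        """Map a 2D head-motion vector (hypothetically estimated from
        egocentric video) to a (direction, speed) wheelchair command.
        All names and parameters here are illustrative assumptions."""
        magnitude = math.hypot(dx, dy)
        if magnitude < dead_zone:       # ignore small, unintentional motions
            return 0.0, 0.0
        direction = math.atan2(dy, dx)  # full 360-degree range, in radians
        # Continuous speed: scale from the dead-zone edge up to full deflection.
        speed = max_speed * min((magnitude - dead_zone) / (1.0 - dead_zone), 1.0)
        return direction, speed

Under this mapping, for example, a vector pointing behind the user yields a continuous reverse command, something discrete gesture-based interfaces cannot express.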

Research on robotic wheelchairs spans a broad range, from complete autonomy to shared autonomy to manual navigation via a joystick or other means. Shared autonomy is valuable because it allows the user and the robot to complement each other, correct each other's mistakes, and avoid collisions. In this thesis, we also present an approach that learns to replicate path selection according to the wheelchair user's individual, often subjective, criteria in order to reduce the number of times the user has to intervene during automatic navigation. Simulations and laboratory experiments using two path generation strategies demonstrate the effectiveness of our approach. Inspired by recent efforts to generate synthetic training data, we investigate whether an actual robot can be trained in simulation to reduce the cost, time requirements, and potential risk of training on physical hardware.
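
As a sketch of how subjective path preferences could be learned from demonstrations, a linear scoring function over candidate-path features can be trained from pairwise user choices. The features named and the perceptron-style update are assumptions for illustration, not the dissertation's actual learner.

    import numpy as np

    def train_path_preference(demos, n_features, epochs=50, lr=0.1):
        """Learn a linear path-scoring function from demonstrations.

        `demos` is a list of (chosen_features, rejected_features) pairs,
        each a feature vector (e.g., path length, obstacle clearance,
        smoothness). These feature choices are illustrative assumptions."""
        w = np.zeros(n_features)
        for _ in range(epochs):
            for chosen, rejected in demos:
                c, r = np.asarray(chosen), np.asarray(rejected)
                # If the rejected path scores at least as high, nudge the
                # weights toward the path the user actually chose.
                if w @ r >= w @ c:
                    w += lr * (c - r)
        return w

    def select_path(paths_features, w):
        """Pick the candidate path whose features score highest under w."""
        scores = [w @ np.asarray(f) for f in paths_features]
        return int(np.argmax(scores))

In such a scheme, a user who consistently trades path length for wider obstacle clearance would see the clearance weight grow after a few demonstrations, so the planner proposes such paths without further intervention.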

To enable effective autonomous navigation, the perception system of the robot must perform motion analysis. In the last part of this thesis, we present an approach for motion clustering based on a novel observation: a signature for input pixel correspondences can be generated by collecting their residuals with respect to model hypotheses drawn randomly from the data. Inliers of the same motion cluster should have strongly correlated residuals, which are low when a hypothesis is consistent with the data in the cluster and high otherwise. Because of this property, we named our approach Inlier Clustering based on the Residuals of Random Hypotheses (ICR). An important advantage of ICR is that it does not require an inlier-outlier threshold or parameter tuning. In addition, we propose a supervised recursive formulation, r-ICR, which removes another important limitation of many motion clustering methods: given a small amount of training data, it does not require the number of clusters to be known a priori. We validate the new approach on the AdelaideRMF dataset for robust geometric model fitting.
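
A minimal sketch of the residual-signature idea follows: random minimal subsets of the data yield model hypotheses, each point's residuals to those hypotheses form its signature, and points with strongly correlated signatures are grouped. The `fit_model` and `residual_fn` placeholders and the correlation threshold used for grouping are simplifications for illustration; ICR itself avoids such a threshold.

    import numpy as np

    def residual_signatures(points, fit_model, residual_fn,
                            n_hypotheses=200, sample_size=2, rng=None):
        """Build a residual signature per point: its residuals to model
        hypotheses fit on random minimal subsets of the data.

        `fit_model` and `residual_fn` stand in for a concrete model class
        (e.g., lines or homographies); `points` is an (n, d) array."""
        rng = np.random.default_rng(rng)
        n = len(points)
        sigs = np.empty((n, n_hypotheses))
        for h in range(n_hypotheses):
            subset = rng.choice(n, size=sample_size, replace=False)
            model = fit_model(points[subset])
            sigs[:, h] = residual_fn(model, points)  # one residual per point
        return sigs

    def cluster_by_correlation(sigs, threshold=0.9):
        """Greedily group points whose signatures are strongly correlated.
        The fixed threshold is a simplification; ICR avoids one."""
        corr = np.corrcoef(sigs)  # rows of sigs are the variables
        unassigned = set(range(len(sigs)))
        clusters = []
        while unassigned:
            seed = unassigned.pop()
            members = [seed] + [j for j in list(unassigned)
                                if corr[seed, j] > threshold]
            unassigned -= set(members)
            clusters.append(members)
        return clusters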

Indexing (document details)
Advisor: Mordohai, Philippos
Committee: Dunn, Enrique; Englot, Brendan; Kleinberg, Samantha; Wang, Xinchao
School: Stevens Institute of Technology
Department: Computer Science
School Location: United States -- New Jersey
Source: DAI-B 80/06(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Robotics, Computer science
Keywords: Computer vision, Learning from demonstration, Learning in simulation, Motion segmentation, Shared control, Wheelchair
Publication Number: 10979494
ISBN: 978-0-438-83695-2