The ability to reason about different modalities of information, for the purpose of physical interaction with objects, is a critical skill for assistive robots. For a robot to assist us in our daily lives, it is not feasible to train it on every task and every instance of the objects that exist in human environments. Robots will have to generalize their skills by jointly reasoning over various sensor modalities such as vision, language, and haptic feedback. This is an extremely challenging problem because each modality has intrinsically different statistical properties. Moreover, even with expert knowledge, manually designing joint features across such disparate modalities is difficult.
In this dissertation, we focus on developing learning algorithms for robots that model tasks involving interactions with various objects in unstructured human environments, especially tasks with novel objects and scenarios that require sequences of complicated manipulation. To this end, we develop algorithms that learn shared representations of multimodal data and model full sequences of complex motions. We demonstrate our approach on several different applications: understanding human activities in unstructured environments, synthesizing manipulation sequences for under-specified tasks, manipulating novel appliances, and manipulating objects with haptic feedback.
|Committee:||Guimbretiere, Francois; Marschner, Steve; Salisbury, J. Kenneth; Selman, Bart|
|School Location:||United States -- New York|
|Source:||DAI-B 78/11(E), Dissertation Abstracts International|
|Subjects:||Robotics, Artificial intelligence, Computer science|
|Keywords:||Deep learning, Machine learning, Multimodal data, Robot learning, Robotic manipulation, Robotics|
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved.