Dissertation/Thesis Abstract

Robust and efficient inference of scene and object motion in multi-camera systems
by Sankaranarayanan, Aswin C., Ph.D., University of Maryland, College Park, 2009, 181; 3391385
Abstract (Summary)

Multi-camera systems can overcome some of the fundamental limitations of single-camera systems. Having multiple viewpoints of a scene goes a long way toward limiting the influence of the restricted field of view, occlusion, blur, and poor resolution of an individual camera. This dissertation addresses robust and efficient inference of scene and object motion in multi-camera and multi-sensor systems.

The first part of the dissertation discusses the role of constraints introduced by projective imaging in robust inference of object motion from multiple cameras and sensors. We discuss the role of the homography and epipolar constraints in fusing object motion perceived by individual cameras. For planar scenes, the homography constraint provides a natural mechanism for data association. For scenes that are not planar, the epipolar constraint provides a weaker multi-view relationship, which we use for tracking in multi-camera and multi-sensor networks. In particular, we show that the epipolar constraint reduces the dimensionality of the state space by introducing a "shared" state space for the joint tracking problem. This allows robust tracking even when one of the sensors fails due to poor SNR or occlusion.
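To illustrate how the epipolar constraint can gate data association across two views, here is a minimal sketch: the fundamental matrix `F` and the point coordinates below are made-up numbers for illustration, not values from the dissertation.

```python
import numpy as np

# Hypothetical fundamental matrix relating two camera views
# (in practice estimated via calibration or the 8-point algorithm).
F = np.array([[ 0.0,   -0.001,  0.01],
              [ 0.001,  0.0,   -0.02],
              [-0.01,   0.02,   0.0]])

def epipolar_residual(x1, x2, F):
    """Algebraic residual x2^T F x1 for image points in pixel coords.

    The residual is zero (up to noise) when the two detections
    correspond to the same 3-D point, so thresholding it can serve
    as a data-association test between cameras.
    """
    x1 = np.append(x1, 1.0)  # lift to homogeneous coordinates
    x2 = np.append(x2, 1.0)
    return float(x2 @ F @ x1)
```

A pair of detections would be associated only if the residual magnitude falls below a noise-dependent threshold.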

The second part of the dissertation deals with computational challenges in the tracking algorithms common to such systems. Much of the inference in multi-camera and multi-sensor networks involves complex non-linear models corrupted by non-Gaussian noise. Particle filters provide approximate Bayesian inference in such settings. We analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementation and parallelization. We analyze implementations of the proposed algorithm, concentrating in particular on those with minimum processing time.
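The core idea of replacing weighted resampling with an independent Metropolis-Hastings chain can be sketched as follows; this is a toy 1-D version under assumed models, not the dissertation's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def imh_particle_step(particles, motion_model, likelihood, n_iters):
    """One particle-filter time step using an independent
    Metropolis-Hastings sampler in place of weighted resampling.

    Proposals are drawn by propagating a randomly chosen particle
    through the motion (prior) model, so the MH acceptance ratio
    reduces to a ratio of likelihoods. Each accept/reject touches
    only the current and proposed particle, which is what makes the
    chain amenable to pipelining.
    """
    state = motion_model(particles[rng.integers(len(particles))])
    out = []
    for _ in range(n_iters):
        proposal = motion_model(particles[rng.integers(len(particles))])
        accept_ratio = likelihood(proposal) / max(likelihood(state), 1e-300)
        if rng.random() < accept_ratio:
            state = proposal
        out.append(state)
    return np.array(out)

# Toy 1-D example (illustrative values): particles at the origin,
# Gaussian motion noise, and a likelihood peaked at x = 2.
particles0 = np.zeros(200)
motion = lambda x: x + rng.normal()
like = lambda x: np.exp(-0.5 * (x - 2.0) ** 2)
samples = imh_particle_step(particles0, motion, like, n_iters=5000)
```

The chain's stationary distribution matches the filtering posterior targeted by a conventional sampling-importance-resampling step.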

The last part of the dissertation deals with the efficient sensing paradigm of compressive sensing (CS) applied to signals in imaging, such as natural images and reflectance fields. We propose a hybrid signal model based on the assumption that most real-world signals exhibit subspace compressibility as well as sparse representations. We show that several real-world visual signals, such as images, reflectance fields, and videos, are better approximated by this hybrid of the two models. We derive optimal hybrid linear projections of the signal and show that the theoretical guarantees and algorithms designed for CS extend easily to hybrid subspace-compressive sensing. Such methods reduce the amount of information sensed by a camera and help mitigate the so-called data-deluge problem in large multi-camera systems.
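A minimal sketch of hybrid measurements, assuming a signal that is mostly captured by a known low-dimensional subspace plus a sparse residual; the dimensions and the construction below are illustrative, not the dissertation's optimal projection design.

```python
import numpy as np

rng = np.random.default_rng(1)

def hybrid_measurement_matrix(U, n_random):
    """Stack deterministic projections onto a known subspace basis U
    (capturing the compressible bulk of the signal) with random
    Gaussian rows of the kind used in standard CS (capturing the
    sparse residual outside the subspace).
    """
    n_dim = U.shape[0]
    random_rows = rng.normal(size=(n_random, n_dim)) / np.sqrt(n_random)
    return np.vstack([U.T, random_rows])

# Hypothetical 64-dimensional signal with an 8-D subspace component
# plus a 3-sparse residual; 8 + 12 = 20 measurements instead of 64.
U, _ = np.linalg.qr(rng.normal(size=(64, 8)))
x = U @ rng.normal(size=8)                    # subspace part
x[rng.choice(64, size=3, replace=False)] += 1.0  # sparse residual
y = hybrid_measurement_matrix(U, n_random=12) @ x
```

Recovery would then estimate the subspace coefficients directly from the first block of measurements and the sparse residual with a standard CS decoder on the remainder.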

Indexing (document details)
Advisor: Chellappa, Rama
Committee: Krishnaprasad, P. S., Papamarcou, Adrian, Srivastava, Ankur, Varshney, Amitabh
School: University of Maryland, College Park
Department: Electrical Engineering
School Location: United States -- Maryland
Source: DAI-B 71/03, Dissertation Abstracts International
Subjects: Electrical engineering, Computer science
Keywords: Compressive sensing, Computer vision, Multicamera systems, Object tracking, Particle filters
Publication Number: 3391385
ISBN: 978-1-109-63453-2