2D images and 3D LIDAR range scans provide very different but complementary information about a single subject and, when registered, can be used for a variety of exciting applications. Video sets can be fused with a 3D model and played in a single multi-dimensional environment. Imagery with temporal changes can be visualized simultaneously, unveiling changes in architecture, foliage, and human activity. Depth information for 2D photos and videos can be computed, and real-world measurements can be provided to users through simple interactions with traditional photographs. However, fusing multi-modality data is a challenging task given the repetition and ambiguity that often occur in man-made scenes, as well as the variety of properties that different renderings of the same subject can possess. Image sets collected over a period of time during which the lighting conditions and scene content may have changed, different artistic renderings, and varying sensor types, focal lengths, and exposure values can all contribute to visual variations within a data set. This dissertation addresses these obstacles with a common theme: incorporating contextual information to visualize regional properties that intuitively exist in each imagery source. We combine hard features, which quantify the strong, stable edges that often appear in imagery along object boundaries and depth changes, with soft features, which capture distinctive texture information that can be unique to specific areas. We show that our detector and descriptor techniques can provide more accurate keypoint match sets between highly varying imagery than many traditional and state-of-the-art techniques, allowing us to fuse and align photographs, videos, and range scans containing both man-made and natural content.
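The hard-plus-soft feature idea described above can be illustrated with a minimal sketch. This is not the dissertation's actual detector or descriptor; it is a toy combined descriptor whose "hard" component measures edge energy via local gradient magnitudes and whose "soft" component is a normalized intensity histogram, matched between two patch sets by nearest-neighbour distance. All function names and parameters here are hypothetical.

```python
import math

def hard_soft_descriptor(patch, bins=8):
    """Toy combined descriptor (illustrative only, not the author's method):
    'hard' part = mean gradient magnitude (edge energy),
    'soft' part = normalized intensity histogram (texture cue).
    `patch` is a 2D list of 8-bit intensity values."""
    h, w = len(patch), len(patch[0])
    mags = []
    for y in range(h - 1):
        for x in range(w - 1):
            gx = patch[y][x + 1] - patch[y][x]   # horizontal forward difference
            gy = patch[y + 1][x] - patch[y][x]   # vertical forward difference
            mags.append(math.hypot(gx, gy))
    hard = sum(mags) / len(mags) / 255.0         # edge energy, roughly in [0, 1]
    hist = [0.0] * bins
    for row in patch:
        for v in row:
            hist[min(v * bins // 256, bins - 1)] += 1
    soft = [c / (h * w) for c in hist]           # histogram normalized to sum to 1
    return [hard] + soft

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(descs_a, descs_b):
    """Greedy nearest-neighbour matching from descriptor set A to set B."""
    return [min(range(len(descs_b)), key=lambda j: euclid(d, descs_b[j]))
            for d in descs_a]
```

For example, a flat patch and a patch containing a vertical edge in one image match their counterparts in a second image even when the edge has shifted slightly, because both the edge-energy term and the histogram remain close.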
School: University of Missouri - Columbia
School Location: United States -- Missouri
Source: DAI-B 78/11(E), Dissertation Abstracts International
Keywords: 2D-3D fusion, Computer vision, Image matching, LIDAR, Multi-modality, Registration
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved