Dissertation/Thesis Abstract

Enabling increased complexity for realistic image synthesis
by Budge, Brian Christopher, Ph.D., University of California, Davis, 2009, 136; 3396900
Abstract (Summary)

This dissertation presents work that enables realistic image synthesis of very complex scenes. Some of the work describes approaches for modeling more complicated scenes, while other portions describe algorithms for accelerating the rendering of complex scenes. First, we propose a simple method for modeling and rendering refractive objects that are nested within one another. The technique allows the use of simpler scene geometry and can even improve rendering time for some images. The algorithm can be added easily to an existing ray tracer and makes no assumptions about the drawing primitives that have been implemented. It allows arbitrary nesting of dielectric objects while maintaining correct refraction, reducing a modeling task that is normally laborious to something nearly trivial.
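
One common way to realize this kind of nesting support, sketched below under illustrative names (the abstract does not spell out the mechanism), is to give each dielectric a priority and track the set of media the ray is currently inside; a boundary hidden inside a higher-priority medium is treated as a "false" hit and skipped, so overlapping geometry still yields correct media pairs for refraction.

```python
# Hedged sketch, not the dissertation's exact algorithm: priority-based
# tracking of nested dielectric media along a ray. `Dielectric`,
# `boundary_media`, and the priority scheme are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Dielectric:
    name: str
    ior: float       # index of refraction
    priority: int    # higher priority wins where media overlap

def boundary_media(interior, hit_obj, entering):
    """Return (incident_ior, transmitted_ior) at a surface hit, or None
    when the hit is a 'false' boundary hidden inside a higher-priority
    medium. `interior` lists the dielectrics the ray is currently inside."""
    # The medium governing the current region (air if the list is empty).
    current = max(interior, key=lambda d: d.priority) if interior else None
    if entering:
        if current is not None and current.priority > hit_obj.priority:
            return None  # hidden boundary: ignore and continue the ray
        n1 = current.ior if current else 1.0
        return (n1, hit_obj.ior)
    # Leaving hit_obj: only a true boundary if it governs the region.
    if current is not hit_obj:
        return None
    remaining = [d for d in interior if d is not hit_obj]
    nxt = max(remaining, key=lambda d: d.priority) if remaining else None
    return (hit_obj.ior, nxt.ior if nxt else 1.0)
```

For example, liquid modeled to interpenetrate its glass (priority 2) container resolves the overlap region in favor of the glass, so the modeler never has to build exactly abutting surfaces.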

A complementary technique described in this dissertation enables fast unbiased rendering of caustics, an effect that occurs when light reflects or refracts off shiny objects and strikes diffuse surfaces. The method uses importance-sampled Path Tracing with Caustic Forecasting. The technique is part of a straightforward rendering scheme that extends the Illumination by Weak Singularities method to allow fully unbiased global illumination with rapid convergence. A photon-shooting preprocess, similar to that used in Photon Mapping, generates photons that interact with specular geometry. These photons are then clustered, effectively dividing the scene into regions that contribute similar amounts of caustic lighting to the image. Finally, the photons are stored in spatial data structures associated with each cluster, and the clusters themselves are organized into a spatial data structure for fast searching. During rendering, the clusters are used to estimate the caustic-energy importance of a region, and the local photons aid in importance sampling, effectively reducing the number of samples required to capture caustic lighting.
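
The clustering-plus-importance-sampling step can be sketched as follows. This is a minimal illustration under assumed names (`cluster_photons`, `sample_cluster`, grid cells instead of the dissertation's actual spatial structures): photons are binned spatially, each bin's total power is tallied, and a bin is drawn with probability proportional to its power, which is a discrete importance sample of caustic energy.

```python
# Hedged sketch of the clustering idea; the real system uses more
# sophisticated spatial data structures than a uniform grid.
import bisect
import random

def cluster_photons(photons, cell=1.0):
    """Grid-based clustering: map each photon to the cell containing it.
    Photons are dicts with 'pos' (3-tuple) and 'power' (float) keys."""
    clusters = {}
    for p in photons:
        key = tuple(int(c // cell) for c in p["pos"])
        clusters.setdefault(key, []).append(p)
    return clusters

def sample_cluster(clusters, rng=random):
    """Pick a cluster with probability proportional to its total photon
    power; return the cluster key and the probability it was chosen."""
    keys = list(clusters)
    powers = [sum(p["power"] for p in clusters[k]) for k in keys]
    cdf, total = [], 0.0
    for w in powers:
        total += w
        cdf.append(total)
    u = rng.random() * total
    i = bisect.bisect_left(cdf, u)
    return keys[i], powers[i] / total
```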

We also investigate the RBSP tree, an acceleration structure related to k-d trees and BSP trees, in an attempt to speed up the primary building block of these rendering algorithms: ray tracing. The build algorithm uses a dynamic programming technique to compute coefficients that allow efficient evaluation of the surface area heuristic. This reduces the asymptotic runtime of the build and has a significant impact on tree-building time. Additionally, several simple observations lead to very fast ray-tree traversal of RBSPs. Our new traversal algorithm is based on state-of-the-art k-d tree traversal algorithms and increases the speed of ray tracing RBSPs by an order of magnitude. Throughout the build algorithm, we pay special attention to the robustness of the trees built. We analyze the intrinsic properties of the discrete oriented polytopes needed to build the data structure, and by exploiting these characteristics we can build robust, high-quality trees even faster. The result is a data structure capable of outperforming the k-d tree for ray tracing.
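
For readers unfamiliar with the surface area heuristic being minimized here, a minimal sketch (with illustrative cost constants; for RBSPs the areas are those of discrete oriented polytopes rather than axis-aligned boxes):

```python
# Hedged sketch of the standard SAH cost model; c_trav and c_isect are
# illustrative traversal/intersection cost constants, not the
# dissertation's tuned values.
def surface_area(lo, hi):
    """Surface area of an axis-aligned box given min/max corners."""
    dx, dy, dz = (hi[i] - lo[i] for i in range(3))
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(sa_node, sa_l, n_l, sa_r, n_r, c_trav=1.0, c_isect=1.5):
    """Expected cost of splitting a node: one traversal step plus each
    child's intersection cost weighted by the probability (area ratio)
    that a random ray through the node also passes through that child."""
    return c_trav + c_isect * (sa_l / sa_node * n_l + sa_r / sa_node * n_r)
```

The build searches candidate split planes for the one minimizing this cost; the dynamic programming contribution described above is a way to evaluate these area terms efficiently across many candidate planes.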

Another potential way to speed up ray tracing is through the use of alternative computing resources. We present a GPU ray tracing system for both static and dynamic scenes. The optimized GPU ray tracing approach, implemented within the CUDA framework, does not explicitly exploit ray coherency or architectural specifics and is therefore simple to implement, while still exceeding the performance of previous approaches. We achieve the best performance by empirically tuning the ray tracing kernel to the executing hardware. We additionally describe a straightforward parallel approach for approximate-quality k-d tree construction, aimed at multicore CPUs. The resulting hybrid ray tracer renders fully dynamic scenes with hundreds of thousands of triangles at interactive speeds. We discuss implementation details, analyze the performance, and provide a comparison to prior work.
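
The empirical-tuning idea reduces to a small search loop: time the kernel over candidate launch configurations on the target hardware and keep the fastest. A minimal sketch, with `launch` standing in for a real GPU kernel invocation and the configuration space left abstract (both are illustrative assumptions):

```python
# Hedged sketch of empirical kernel tuning; in the real system the
# candidates would be CUDA launch parameters such as block dimensions.
import time

def autotune(launch, configs, repeats=3):
    """Return the configuration with the lowest mean wall-clock time."""
    best, best_t = None, float("inf")
    for cfg in configs:
        t0 = time.perf_counter()
        for _ in range(repeats):
            launch(cfg)
        t = (time.perf_counter() - t0) / repeats
        if t < best_t:
            best, best_t = cfg, t
    return best
```

Because the tuning is driven by measurement rather than by architectural assumptions, the same kernel can be retuned for each generation of hardware.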

Finally, we have designed a rendering system to enable the fast rendering of highly complex out-of-core scenes. The system is built to allow hybrid GPU-CPU tasks, enabling the use of the aforementioned GPU ray tracing algorithm. The system consists of two primary components: an application layer that implements the basic rendering algorithm, and an out-of-core scheduling and data-management layer designed to help the application layer exploit hybrid computational resources (e.g., CPUs and GPUs) simultaneously. We describe the basic system architecture and the design decisions behind the system's data-management layer. We then outline an efficient implementation of a path tracer application in which GPUs perform functions such as ray tracing, shadow tracing, importance-driven light sampling, and surface shading. GPUs accelerate these components by factors ranging from two to twenty, resulting in a substantial overall increase in rendering speed. The path tracer scales well with the number of CPUs and GPUs and the amount of memory per node, as well as with the number of nodes. The result is a system that can render large, complex scenes with strong performance and scalability.
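
The core of such a scheduling layer is heterogeneous workers pulling task batches from a shared queue. The toy sketch below (names and structure are illustrative, not the dissertation's architecture, and it omits the out-of-core data residency logic entirely) shows the dispatch skeleton: any mix of worker functions, standing in for CPU and GPU consumers, drains one queue.

```python
# Hedged sketch of hybrid CPU/GPU task dispatch via a shared work queue.
import queue
import threading

def run_hybrid(tasks, workers):
    """Run `tasks` across `workers` (callables standing in for CPU/GPU
    consumers), each pulling from a shared queue; results keep task order."""
    q = queue.Queue()
    results = [None] * len(tasks)
    for i, t in enumerate(tasks):
        q.put((i, t))

    def drain(fn):
        while True:
            try:
                i, t = q.get_nowait()
            except queue.Empty:
                return  # queue exhausted: worker exits
            results[i] = fn(t)

    threads = [threading.Thread(target=drain, args=(fn,)) for fn in workers]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

In the real system the scheduler must additionally track which scene data are resident on each device and defer tasks whose data are still out of core.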

Indexing (document details)
Advisor: Joy, Kenneth I.
Committee: Max, Nelson; Owens, John D.
School: University of California, Davis
Department: Computer Science
School Location: United States -- California
Source: DAI-B 71/03, Dissertation Abstracts International
Subjects: Computer science
Keywords: Computer graphics, Global illumination, Ray tracing, Spatial data structure
Publication Number: 3396900
ISBN: 978-1-109-66285-6
Copyright © 2020 ProQuest LLC. All rights reserved.