Fibers are ubiquitous in our visual world. Hair is an important part of our appearance, and we wear and use clothes made from various types of fibers. Computer graphics models that can accurately simulate light scattering in these materials have applications in the production of media such as movies and video games. They can also significantly lower the cost of textile design by allowing designers to design fabrics entirely in silico, render realistic images for feedback, and then fabricate final products that look exactly as designed.
Recent research has shown that renderings of the highest quality—those showing realistic reflectance and complex geometric details—can be obtained by modeling individual fibers. However, this approach raises many open problems. For hair, the effect of fiber cross sections on light scattering behavior has never been carefully studied. For textiles, several competing approaches for fiber-level modeling exist, and it has been unclear which is the best. Furthermore, there has been no general procedure for matching textile models to real fabric appearance, and rendering such models requires considerable computing resources. In this dissertation, we present solutions to these open problems.
Our first contribution is a light scattering model for human hair fibers that more accurately takes into account how light interacts with their elliptical cross sections. The model has been validated by a novel measurement device that captures light scattered from a single hair fiber much more efficiently than previous methods.
Our second contribution is a general and powerful optimization framework for estimating parameters of a large class of appearance models from observations of real materials, which greatly simplifies development and testing of such models. We used the framework to systematically identify best practices in fabric modeling, including how to represent geometry and which light scattering model to use for textile fibers.
Our third contribution is a fast, precomputation-based, GPU-friendly algorithm for approximately rendering fiber-level textile models under environment illumination. Using only a single commodity GPU, our implementation can render high-resolution, supersampled images of micron-resolution fabrics with multiple scattering in tens of seconds, compared to tens of core-hours required by CPU-based algorithms. Our algorithm makes fiber-level models practical for applications that require quick feedback, such as interactive textile design.
We expect these contributions will make realistic physically-based virtual prototyping a reality.
Advisor: Bala, Kavita; Marschner, Stephen R.
Committee: Kleinberg, Robert D.; Strogatz, Steven H.; Van Loan, Charles F.
School Location: United States -- New York
Source: DAI-B 78/11(E), Dissertation Abstracts International
Keywords: Appearance modeling, Computer graphics, Light transport, Physical simulation, Reflectance, Rendering