Believably animated virtual human characters appear across many areas of computer science research and the technology industry, including video games, animated films, and virtual training environments. In these applications, animated virtual humans play roles ranging from friendly companions to helpful tutors to vicious villains. A key aspect that distinguishes these applications is user interaction: video games and virtual training environments require interaction between virtual human characters and human users, whereas animated films do not. However, the animation methods that produce the often highly believable behavior seen in animated films do not transfer well to interactive domains. This creates an expressivity gap between interactive and non-interactive animated virtual humans.
One ability that non-interactive animated characters possess and interactive characters lack is the capacity to reveal important information about their emotional state through glances or glares while remaining silent. The goal of this research is to provide this ability to interactive characters. It does so through a gaze model capable of displaying a desired selection of physical behaviors while directing gaze toward an arbitrary target. This model, called the Expressive Gaze Model (EGM), combines body movement based on motion capture data with eye movement based on visual neuroscience data.
In addition, this research provides an empirically determined preliminary mapping between gaze behavior and emotional attribution. The results demonstrate that by obtaining a set of low-level gaze behaviors annotated with emotional data, and then generating gaze shifts through the composition of these low-level behaviors, the attribution of emotion to the resulting gaze shift can be predicted. This indicates that gaze shifts displaying emotional states can be generated from these low-level behaviors without motion-capturing the displayed emotion itself.
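The composition idea above can be sketched as follows. This is a minimal illustrative sketch, not the dissertation's actual model: the class name `GazeBehavior`, the function `predict_attribution`, and the emotion weights are all hypothetical, standing in for the empirically annotated low-level behaviors and the attribution mapping the abstract describes.

```python
from dataclasses import dataclass

@dataclass
class GazeBehavior:
    """A low-level gaze behavior annotated with empirical emotion data.
    Names and weights here are invented for illustration."""
    name: str
    emotion_weights: dict  # emotion label -> annotated strength

def predict_attribution(behaviors):
    """Aggregate the per-behavior emotion annotations of a composed
    gaze shift into a normalized prediction of which emotion a viewer
    will attribute to it."""
    totals = {}
    for b in behaviors:
        for emotion, w in b.emotion_weights.items():
            totals[emotion] = totals.get(emotion, 0.0) + w
    s = sum(totals.values())
    # Normalize so the predicted attributions sum to 1.
    return {e: w / s for e, w in totals.items()}

# A gaze shift composed from two hypothetical low-level behaviors.
shift = [
    GazeBehavior("head-lowered", {"sadness": 0.8, "fear": 0.2}),
    GazeBehavior("eyes-averted", {"sadness": 0.5, "fear": 0.5}),
]
print(predict_attribution(shift))  # sadness dominates this composition
```

The point of the sketch is that the prediction is computed from the annotations of the constituent behaviors alone; no motion capture of a sad or fearful performance is needed to generate and label the composed gaze shift.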
|Committee:||Itti, Laurent, Rizzo, Albert, Tjan, Bosco|
|School:||University of Southern California|
|School Location:||United States -- California|
|Source:||DAI-B 69/12, Dissertation Abstracts International|
|Keywords:||Agents, Animation, Emotions, Gaze shifts, Procedural, Virtual embodied characters|
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved