
Dissertation/Thesis Abstract

Procedural animation of emotionally expressive gaze shifts in virtual embodied characters
by Lance, Brent Jason, Ph.D., University of Southern California, 2008, 159; 3341714
Abstract (Summary)

Believably animated virtual human characters appear in many diverse fields of computer science research and areas of the technology industry, including video games, animated films, and virtual training environments. In these applications, animated virtual humans play roles ranging from friendly companions, to helpful tutors, to vicious villains. A key aspect that distinguishes these applications is user interaction: video games and virtual training environments require interaction between virtual human characters and human users, whereas animated films do not. However, the animation methods that often produce highly believable behavior in animated films do not transfer well to interactive domains. This leads to an expressivity gap between animated virtual humans that are interactive and those that are not.

One ability that non-interactive animated characters possess, and interactive characters lack, is revealing important information about their emotional state through glances or glares while remaining silent. The goal of this research is to provide this ability to interactive characters. It does so through a gaze model capable of displaying a desired selection of physical behaviors while directing gaze towards an arbitrary target. This model of gaze, called the Expressive Gaze Model (EGM), combines body movement based on motion capture data with eye movement based on visual neuroscience data.
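The structure the abstract describes, a gaze shift directed at an arbitrary target with expressive behaviors layered on top, can be illustrated with a minimal sketch. All class names, fields, and values below are invented for illustration; the actual EGM drives body channels from motion-capture data and eye channels from visual-neuroscience models.

```python
from dataclasses import dataclass, field

@dataclass
class GazeShift:
    """Hypothetical illustration of an EGM-style gaze shift: every channel
    is directed toward one target, with expressive behaviors layered on."""
    target: tuple                                   # (x, y, z) gaze target
    behaviors: list = field(default_factory=list)   # expressive behavior labels

    def channels(self):
        # In the EGM, body movement derives from motion-capture data and
        # eye movement from visual-neuroscience data; here each channel is
        # simply aimed at the shared target.
        return {
            "eyes": self.target,    # saccade endpoint
            "head": self.target,    # head rotation toward target
            "torso": self.target,   # torso alignment toward target
            "expressive_layers": list(self.behaviors),
        }

shift = GazeShift(target=(1.0, 0.5, 2.0),
                  behaviors=["head_down", "eyes_averted"])
print(shift.channels()["expressive_layers"])
```

The key design point conveyed by the abstract is the separation of *where* the character looks (the target, shared across eyes, head, and torso) from *how* it looks there (the selected expressive behaviors).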

In addition, this research provides an empirically determined preliminary mapping between gaze behavior and emotional attribution. The results demonstrate that by obtaining a set of low-level gaze behaviors annotated with emotional data, and then generating gaze shifts through the composition of these low-level behaviors, the attribution of emotion to the resulting gaze shift can be predicted. This indicates that gaze shifts displaying emotional states can be generated from these low-level gaze behaviors without using motion capture of the displayed emotion.
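The mapping described above, predicting the emotion attributed to a composed gaze shift from the annotations of its component behaviors, can be sketched as follows. The behavior names, annotation scores, and the averaging rule are all assumptions made for illustration, not the dissertation's actual empirical mapping.

```python
# Hypothetical emotion annotations for low-level gaze behaviors; in the
# research these would be collected empirically from human raters.
ANNOTATIONS = {
    "head_down":    {"sadness": 0.8, "anger": 0.1},
    "eyes_averted": {"sadness": 0.6, "anger": 0.2},
    "head_thrust":  {"sadness": 0.1, "anger": 0.9},
}

def predict_attribution(behaviors):
    """Predict the emotion attributed to a gaze shift composed of the given
    low-level behaviors by averaging their per-emotion annotations
    (an assumed combination rule, for illustration only)."""
    totals = {}
    for b in behaviors:
        for emotion, score in ANNOTATIONS[b].items():
            totals[emotion] = totals.get(emotion, 0.0) + score
    return {e: s / len(behaviors) for e, s in totals.items()}

print(predict_attribution(["head_down", "eyes_averted"]))
```

The point of the sketch is the workflow, not the numbers: once each low-level behavior carries emotion annotations, new gaze shifts composed from those behaviors come with a predicted attribution, with no motion capture of the target emotion required.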

Indexing (document details)
Advisor: Marsella, Stacy
Committee: Itti, Laurent; Rizzo, Albert; Tjan, Bosco
School: University of Southern California
Department: Computer Science
School Location: United States -- California
Source: DAI-B 69/12, Dissertation Abstracts International
Subjects: Computer science
Keywords: Agents, Animation, Emotions, Gaze shifts, Procedural, Virtual embodied characters
Publication Number: 3341714
ISBN: 978-0-549-97350-8
Copyright © 2021 ProQuest LLC. All rights reserved.