Historically, most virtual human character research has focused on realism and emotion, interaction with humans, and discourse. Work on the spatial positioning of characters has largely addressed one-on-one conversations with humans or placing virtual characters side by side while talking, relying on conversation space as the main driver (if any) for character placement.
Movies and games rely on motion-capture (mocap) files and hard-coded functions to perform spatial movements, which require extensive technical knowledge just to have a character move from one place to another. Other methods use the Behavior Markup Language (BML), an XML-based format that describes character behaviors; BML realizers take this BML and perform the requested behavior(s) on the character(s). Waypoint and other spatial navigation schemes also exist, but they primarily focus on traversal rather than correct positioning. Each of these requires a fair amount of low-level detail and knowledge to write, and BML realizers are still in the early stages of development.
Theatre, movies, and television all utilize a form of play-script, which provides detailed information on what an actor must do spatially, and when, within a particular scene (that is, spatio-temporal direction). Play-scripts include annotations, in addition to the speech, that identify scene setups, character movements, and entrances/exits. Humans can take these play-scripts and easily perform a believable scene.
This research focuses on utilizing play-scripts to provide spatio-temporal direction to virtual characters within a scene. Because of the simplicity of creating a play-script, and of our algorithms for interpreting the scripts, we are able to provide a quick method of blocking scenes with virtual characters.
We focus not only on an all-virtual cast of characters, but also on human-controlled characters intermixing with the virtual characters in the scene. The key here is that human-controlled characters introduce a dynamic spatial component that affects how the virtual characters should perform the scene to ensure continuity, cohesion, and inclusion with the human-controlled character.
The algorithms that accomplish the blocking of a scene from a standard play-script are the core research contribution. These techniques combine part-of-speech tagging, named entity recognition, a rules engine, and strategically designed force-directed graphs. With these methods, we are able to map any play-script's spatial positioning of characters so that it closely matches a human-performed version of the same play-script. Human-based evaluations also indicate that these methods provide a qualitatively good performance.
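The abstract does not specify the force-directed formulation used, but the general idea can be sketched as follows: characters become nodes, script-derived relations (e.g., two characters sharing a dialogue exchange) become attractive springs, and all pairs repel so characters do not overlap on stage. The character names, coefficients, and relation list below are illustrative assumptions, not the dissertation's actual implementation.

```python
import math

def force_directed_positions(characters, relations, iterations=200,
                             spring=0.05, repulsion=2.0, rest_len=1.5, step=0.1):
    """Iteratively position characters on a 2D stage.

    characters: dict of name -> (x, y) initial stage positions
    relations:  list of (a, b) pairs that the script says should stand together
    """
    pos = {c: list(p) for c, p in characters.items()}
    names = list(pos)
    for _ in range(iterations):
        forces = {c: [0.0, 0.0] for c in names}
        # Pairwise repulsion keeps characters from crowding one spot.
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy or 1e-6
                d = math.sqrt(d2)
                f = repulsion / d2
                fx, fy = f * dx / d, f * dy / d
                forces[a][0] += fx; forces[a][1] += fy
                forces[b][0] -= fx; forces[b][1] -= fy
        # Spring attraction pulls related characters toward a rest distance.
        for a, b in relations:
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) or 1e-6
            f = spring * (d - rest_len)
            fx, fy = f * dx / d, f * dy / d
            forces[a][0] += fx; forces[a][1] += fy
            forces[b][0] -= fx; forces[b][1] -= fy
        # Move every character a small step along its net force.
        for c in names:
            pos[c][0] += step * forces[c][0]
            pos[c][1] += step * forces[c][1]
    return {c: tuple(p) for c, p in pos.items()}

# Hypothetical example: ROMEO and JULIET share dialogue, so a spring links
# them; the unrelated NURSE is only repelled and drifts farther away.
layout = force_directed_positions(
    {"ROMEO": (0.0, 0.0), "JULIET": (3.0, 0.0), "NURSE": (1.5, 1.0)},
    [("ROMEO", "JULIET")],
)
```

After the iterations settle, the related pair ends up closer to each other than either is to the unrelated character, which is the qualitative behavior the blocking relies on.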
Potential applications include: a rehearsal tool for actors; a director tool to help create a play-script; a controller for virtual human characters in games or virtual environments; or a planning tool for positioning people in an industrial environment.
|Advisor:||Youngblood, G. Michael|
|Committee:||Hartley, Andrew, Souvenir, Richard, Subramanian, Kalpathi, Xiao, Jing|
|School:||The University of North Carolina at Charlotte|
|School Location:||United States -- North Carolina|
|Source:||DAI-B 79/09(E), Dissertation Abstracts International|
|Subjects:||Artificial intelligence, Computer science|
|Keywords:||Force-directed graphs, Natural language processing, Play-scripts, Spatio-temporal reasoning, Virtual environments|
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved