In this work, we address the problem of understanding what may have happened in a goal-based deliberative agent's environment after the occurrence of exogenous actions and events. Such an agent periodically observes information about the state of the world, but this information is incomplete, and the reasons for state changes are not observed. We propose methods a goal-based agent can use to construct internal, causal explanations of its observations based on a model of its environment. These explanations comprise a series of inferred actions and events that have occurred and continue to occur in its world, as well as assumptions about the initial state of the world. We show that an agent can more accurately predict future events and states by reference to these explanations, and thereby more reliably achieve its goals. This dissertation presents the following novel contributions: (1) a formalization of the problems of achieving goals, understanding what has happened, and updating an agent's model in a partially observable, dynamic world with partially known dynamics; (2) a complete agent (DHAGENT) that achieves goals in such environments more reliably than existing agents; (3) a novel algorithm (DISCOVERHISTORY) and technique (DISCOVERHISTORY search) for rapidly and accurately constructing, through iterative refinement, causal explanations of what may have happened in these environments; (4) an examination of formal properties of these techniques; (5) a novel method (EML), capable of inferring improved models of an environment based on a small number of training scenarios; (6) experiments supporting performance claims about the novel methods described; and (7) an analysis of the efficiency of two DISCOVERHISTORY algorithm implementations.
Committee: Aha, David W.; Boicu, Mihai; Menasce, Daniel; de Jong, Kenneth
School: George Mason University
School Location: United States -- Virginia
Source: DAI-B 79/03(E), Dissertation Abstracts International
Subjects: Artificial intelligence, Computer science
Keywords: Autonomous agents, Deliberative action, Explanation generation, Goal achievement, Learning environment models, Partially observable environments