Asking questions is a hallmark of human cognition that enables us to actively learn about, navigate in, and adapt to the world we inhabit. Questions drive scientific discoveries and clarify ambiguous situations. But do people ask good questions? How can we measure question quality? What are the computational building blocks of question asking? And what would it take to build a model that can generate human-like questions?
Chapter 2 addresses the first two questions. To measure question quality, I develop a Bayesian ideal-observer model that evaluates questions using the information-theoretic measure Expected Information Gain. The model evaluates natural-language questions that people asked in a game-based task to resolve their uncertainty about an ambiguous game state. People asked a variety of useful and goal-directed questions, yet they rarely asked the best questions. However, people strongly preferred the best questions when choosing from a list of provided questions, suggesting a bottleneck in human question generation but not in question evaluation.
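The core quantity here, Expected Information Gain, can be sketched in a few lines: a question's value is the learner's prior uncertainty (Shannon entropy) minus the expected uncertainty remaining after hearing the answer. The sketch below is illustrative, not the dissertation's implementation; it assumes a discrete hypothesis space and deterministic answers, and the function names are hypothetical.

```python
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy (in bits) of a distribution {hypothesis: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_information_gain(prior, answer_fn, question):
    """EIG of a question: prior entropy minus expected posterior entropy.

    prior     : {hypothesis: probability} over possible world states
    answer_fn : answer_fn(question, hypothesis) -> the answer that
                hypothesis would produce (assumed deterministic here)
    """
    # Group hypotheses by the answer they would yield to this question.
    by_answer = defaultdict(dict)
    for h, p in prior.items():
        by_answer[answer_fn(question, h)][h] = p
    # Expected entropy after observing the answer.
    expected_posterior_entropy = 0.0
    for ans, group in by_answer.items():
        p_ans = sum(group.values())
        posterior = {h: p / p_ans for h, p in group.items()}
        expected_posterior_entropy += p_ans * entropy(posterior)
    return entropy(prior) - expected_posterior_entropy
```

For example, with four equally likely hypotheses, a yes/no question that splits them evenly is worth exactly one bit, while a question whose answer is the same under every hypothesis is worth zero.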
Chapter 3 addresses the next two questions. I present a probabilistic model of question generation. The key idea is to represent questions as short programs that, when executed on the state of the world, output an answer. The model defines a probability distribution over a complex, compositional space of question programs, favoring concise programs that are useful for resolving ambiguous game states. These include questions that people actually asked, or could plausibly have asked, while playing the game from Chapter 2. Furthermore, I found that the model generalizes reasonably well to novel game contexts that it was not trained on.
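The questions-as-programs idea can be made concrete with a toy example. Here a question is an executable function over a game state, built from compositional primitives; the grid contents, primitive names, and mini-language below are all hypothetical, intended only to illustrate the representation, not the dissertation's actual grammar.

```python
# A hypothetical game state: a small grid with hidden "ship" tiles.
state = {
    "board": [
        ["water", "ship", "water"],
        ["water", "ship", "water"],
        ["water", "water", "ship"],
    ]
}

# Compositional primitives of a toy question language.
def count(pred, s):
    """Count grid cells satisfying a predicate."""
    return sum(pred(cell) for row in s["board"] for cell in row)

def is_ship(cell):
    return cell == "ship"

# "How many ship tiles are there?" expressed as a short question program:
# executing it on the world state yields the answer.
how_many_ship_tiles = lambda s: count(is_ship, s)

print(how_many_ship_tiles(state))  # -> 3
```

Because programs compose, a prior that favors short programs naturally prefers concise questions, and the same primitives can be recombined to generate questions never seen in training.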
In Chapter 4, I go further and extend the task and model from Chapter 2 to address two important aspects of question asking that neural network approaches typically struggle with. The first aspect is flexibly adjusting one's questioning to new goals. The second is learning from the answer to one's question and asking good follow-up questions. I show that people's sensitivity to both of these aspects is captured by my model.
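The follow-up-question aspect amounts to a Bayesian belief update: after hearing an answer, hypotheses inconsistent with it are discarded and the remainder renormalized, and subsequent questions are then scored against this posterior. The sketch below is a minimal illustration under the same simplifying assumptions as before (discrete hypotheses, deterministic answers); the function name and toy example are my own, not the dissertation's.

```python
def update_belief(prior, answer_fn, question, observed_answer):
    """Bayesian belief update: keep only hypotheses consistent with
    the observed answer, then renormalize."""
    consistent = {h: p for h, p in prior.items()
                  if answer_fn(question, h) == observed_answer}
    z = sum(consistent.values())
    return {h: p / z for h, p in consistent.items()}

# Toy example: three hidden states; ask "is the hidden state 'A'?"
prior = {"A": 0.5, "B": 0.25, "C": 0.25}
answer_fn = lambda q, h: (h == "A")
posterior = update_belief(prior, answer_fn, "is it A?", False)
print(posterior)  # -> {'B': 0.5, 'C': 0.5}
```

A good follow-up question is then simply the one with the highest expected information gain under `posterior` rather than under the original prior.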
Together, these chapters present comprehensive work towards a unified, computational account of question evaluation and question generation.
Advisor: Gureckis, Todd M.
Committee: Lake, Brenden; Barker, Chris; Bonawitz, Elizabeth; Ma, Weiji
School: New York University
School Location: United States -- New York
Source: DAI-B 81/5(E), Dissertation Abstracts International
Subjects: Cognitive psychology; Artificial intelligence; Computer science
Keywords: Active learning; Bayesian modeling; Computational modeling; Information search; Question asking; Question generation