Deep recurrent networks can build complex representations of sequential data, enabling tasks such as translating text from one language into another. These networks often employ an attention mechanism that allows them to focus on the most relevant input representations. The attention distribution can provide insight into the strategies a network uses to identify structure in the input sequence. This study investigates attention in a deep recurrent network trained to annotate text, to determine how the distribution of information in the text affects learning and network performance. Results suggest that reversing the input text makes it difficult for the network to discover higher-order linguistic structure. This study contributes to an understanding of how our brains might process sequential data.
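The attention weights described above are commonly computed as a softmax over alignment scores between the current decoder state and each encoder state, and it is these weights that can be inspected to see where the network focuses. The following is a minimal sketch of one such (additive, Bahdanau-style) formulation with toy NumPy arrays; the dimensions, parameter names, and use of NumPy are illustrative assumptions, not the thesis's actual implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    T, d = 5, 8                                   # sequence length and hidden size (toy values)
    encoder_states = rng.standard_normal((T, d))  # h_1..h_T from a recurrent encoder
    decoder_state = rng.standard_normal(d)        # current decoder state s_t

    # Parameters of the alignment model (random here, purely for illustration)
    W_enc = rng.standard_normal((d, d))
    W_dec = rng.standard_normal((d, d))
    v = rng.standard_normal(d)

    # Alignment scores e_i = v . tanh(W_enc h_i + W_dec s_t)
    scores = np.tanh(encoder_states @ W_enc.T + decoder_state @ W_dec.T) @ v

    # Attention weights: a softmax over input positions; inspecting these
    # weights shows which parts of the input text receive the most focus.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()

    # Context vector: attention-weighted sum of encoder states
    context = weights @ encoder_states
    print(weights.round(3), context.shape)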
Advisor: Wright, Cameron H.G.; Barrett, Steve
Committee: Prather, Jon; Sommer, Michael
School: University of Wyoming
School Location: United States -- Wyoming
Source: MAI 57/01M(E), Masters Abstracts International
Subjects: Information science; Artificial intelligence; Computer science
Keywords: Artificial neural networks; Deep learning; Information entropy; Recurrent networks