
Dissertation/Thesis Abstract

Attention-Dependent and Continuous Representation Learning using Deep Autoencoders
by Dellana, Ryan, Ph.D., North Carolina Agricultural and Technical State University, 2019, 104; 22583727
Abstract (Summary)

The ability to learn low-dimensional representations of high-dimensional data is foundational to general cognitive functions such as comparison, abstraction, prediction, and planning. Additionally, systems that can learn in an unsupervised, continuous fashion should be able to automatically adapt to changes in the target distribution and reach levels of accuracy beyond what can be achieved using static datasets. To achieve these capabilities, we propose two approaches: Attention-Dependent Autoencoders (ADA) and Autoencoder Leader-follower Clustering (ALC).

ADA produces vector embeddings of images, broadly analogous to the declarative memory engrams formed by the entorhinal cortex-hippocampal system. For these embedded representations to be useful within a cognitive architecture, they must be compact, their relative vector distances should reflect semantic distance, and the encoding process should allow modulation via attentional mechanisms. It must also be possible to decode the embeddings back to the input space, and critically, the reconstructions must preserve the semantics of the original data with respect to the cognitive system as a whole. To address these goals, we use "conservational loss" to train an autoencoder that generates reconstructions which conserve the activations of a single-class semantic segmenter, which we treat as a visual attention model. The resulting autoencoder preserves class-specific regions of images and can be modulated using the segmentation masks as attention vectors. The semantic embeddings produced by the encoder are shown to be amenable to distance metrics, and the reconstructions of the decoder are shown to preserve the target class better than those of a generic autoencoder, even appearing competitive with JPEG at lower bit rates. We also suggest the use of autoencoder conservational loss as a post-deployment error metric for the attention model and discuss the broader implications of conservational loss in general.
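Concretely, conservational loss compares the attention model's activations on the original image with its activations on the reconstruction, rather than comparing raw pixels. A minimal NumPy sketch of that idea follows, using toy linear stand-ins for the segmenter and the autoencoder; the dimensions, weights, and function names below are illustrative assumptions, not the dissertation's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): a frozen linear "segmenter" S and a
# linear autoencoder (encoder E, decoder D). In the dissertation these
# are deep networks; here we only illustrate the shape of the loss.
d_in, d_code = 16, 4
W_seg = rng.normal(size=(d_in, d_in))            # frozen attention/segmenter model
W_enc = rng.normal(size=(d_code, d_in)) * 0.1    # encoder weights
W_dec = rng.normal(size=(d_in, d_code)) * 0.1    # decoder weights

def segment(x):
    """Activations of the frozen single-class segmenter (ReLU layer)."""
    return np.maximum(W_seg @ x, 0.0)

def reconstruct(x):
    """Autoencoder round trip: decode(encode(x))."""
    return W_dec @ (W_enc @ x)

def conservational_loss(x):
    """Penalize changes in the segmenter's activations, not in pixels:
    the reconstruction must 'conserve' what the attention model sees."""
    s_orig = segment(x)
    s_recon = segment(reconstruct(x))
    return float(np.mean((s_orig - s_recon) ** 2))

x = rng.normal(size=d_in)
print(conservational_loss(x))
```

Training would backpropagate this loss into the encoder and decoder while keeping the segmenter frozen, so the autoencoder learns to preserve exactly the features the attention model responds to.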

ALC addresses the catastrophic-interference problem in continuous learning with a gateless mixture of experts generated through autoencoder cloning: clones model different portions of a shifting sample distribution, organized by a leader-follower clustering algorithm that uses reconstruction error as its distance metric. To address scalability, ALC employs shared pseudo-rehearsal, allowing the autoencoders of the ensemble to gradually consolidate their learned patterns into fewer autoencoders.
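The routing rule above can be sketched in a few lines: each incoming sample is assigned to the ensemble member that reconstructs it best, and a new expert is spawned when no member's reconstruction error falls below a novelty threshold. The tiny tied-weight linear autoencoders, the threshold value, and the two-cluster data below are illustrative assumptions; cloning from a trained parent and the shared pseudo-rehearsal consolidation step are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

class TinyAE:
    """Hypothetical tied-weight linear autoencoder expert (stand-in for a deep one)."""
    def __init__(self, d_in, d_code):
        self.W = rng.normal(size=(d_code, d_in)) * 0.1  # decode uses W.T (tied weights)

    def recon(self, x):
        return self.W.T @ (self.W @ x)

    def error(self, x):
        """Reconstruction error doubles as the clustering distance metric."""
        return float(np.mean((x - self.recon(x)) ** 2))

    def step(self, x, lr=0.01):
        # One gradient-descent step on reconstruction error.
        code = self.W @ x
        r = self.recon(x) - x
        grad = np.outer(code, r) + self.W @ np.outer(r, x)
        self.W -= lr * grad

def leader_follower(samples, d_code=2, threshold=1.0):
    """Route each sample to the expert that reconstructs it best;
    spawn a new expert when no expert is close enough (novelty)."""
    experts = []
    for x in samples:
        if experts:
            errs = [e.error(x) for e in experts]
            best = int(np.argmin(errs))
            if errs[best] < threshold:
                experts[best].step(x)        # follower: refine the nearest expert
                continue
        new = TinyAE(len(x), d_code)         # leader: new expert for a novel region
        for _ in range(50):
            new.step(x)
        experts.append(new)
    return experts

# Two well-separated clusters (orthogonal means, so one linear AE cannot cover both).
mu1 = np.array([2.0, 2.0, 2.0, 2.0])
mu2 = np.array([2.0, -2.0, 2.0, -2.0])
data = np.vstack([mu1 + rng.normal(0, 0.1, size=(30, 4)),
                  mu2 + rng.normal(0, 0.1, size=(30, 4))])
experts = leader_follower(list(data))
print(len(experts))
```

Using reconstruction error as the distance metric is what makes the mixture gateless: no separate gating network is needed, because each expert's own competence on a sample decides the routing.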

Indexing (document details)
Advisor: Anwar, Mohd
Committee: Esterline, Albert; Xu, Jinsheng; Stephens, Joseph D.; Severa, William M.
School: North Carolina Agricultural and Technical State University
Department: Computer Science
School Location: United States -- North Carolina
Source: DAI-B 81/3(E), Dissertation Abstracts International
Subjects: Computer science, Cognitive psychology, Artificial intelligence
Keywords: Attention, Autoencoder, Continuous learning, Deep learning, Embeddings, Representation learning
Publication Number: 22583727
ISBN: 9781088322055
Copyright © 2021 ProQuest LLC. All rights reserved.