Dissertation/Thesis Abstract

Novel Function Approximation Techniques for Large-scale Reinforcement Learning
by Wu, Cheng, Ph.D., Northeastern University, 2010, 130; 3443846
Abstract (Summary)

Function approximation can be used to improve the performance of reinforcement learners. Traditional techniques, including Tile Coding and Kanerva Coding, can give poor performance when applied to large-scale problems. In our preliminary work, we show that this poor performance is caused by prototype collisions and uneven prototype visit frequency distributions. We describe our adaptive Kanerva-based function approximation algorithm, based on dynamic prototype allocation and adaptation. We show that probabilistic prototype deletion with prototype splitting can make the distribution of visit frequencies more uniform, and that dynamic prototype allocation and adaptation can reduce prototype collisions. This approach can significantly improve the performance of a reinforcement learner.
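The adaptive step described above can be sketched roughly as follows. This is a minimal illustration, not the dissertation's algorithm: the bit-vector state encoding, the Hamming-distance activation rule, and all thresholds and names are assumptions made for the example.

```python
import random

STATE_BITS = 16

def activation(prototype, state, radius=3):
    """A state activates a prototype if their Hamming distance is within a radius."""
    return bin(prototype ^ state).count("1") <= radius

def adapt_prototypes(prototypes, visits, rng, delete_prob=0.5):
    """Probabilistically delete rarely visited prototypes and split heavily
    visited ones by adding a slightly perturbed copy, pushing the
    visit-frequency distribution toward uniformity."""
    mean = sum(visits.values()) / len(visits)
    new_protos = []
    for p in prototypes:
        v = visits.get(p, 0)
        if v < 0.5 * mean and rng.random() < delete_prob:
            continue  # probabilistic deletion of a rarely visited prototype
        new_protos.append(p)
        if v > 2.0 * mean:
            # split: keep the prototype and add a copy with one random bit flipped
            new_protos.append(p ^ (1 << rng.randrange(STATE_BITS)))
    return new_protos

rng = random.Random(0)
prototypes = [rng.randrange(1 << STATE_BITS) for _ in range(8)]
visits = {p: rng.randrange(100) for p in prototypes}
prototypes = adapt_prototypes(prototypes, visits, rng)
```

Because deletion only targets prototypes below half the mean visit count, at least one prototype always survives a round of adaptation.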

We then show that fuzzy Kanerva-based function approximation can reduce the similarity between the membership vectors of state-action pairs, giving even better results. We use Maximum Likelihood Estimation to adjust the variances of basis functions and tune the receptive fields of prototypes. This approach completely eliminates prototype collisions, and greatly improves the ability of a Kanerva-based reinforcement learner to solve large-scale problems.
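The effect of tuning receptive fields can be illustrated with a toy one-dimensional example. This sketch uses Gaussian basis functions as in the paragraph above, but the prototype positions and variances are set by hand for illustration; the Maximum Likelihood Estimation step itself is omitted.

```python
import math

def membership(state, prototypes, sigmas):
    """Fuzzy membership grade of a state in each prototype's Gaussian receptive field."""
    return [math.exp(-((state - p) ** 2) / (2 * s ** 2))
            for p, s in zip(prototypes, sigmas)]

def cosine(u, v):
    """Cosine similarity between two membership vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

protos = [0.0, 5.0, 10.0]
# Wide receptive fields: two distinct states get very similar membership vectors.
m1_wide, m2_wide = membership(2.0, protos, [4.0] * 3), membership(8.0, protos, [4.0] * 3)
# Narrow receptive fields: the same two states become much easier to tell apart.
m1_narrow, m2_narrow = membership(2.0, protos, [1.0] * 3), membership(8.0, protos, [1.0] * 3)
```

Shrinking the variances reduces the overlap between the two states' membership vectors, which is the mechanism by which tuned receptive fields reduce collisions.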

Since the number of prototypes remains hard to select, we describe a more effective approach for adaptively selecting the number of prototypes. Our new rough sets-based Kanerva function approximation uses rough set theory to explain how prototype collisions occur. Our algorithm eliminates unnecessary prototypes by replacing the original prototype set with its reduct, and reduces prototype collisions by splitting equivalence classes that contain two or more state-action pairs. The approach can adaptively select an effective number of prototypes and greatly improve a Kanerva-based reinforcement learner's ability to solve large-scale problems.
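The rough-sets view can be sketched as follows: state-action pairs that activate the same set of prototypes fall into one equivalence class, a class holding two or more pairs is a collision, and a reduct keeps only the prototypes needed to preserve the partition. The binary codes, the greedy reduct search, and the helper names below are illustrative assumptions, not the dissertation's implementation.

```python
from collections import defaultdict

def partition(pairs, prototypes, radius=1):
    """Group state-action pairs by which prototypes they activate
    (Hamming distance within a radius)."""
    classes = defaultdict(list)
    for x in pairs:
        key = tuple(bin(x ^ p).count("1") <= radius for p in prototypes)
        classes[key].append(x)
    return classes

def reduct(pairs, prototypes, radius=1):
    """Greedily drop prototypes whose removal leaves the partition unchanged."""
    kept = list(prototypes)
    target = {frozenset(c) for c in partition(pairs, kept, radius).values()}
    for p in list(kept):
        trial = [q for q in kept if q != p]
        if trial and {frozenset(c) for c in partition(pairs, trial, radius).values()} == target:
            kept = trial
    return kept

pairs = [0b0000, 0b0001, 0b1110, 0b1111]
protos = [0b0000, 0b1111, 0b0011]
reduced = reduct(pairs, protos)
```

In this toy instance the prototype 0b0000 is redundant: removing it leaves every equivalence class intact, so the reduct approximates the state-action space with fewer prototypes. The class {0b1110, 0b1111} still holds two pairs, which is exactly the kind of collision that class splitting would then resolve by adding a distinguishing prototype.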

Finally, we apply function approximation techniques to scale up the ability of reinforcement learners to solve a real-world application: spectrum management in cognitive radio networks. We show that a multi-agent reinforcement learning approach with decentralized control can be used to select transmission parameters and enable efficient assignment of spectrum and transmit powers. However, the requirement of RL-based approaches that an estimated value be stored for every state greatly limits the size and complexity of CR networks that can be solved. We show that function approximation can reduce the memory used for large networks with little loss of performance. We conclude that our spectrum management approach based on reinforcement learning with Kanerva-based function approximation can significantly reduce interference to licensed users, while maintaining a high probability of successful transmissions in a cognitive radio ad hoc network.
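The memory saving comes from representing a value estimate as a sum of prototype weights rather than one table entry per state-action pair, so storage scales with the number of prototypes instead of the state space. The sketch below illustrates that idea in the abstract; the binary encoding of a state-action pair, the activation radius, and the learning rate are assumptions made for the example, not parameters from the dissertation.

```python
def q_value(sa, prototypes, weights, radius=2):
    """Approximate Q(sa) as the sum of weights of the prototypes it activates."""
    active = [i for i, p in enumerate(prototypes) if bin(sa ^ p).count("1") <= radius]
    return sum(weights[i] for i in active), active

def q_update(sa, target, prototypes, weights, alpha=0.1, radius=2):
    """Move Q(sa) toward a target value, spreading the correction over
    the active prototypes' weights."""
    q, active = q_value(sa, prototypes, weights, radius)
    if active:
        delta = alpha * (target - q) / len(active)
        for i in active:
            weights[i] += delta
    return weights

# Two prototypes stand in for an arbitrarily large state-action space.
protos = [0b0000, 0b1111]
weights = [0.0, 0.0]
for _ in range(50):
    weights = q_update(0b0000, 1.0, protos, weights)
```

Repeated updates drive the approximate value of the visited state-action pair toward its target while only two weights are ever stored.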

Indexing (document details)
Advisor: Meleis, Waleed
Committee: Aslam, Javed A., Dy, Jennifer
School: Northeastern University
Department: Electrical and Computer Engineering
School Location: United States -- Massachusetts
Source: DAI-B 72/04, Dissertation Abstracts International
Subjects: Computer Engineering
Keywords: Cognitive radio network, Function approximation, Fuzzy logic, Reinforcement learning, Rough set
Publication Number: 3443846
ISBN: 9781124497365
Copyright © 2019 ProQuest LLC. All rights reserved.