The game of Snake is used as a novel application of the TD(λ) algorithm proposed by Sutton. A reinforcement learning technique for producing computer-controlled players is documented. Using value function approximation with multilayer artificial neural networks and the actor-critic architecture, computer players capable of playing the game of Snake are created. The adaptation required to the standard neural network backpropagation procedure is documented. The proposed technique not only provides reasonable player performance; its application is also unique, as this approach to Snake has not previously been documented. Player performance is evaluated over sets of trials and compared against an existing machine learning technique, with learning curves used to visualize the results. Although the Snake players sometimes achieve lower scores than with the existing method, the technique produces agents that accumulate scores much more efficiently.
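The abstract names TD(λ) with value function approximation. As an illustration only — the thesis uses multilayer neural networks, but the update is clearest with a linear approximator — the following is a minimal sketch of a TD(λ) episode with accumulating eligibility traces; the function name and parameters are hypothetical, not drawn from the thesis.

```python
import numpy as np

def td_lambda_episode(features, rewards, w, alpha=0.1, gamma=0.9, lam=0.8):
    """One episode of TD(lambda) with a linear value function V(s) = w . x(s).

    features: feature vector x(s_t) for each nonterminal state visited
    rewards:  rewards[t] is the reward for the transition out of state t
    Returns the updated weight vector (a sketch, not the thesis's network).
    """
    z = np.zeros_like(w)  # eligibility trace, one component per weight
    T = len(rewards)
    for t in range(T):
        v_t = w @ features[t]
        # the terminal state's value is defined to be zero
        v_next = 0.0 if t == T - 1 else w @ features[t + 1]
        delta = rewards[t] + gamma * v_next - v_t  # TD error
        z = gamma * lam * z + features[t]          # decay trace, add current state
        w = w + alpha * delta * z                  # credit all recently visited states
    return w
```

With a neural network, as in the thesis, the trace is kept per weight and the gradient of the network's output replaces `features[t]`, which is the adaptation to standard backpropagation the abstract refers to.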
School: University of Louisville
School Location: United States -- Kentucky
Source: MAI 48/06M, Masters Abstracts International
Subjects: Computer Engineering, Computer science
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved