Dissertation/Thesis Abstract

Utilizing Various Neural Network Architectures to Play a Game Developed for Human Players
by Arender, Blake, M.S., The University of Mississippi, 2018, 51; 10752390
Abstract (Summary)

Neural networks have received an explosive amount of attention and interest in recent years. Although neural network algorithms have existed for many decades, it was not until recent advances in computer hardware that they saw widespread use. This is in no small part due to the success these algorithms have had in tasks such as image classification, voice recognition, and game playing, among many other applications. Thanks to recent strides in hardware development, most importantly advances in Graphics Processing Units and modern GPU computing, neural networks can now solve tasks at a much higher success rate than other machine learning algorithms [3]. With the success of artificial intelligence agents such as DeepMind's AlphaGo and its stronger descendant AlphaGo Zero, the capabilities of Deep Neural Networks (DNNs) have received unprecedented mainstream coverage, showing people across the world that these algorithms are capable of superhuman levels of play. After learning all of this, I wanted to find a way to apply artificial intelligence to a game that is quite difficult for human players: Cuphead. Cuphead is a classically styled game with terrific level design and music. It is a 2D Run N' Gun shooter that also features continuous boss fights. The nature of the game provides clearly defined win and loss conditions for each level, making it an ideal environment for testing various artificially intelligent agents. For my thesis, I used a deep Convolutional Neural Network (CNN) to develop an agent capable of playing the first Run N' Gun level of the game. I implemented this neural network using Keras and supervised learning. In this thesis, I outline the exact process I used to train this agent. I also explain the research I have done into how a reinforcement learning agent could be implemented for this game, as the supervised learning agent is currently only capable of playing the level on which it was trained, whereas I believe a sufficiently developed reinforcement learning agent could learn to play almost any level in the game.
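
As a concrete illustration of the kind of supervised-learning setup the abstract describes, the following is a minimal sketch of a Keras convolutional network that maps captured screen frames to controller inputs. The frame resolution, layer sizes, and action set here are illustrative assumptions, not the configuration used in the thesis.

    # Minimal sketch (assumed architecture): a Keras CNN mapping game frames
    # to button presses. Input shape, filter counts, and NUM_ACTIONS are
    # hypothetical values for illustration only.
    from tensorflow.keras import layers, models

    NUM_ACTIONS = 6  # hypothetical action set: left, right, jump, shoot, dash, no-op

    def build_agent(frame_shape=(120, 160, 1)):
        model = models.Sequential([
            layers.Conv2D(32, (8, 8), strides=4, activation="relu",
                          input_shape=frame_shape),
            layers.Conv2D(64, (4, 4), strides=2, activation="relu"),
            layers.Conv2D(64, (3, 3), strides=1, activation="relu"),
            layers.Flatten(),
            layers.Dense(512, activation="relu"),
            # One output per controller action; supervised learning fits
            # these to button presses recorded from human play.
            layers.Dense(NUM_ACTIONS, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Training on (frame, recorded_button) pairs gathered from human playthroughs:
    # model = build_agent()
    # model.fit(frames, actions_one_hot, epochs=10, batch_size=64)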

Indexing (document details)
Advisor: Hassan, Naeemul
Committee: Chen, Yixin; Wilkins, Dawn
School: The University of Mississippi
Department: Engineering Science
School Location: United States -- Mississippi
Source: MAI 58/04M(E), Masters Abstracts International
Source Type: DISSERTATION
Subjects: Artificial intelligence, Computer science
Keywords: Artificial intelligence, Game theory, Reinforcement learning, Supervised learning
Publication Number: 10752390
ISBN: 9781392002407