Limited research exists concerning which machine learning algorithms are best suited to scenarios of strategic interest. Therefore, by comparing the performance of two popular algorithms (genetic and model-based reinforcement learning), this thesis demonstrates which algorithm performs best for particular strategic environments.
To maintain generality, performance is measured along four axes across varying environments. These axes are: theoretically guaranteed reward, fewest iterations needed to reach an acceptable reward, highest limit of learning, and tolerance of heterogeneous opponent environments. Accordingly, the environments are two-person games with varying degrees of opponent-strategy heterogeneity. Measurements are obtained by comparing reward against learning iterations while the algorithms compete against statically designed opponents.
Experimental results indicate that genetic learning outperforms model-based learning in total average payoff, while model-based agents reach acceptable reward in fewer iterations and have better performance guarantees. Neither method appears more affected than the other by increasing opponent heterogeneity.
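The experimental setup described above, reward versus learning iterations against a statically designed opponent, can be sketched in miniature. The game, opponent bias, population size, and mutation rate below are illustrative assumptions, not the thesis's actual parameters: a genetic learner evolving mixed strategies and a simple model-based learner (which estimates the opponent's move distribution and best-responds) each play a matching-pennies variant against a fixed, biased opponent.

```python
import random

# Hypothetical static opponent: plays "H" with a fixed probability.
# All parameters here are illustrative, not taken from the thesis.
OPP_P_HEADS = 0.7

def payoff(agent_move, opp_move):
    # The agent wins (+1) when moves match, loses (-1) otherwise.
    return 1 if agent_move == opp_move else -1

def opponent_move(rng):
    return "H" if rng.random() < OPP_P_HEADS else "T"

def model_based_agent(iterations, rng):
    """Estimate the opponent's move distribution and best-respond to it."""
    counts = {"H": 1, "T": 1}  # Laplace-smoothed move counts
    total = 0
    for _ in range(iterations):
        opp = opponent_move(rng)
        move = max(counts, key=counts.get)  # best response to the model
        total += payoff(move, opp)
        counts[opp] += 1
    return total / iterations

def genetic_agent(iterations, rng, pop_size=10):
    """Evolve a population of mixed strategies (each a P[play "H"])."""
    pop = [rng.random() for _ in range(pop_size)]
    total = played = 0
    for _ in range(iterations // pop_size):
        # Evaluate each strategy for one round against the static opponent.
        fitness = []
        for p in pop:
            move = "H" if rng.random() < p else "T"
            r = payoff(move, opponent_move(rng))
            total += r
            played += 1
            fitness.append(r)
        # Selection: keep the better half, refill with mutated copies.
        ranked = [p for _, p in sorted(zip(fitness, pop), reverse=True)]
        elite = ranked[: pop_size // 2]
        pop = elite + [min(1.0, max(0.0, p + rng.gauss(0, 0.1)))
                       for p in elite]
    return total / played

rng = random.Random(0)
print("model-based avg payoff:", model_based_agent(2000, rng))
print("genetic avg payoff:   ", genetic_agent(2000, rng))
```

Plotting each learner's cumulative average payoff against iterations, as the abstract describes, would show the model-based agent locking onto the opponent's bias within a few rounds while the genetic population drifts toward the same best response more slowly.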
School: The University of Alabama in Huntsville
School Location: United States -- Alabama
Source: MAI 48/06M, Masters Abstracts International
Subjects: Artificial intelligence, Computer science
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved