Date on Master's Thesis/Doctoral Dissertation
5-2010
Document Type
Master's Thesis
Degree Name
M. Eng.
Department
Computer Engineering and Computer Science
Committee Chair
Desoky, Ahmed H.
Subject
Computer games--Design; Evaluation--Data processing
Abstract
The game of Snake has been selected to provide a unique application of the TD(λ) algorithm as proposed by Sutton. A reinforcement learning technique for producing computer-controlled players is documented. Using value function approximation with multilayer artificial neural networks and the actor-critic architecture, computer players capable of playing the game of Snake can be created. The adaptation made to the standard neural network backpropagation procedure is also documented. Not only does the proposed technique provide reasonable player performance, but its application is also unique: this approach to Snake has not previously been documented. By performing sets of trials, the performance of the players is evaluated and compared against an existing machine learning technique. Learning curves provide visualization for the results. Though the Snake players are shown to achieve lower scores than with the existing method, the technique produces agents that accumulate scores much more efficiently.
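To illustrate the core update the abstract refers to, the following is a minimal sketch of a TD(λ) critic with eligibility traces. It is not the thesis's implementation: it uses a linear value-function approximator rather than the multilayer neural network described above, and the feature vector, learning rate, discount factor, and trace decay are placeholder assumptions chosen only for illustration.

    import numpy as np

    class TDLambdaCritic:
        """Illustrative TD(lambda) critic with accumulating eligibility traces."""

        def __init__(self, n_features, alpha=0.01, gamma=0.95, lam=0.8):
            self.w = np.zeros(n_features)   # linear value-function weights (stand-in for the network)
            self.e = np.zeros(n_features)   # eligibility trace, one entry per weight
            self.alpha, self.gamma, self.lam = alpha, gamma, lam

        def value(self, features):
            # Estimated value V(s) of a state described by its feature vector
            return float(np.dot(self.w, features))

        def update(self, features, reward, next_features, done):
            # TD error: delta = r + gamma * V(s') - V(s), with V(s') = 0 at terminal states
            target = reward if done else reward + self.gamma * self.value(next_features)
            delta = target - self.value(features)
            # Decay existing traces and add the gradient of V(s) w.r.t. the weights
            self.e = self.gamma * self.lam * self.e + features
            # Move all eligible weights toward the TD target
            self.w += self.alpha * delta * self.e
            if done:
                self.e[:] = 0.0             # reset traces at the end of an episode
            return delta

In an actor-critic arrangement such as the one the abstract describes, the TD error returned by an update like this one would also be used to adjust the actor's action preferences; that coupling, and the backpropagation-based network update, are left out of this sketch.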
Recommended Citation
Lockhart, Christopher, "Application of temporal difference learning to the game of Snake." (2010). Electronic Theses and Dissertations. Paper 848.
https://doi.org/10.18297/etd/848