Entropy-Based Experience Replay in Reinforcement Learning
General Material Designation
[Thesis]
First Statement of Responsibility
Dadvar, Mehdi
Subsequent Statement of Responsibility
Doerschuk, Peggy Israel
PUBLICATION, DISTRIBUTION, ETC.
Name of Publisher, Distributor, etc.
Lamar University - Beaumont
Date of Publication, Distribution, etc.
2020
PHYSICAL DESCRIPTION
Specific Material Designation and Extent of Item
77
DISSERTATION (THESIS) NOTE
Dissertation or thesis details and type of degree
M.Sc.
Body granting the degree
Lamar University - Beaumont
Text preceding or following the note
2020
SUMMARY OR ABSTRACT
Text of Note
Reinforcement learning lies between supervised and unsupervised learning and solves learning problems where no prior experience is available to train an agent. The agent learns from its own experience by trying different actions in different states of its environment and being rewarded for actions that improve its performance. Experience replay is a technique that lets an online reinforcement learning agent remember and reuse its past experiences: training the agent iteratively on small batches drawn from a replay buffer makes learning faster and more accurate. However, how those batches are assembled plays a significant role in the quality and speed of learning. Prior work in this field neglects the diversity of the samples chosen for a batch: experience histories are drawn at random from the replay buffer, which does not guarantee diverse experiences and can place very similar or even redundant experiences in the same batch. In this research, a framework called entropy-based experience replay (EER) is developed to increase the information gain of batched experience by maximizing the entropy among the states in a minibatch. In this way, each minibatch presents diverse experiences to the agent's neural networks, from which faster and more stable learning is expected. EER is applied to Deep Q-Networks (DQN) in the OpenAI Gym environment, where the effect of the presented method is investigated on a classic control task. Extensive simulation results demonstrate that the perceptible increases in the entropy of sampled experiences achieved by EER lead to a significant increase in the learning agent's performance.
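The abstract describes selecting minibatches so that the entropy among their states is maximized rather than sampling the buffer purely at random. The Python sketch below illustrates one way such a sampler could look; it is not the thesis's actual algorithm. The class name EntropyReplayBuffer, the coarse state binning via np.digitize, the greedy candidate-pool selection, and all hyperparameters are assumptions made for illustration only.

```python
import random
from collections import deque

import numpy as np


class EntropyReplayBuffer:
    """Illustrative replay buffer that favors state-diverse minibatches.

    Hypothetical sketch: states are coarsely binned, and candidates are
    greedily kept in the minibatch when they raise the Shannon entropy of
    the batch's state-bin distribution.
    """

    def __init__(self, capacity=10000, n_bins=10):
        self.buffer = deque(maxlen=capacity)
        self.edges = np.linspace(-1.0, 1.0, n_bins)  # assumed state range

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def _bin(self, state):
        # Map a (possibly continuous) state vector to a discrete bin tuple.
        return tuple(np.digitize(np.asarray(state), self.edges))

    def _entropy(self, bins):
        # Shannon entropy of the empirical distribution over state bins.
        _, counts = np.unique(np.asarray(bins), axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log(p + 1e-12)))

    def sample(self, batch_size, candidate_pool=256):
        # Draw a random candidate pool, then greedily keep experiences
        # whose states do not lower the entropy of the growing batch.
        pool = random.sample(self.buffer, min(candidate_pool, len(self.buffer)))
        batch, bins = [pool[0]], [self._bin(pool[0][0])]
        for exp in pool[1:]:
            if len(batch) >= batch_size:
                break
            trial = bins + [self._bin(exp[0])]
            if self._entropy(trial) >= self._entropy(bins):
                batch.append(exp)
                bins = trial
        # Pad with random candidates if the greedy pass fell short.
        while len(batch) < batch_size:
            batch.append(random.choice(pool))
        return batch
```

A DQN training loop would call sample(batch_size) in place of uniform sampling and compute its temporal-difference targets on the returned transitions unchanged; only the selection of the minibatch differs from standard experience replay.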