Entropy-Based Experience Replay in Reinforcement Learning
General material designation
[Thesis]
Name of first author
Dadvar, Mehdi
Names of other authors
Doerschuk, Peggy Israel
Publication, distribution, etc.
Name of publisher, distributor, etc.
Lamar University - Beaumont
Date of publication, distribution, etc.
2020
General note
Text of note
77 p.
Dissertation notes
Thesis details and type of degree
M.Sc.
Degree-granting institution
Lamar University - Beaumont
Year degree granted
2020
Summary or abstract notes
Text of note
Reinforcement learning sits between supervised and unsupervised learning, addressing problems where no prior data is available to train an agent. The agent learns from its own experience by trying different actions in different states of its environment and receiving rewards for actions that improve its performance. Experience replay is a technique that exploits the learning agent's accumulated experience for further improvement in an online manner: it lets an online reinforcement learning agent remember and reuse its past experiences. Training the agent iteratively on small batches of examples drawn from the replay buffer makes learning faster and more accurate. However, the way batching is implemented plays a significant role in the quality and speed of the learning process. Prior work in this field neglects the diversity of the samples chosen for batching: experience histories are sampled uniformly at random from the replay buffer. This approach does not guarantee diverse experiences and can place very similar or even redundant experiences in the same batch. In this research, a framework called entropy-based experience replay (EER) is developed that increases the information gain of the batched experience by maximizing the entropy among the states of a minibatch. In this way, each minibatch presents diverse experiences to the agent's neural networks, from which faster and more stable learning is expected. Entropy-based experience replay is applied to Deep Q-Networks (DQN) in OpenAI's Gym environment, where the effect of the presented method is investigated on a classic control task. Extensive simulation results demonstrate that the perceptible increases in the entropy of sampled experiences achieved by the EER method lead to a significant increase in the learning agent's performance.
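The abstract does not spell out the batching algorithm itself. As a minimal illustrative sketch only, assuming a replay buffer stored as a list of transitions with vector-valued states, one could substitute a greedy farthest-point heuristic for the entropy maximization described above; the function name, buffer layout, and candidate_factor parameter below are assumptions for illustration, not the thesis's actual implementation.

    import random
    import numpy as np

    def diverse_minibatch(buffer, batch_size, candidate_factor=4):
        """Greedy farthest-point selection as a stand-in for EER's
        entropy maximization over the states of a minibatch.

        buffer: list of (state, action, reward, next_state, done)
        tuples, where state is a 1-D numpy array (an assumed layout).
        """
        # Draw a larger candidate pool uniformly at random, then keep
        # a maximally spread-out subset of it.
        pool = random.sample(buffer,
                             min(len(buffer), batch_size * candidate_factor))
        states = np.stack([t[0] for t in pool])

        chosen = [random.randrange(len(pool))]  # seed with one random sample
        while len(chosen) < min(batch_size, len(pool)):
            picked = states[chosen]
            # Distance from each candidate to its nearest chosen state.
            dists = np.linalg.norm(states[:, None, :] - picked[None, :, :],
                                   axis=-1).min(axis=1)
            dists[chosen] = -np.inf          # never re-pick a chosen index
            chosen.append(int(np.argmax(dists)))  # add the farthest candidate
        return [pool[i] for i in chosen]

In a DQN training loop, a call like this would take the place of the usual uniform random.sample(buffer, batch_size) draw; spreading the chosen states apart increases the diversity, and hence the entropy, of the states in each minibatch in the sense the abstract describes.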
Uncontrolled subject terms
Subject term
Artificial intelligence
Subject term
Computer science
Personal name as main entry (first-degree intellectual responsibility)