Distributional Deep Reinforcement Learning with a Mixture of Gaussians

Cited 10 times in Web of Science · Cited 15 times in Scopus
Authors

Choi, Yunho; Lee, Kyungjae; Oh, Songhwai

Issue Date
2019-05
Publisher
IEEE
Citation
2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), pp.9791-9797
Abstract
In this paper, we propose a novel distributional reinforcement learning (RL) method that models the distribution of the sum of rewards using a mixture density network. Recently, it has been shown that modeling the randomness of the return distribution leads to better performance in Atari games and control tasks. Despite its success, the prior work has limitations that stem from its use of a discrete distribution. First, it requires a projection step and a softmax parametrization of the distribution, since it minimizes a KL divergence loss. Second, its performance depends on discretization hyperparameters, such as the number of atoms and the bounds of the support, which require domain knowledge. We mitigate these problems with the proposed parameterization, a mixture of Gaussians. Furthermore, we propose a new distance metric called the Jensen-Tsallis distance, which allows the distance between two mixtures of Gaussians to be computed in closed form. We have conducted various experiments to validate the proposed method, including Atari games and autonomous vehicle driving.
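The closed-form property mentioned in the abstract rests on a standard identity: the integral of the product of two Gaussian densities is itself a Gaussian density evaluated at the difference of the means, so any distance built from pairwise products of mixture components avoids numerical integration. The sketch below (one-dimensional mixtures, illustrative function names, NumPy only) shows a squared L2 distance between two Gaussian mixtures computed this way; it illustrates the mechanism that makes such distances tractable, not the paper's exact Jensen-Tsallis definition.

```python
import numpy as np

def gauss_product_integral(mu1, v1, mu2, v2):
    """Closed-form integral of a product of two 1-D Gaussian densities:
    ∫ N(x; mu1, v1) N(x; mu2, v2) dx = N(mu1; mu2, v1 + v2)."""
    v = v1 + v2
    return np.exp(-(mu1 - mu2) ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

def l2_distance_mog(w, mu, var, u, m, s):
    """Squared L2 distance between two Gaussian mixtures
    p = sum_i w[i] N(mu[i], var[i]) and q = sum_j u[j] N(m[j], s[j]),
    expanded as ||p - q||^2 = <p,p> - 2<p,q> + <q,q>, where every
    inner product is a sum of closed-form pairwise integrals."""
    pp = sum(w[i] * w[j] * gauss_product_integral(mu[i], var[i], mu[j], var[j])
             for i in range(len(w)) for j in range(len(w)))
    qq = sum(u[i] * u[j] * gauss_product_integral(m[i], s[i], m[j], s[j])
             for i in range(len(u)) for j in range(len(u)))
    pq = sum(w[i] * u[j] * gauss_product_integral(mu[i], var[i], m[j], s[j])
             for i in range(len(w)) for j in range(len(u)))
    return pp - 2.0 * pq + qq
```

Because every term is a pairwise component integral, the cost is quadratic in the number of mixture components and fully differentiable, which is what makes such a loss usable for training a mixture density network head.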
ISSN
1050-4729
URI
https://hdl.handle.net/10371/186753
DOI
https://doi.org/10.1109/ICRA.2019.8793505