Publications

Detailed Information

EMI: Exploration with mutual information

Cited 0 times in Web of Science; cited 10 times in Scopus
Authors

Kim, Hyoungseok; Kim, Jaekyeom; Jeong, Yeonwoo; Levine, Sergey; Song, Hyun Oh

Issue Date
2019-06
Publisher
International Machine Learning Society (IMLS)
Citation
36th International Conference on Machine Learning, ICML 2019, Vol.2019-June, pp.5837-5851
Abstract
Reinforcement learning algorithms struggle when the reward signal is very sparse. In these cases, naive random exploration methods essentially rely on a random walk to stumble onto a rewarding state. Recent works utilize intrinsic motivation to guide exploration via generative models, predictive forward models, or discriminative modeling of novelty. We propose EMI, an exploration method that constructs embedding representations of states and actions without relying on generative decoding of the full observation, instead extracting predictive signals that can be used to guide exploration based on forward prediction in the representation space. Our experiments show competitive results on challenging locomotion tasks with continuous control and on image-based exploration tasks with discrete actions on Atari. The source code is available at https://github.com/snu-mllab/EMI.
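As a rough illustration of the idea in the abstract, and not the paper's exact objective, the sketch below computes an exploration bonus as the forward-prediction error of a simple additive model in a learned embedding space. All module names, architectures, and hyperparameters are illustrative assumptions; in EMI the encoders are trained with mutual-information objectives, which are omitted here.

# Minimal sketch (illustrative, not the paper's method): an intrinsic reward
# from forward-prediction error in a learned embedding space. All names and
# hyperparameters are assumptions made for this example.
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    def __init__(self, obs_dim, embed_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))
    def forward(self, obs):
        return self.net(obs)

class ActionEncoder(nn.Module):
    def __init__(self, act_dim, embed_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(act_dim, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))
    def forward(self, act):
        return self.net(act)

def intrinsic_reward(phi, psi, obs, act, next_obs):
    # Prediction error of an additive forward model in embedding space:
    # phi(s) + psi(a) is compared against phi(s'). The squared error serves
    # as an exploration bonus added to the (sparse) environment reward.
    with torch.no_grad():
        pred_next = phi(obs) + psi(act)
        target = phi(next_obs)
        return ((pred_next - target) ** 2).sum(dim=-1)

# Toy usage with random tensors standing in for a batch of transitions.
obs_dim, act_dim, embed_dim, batch = 8, 2, 16, 32
phi, psi = StateEncoder(obs_dim, embed_dim), ActionEncoder(act_dim, embed_dim)
obs = torch.randn(batch, obs_dim)
act = torch.randn(batch, act_dim)
next_obs = torch.randn(batch, obs_dim)
bonus = intrinsic_reward(phi, psi, obs, act, next_obs)  # shape: (batch,)
print(bonus.shape)

In this sketch the bonus is largest for transitions the embedding-space forward model predicts poorly, which is the general mechanism by which forward prediction in a representation space can drive exploration under sparse rewards.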
URI
https://hdl.handle.net/10371/179329
Files in This Item:
There are no files associated with this item.
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
