Publications

Detailed Information

Style-Agnostic Reinforcement Learning

Cited 0 times in Web of Science; cited 0 times in Scopus
Authors

Lee, Juyong; Ahn, Seokjun; Park, Jaesik

Issue Date
2022
Publisher
Springer Verlag
Citation
Lecture Notes in Computer Science, Vol.13699, pp.604-620
Abstract
We present a novel method for learning style-agnostic representations using both style transfer and adversarial learning within the reinforcement learning framework. Style here refers to task-irrelevant details, such as the color of the background in images; generalizing a learned policy across environments with different styles remains a challenge. Focusing on learning style-agnostic representations, our method trains the actor with diverse image styles generated by an inherent adversarial style perturbation generator, which plays a min-max game against the actor, without demanding expert knowledge for data augmentation or additional class labels for adversarial training. We verify that our method achieves competitive or better performance than state-of-the-art approaches on the Procgen and Distracting Control Suite benchmarks, and we further investigate the features extracted by our model, showing that it better captures the invariants and is less distracted by shifted styles. The code is available at https://github.com/POSTECH-CVLab/style-agnostic-RL.
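The min-max game described in the abstract can be illustrated with a deliberately tiny sketch: a "generator" picks a bounded style shift that maximizes the actor's loss, while the "actor" minimizes its loss on the shifted inputs. This is not the paper's architecture; the 1-D data, the weight `w`, the shift `s`, and the bound are all illustrative assumptions standing in for the image-style perturbation generator and policy network.

```python
# Toy alternating min-max optimization, assuming a 1-D regression "actor"
# (single weight w) and an additive bounded "style" shift s picked
# adversarially. All names and values here are illustrative.

data = [(0.5, 1.0), (1.0, 2.0), (1.5, 3.0)]  # (observation, target) pairs

def actor_loss(w, s):
    # mean squared error of the actor on style-shifted observations
    return sum((w * (x + s) - t) ** 2 for x, t in data) / len(data)

w, s = 0.0, 0.0          # actor weight, style shift
lr, bound = 0.05, 0.3    # step size, allowed style-shift magnitude

loss_before = actor_loss(w, s)

for _ in range(300):
    # generator step: gradient ASCENT on the loss w.r.t. the style shift s
    grad_s = sum(2 * (w * (x + s) - t) * w for x, t in data) / len(data)
    s = max(-bound, min(bound, s + lr * grad_s))

    # actor step: gradient DESCENT on the loss w.r.t. w, under the shift
    grad_w = sum(2 * (w * (x + s) - t) * (x + s) for x, t in data) / len(data)
    w = w - lr * grad_w

loss_after = actor_loss(w, s)
```

Even though the generator keeps pushing the shift to whichever bound hurts most, the actor converges to a weight that stays accurate under the worst-case shift, which is the intuition behind training on adversarially styled observations.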
ISSN
0302-9743
URI
https://hdl.handle.net/10371/201289
DOI
https://doi.org/10.1007/978-3-031-19842-7_35
Files in This Item:
There are no files associated with this item.
Appears in Collections:

  • College of Engineering
  • Dept. of Computer Science and Engineering

Related Researcher

Research Area: Computer Graphics, Computer Vision, Machine Learning

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.