Publications

Detailed Information

On Improving the Robustness of Reinforcement Learning-based Controllers using Disturbance Observer

Cited 7 times in Web of Science · Cited 8 times in Scopus
Authors

Kim, Jeong Woo; Shim, Hyungbo; Yang, Insoon

Issue Date
2019-12
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
Proceedings of the IEEE Conference on Decision and Control, Vol. 2019-December, pp. 847-852
Abstract
© 2019 IEEE. Because reinforcement learning (RL) may cause stability and safety issues when applied directly to physical systems, a simulator is often used to learn a control policy. However, the control performance can easily deteriorate on the real plant due to the discrepancy between the simulator and the plant. In this paper, we propose an idea to enhance the robustness of such RL-based controllers by utilizing the disturbance observer (DOB). This method compensates for the mismatch between the plant and the simulator, and rejects disturbances to maintain the nominal performance while guaranteeing robust stability. Furthermore, the proposed approach can be applied to partially observable systems. We also characterize conditions under which the learned controller has a provable performance bound when connected to the physical system.
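The abstract only sketches the mechanism, so the toy script below illustrates the general input-side DOB idea it refers to: an RL policy trained against a nominal model, a lumped-disturbance estimate obtained by comparing the measured plant response with the nominal model, and a low-pass (Q-filter) update that is subtracted from the policy's action. All matrices, the placeholder policy, and the filter coefficient are illustrative assumptions for a minimal sketch, not the paper's actual design.

```python
import numpy as np

# --- Hypothetical nominal model used to train the policy (assumption) ---
A_nom = np.array([[1.0, 0.1],
                  [0.0, 1.0]])          # nominal state matrix
B_nom = np.array([[0.0],
                  [0.1]])               # nominal input matrix

# --- Stand-in for a learned policy; a real one would come from RL training ---
K = np.array([[1.0, 1.5]])              # simple stabilizing state feedback

def rl_policy(x):
    return -(K @ x)                     # placeholder for the simulator-trained policy

# --- "True" plant with model mismatch and an external disturbance ---
A_true = A_nom + np.array([[0.0, 0.02],
                           [0.0, 0.03]])
B_true = B_nom * 1.2

def plant_step(x, u, k):
    d_ext = 0.05 * np.sin(0.2 * k)      # external disturbance
    return A_true @ x + B_true @ (u + d_ext)

# --- Minimal discrete-time disturbance observer (sketch) ---
B_pinv = np.linalg.pinv(B_nom)
alpha = 0.2                             # first-order Q-filter coefficient (bandwidth knob)

x = np.array([[1.0], [0.0]])
d_hat = np.zeros((1, 1))                # estimate of the lumped disturbance

for k in range(200):
    u_rl = rl_policy(x)                 # nominal action from the learned controller
    u = u_rl - d_hat                    # cancel the estimated disturbance at the input
    x_next = plant_step(x, u, k)

    # Lumped disturbance seen through the nominal model:
    # whatever the nominal model cannot explain is attributed to d.
    d_raw = B_pinv @ (x_next - A_nom @ x) - u
    d_hat = (1 - alpha) * d_hat + alpha * d_raw   # low-pass (Q-filter) update

    x = x_next

print("final state:", x.ravel(), "estimated disturbance:", d_hat.ravel())
```

With the DOB loop active, the policy sees dynamics close to the nominal model it was trained on; raising `alpha` widens the filter bandwidth and speeds up compensation at the cost of amplifying measurement noise, which mirrors the robustness-versus-nominal-performance trade-off the paper analyzes.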
ISSN
0191-2216
URI
https://hdl.handle.net/10371/198095