Safe Reinforcement Learning via Observation Shielding

dc.contributor.author: Mccalmon, Joe
dc.contributor.author: Liu, Tongtong
dc.contributor.author: Goldsmith, Reid
dc.contributor.author: Cyhaniuk, Andrew
dc.contributor.author: Halabi, Talal
dc.contributor.author: Alqahtani, Sarra
dc.date.accessioned: 2022-12-27T19:22:57Z
dc.date.available: 2022-12-27T19:22:57Z
dc.date.issued: 2023-01-03
dc.description.abstract: Reinforcement Learning (RL) algorithms have shown success in scaling up to large problems. However, deploying these algorithms in real-world applications remains challenging due to their vulnerability to adversarial perturbations. Existing RL robustness methods against adversarial attacks are weak against large perturbations, a scenario that cannot be ruled out for RL adversarial threats, just as it cannot for deep neural networks in classification tasks. This paper proposes a method called observation-shielding RL (OSRL) to increase the robustness of RL against large perturbations using predictive models and threat detection. Instead of changing the RL algorithms with robustness regularization or retraining them with adversarial perturbations, we depart considerably from previous approaches and develop an add-on safety feature for existing RL algorithms at runtime. OSRL builds on the idea of model predictive shielding, where an observation predictive model overrides perturbed observations as needed to ensure safety. Extensive experiments on MuJoCo environments (Ant, Hopper) and the classical pendulum environment demonstrate that our proposed OSRL is safer and more efficient than state-of-the-art robustness methods under large perturbations.
dc.format.extent: 10
dc.identifier.doi: 10.24251/HICSS.2023.799
dc.identifier.isbn: 978-0-9981331-6-4
dc.identifier.other: 63533de5-8b7f-47c4-ac6c-c2d3d4dde1ab
dc.identifier.uri: https://hdl.handle.net/10125/103433
dc.language.iso: eng
dc.relation.ispartof: Proceedings of the 56th Hawaii International Conference on System Sciences
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Cyber Operations, Defense, and Forensics
dc.subject: adversarial examples
dc.subject: reinforcement learning
dc.subject: robustness
dc.subject: safety
dc.subject: shielding
dc.title: Safe Reinforcement Learning via Observation Shielding
dc.type.dcmi: text
prism.startingpage: 6603
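
Note: the shielding loop described in the abstract above can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the authors' implementation: ObservationPredictor, detect_perturbation, policy, and env are hypothetical placeholders standing in for the paper's observation predictive model, threat detector, trained RL policy, and environment.

import numpy as np

class ObservationPredictor:
    """Toy one-step predictor: a persistence model that predicts the
    previous clean observation. The paper's predictive model would
    instead be learned from the environment dynamics."""
    def __init__(self, obs_dim):
        self.last_clean_obs = np.zeros(obs_dim)

    def update(self, obs):
        self.last_clean_obs = obs.copy()

    def predict(self):
        return self.last_clean_obs

def detect_perturbation(obs, predicted, threshold=1.0):
    # Hypothetical threat detector: flag the observation as perturbed
    # when it deviates too far from the model's prediction.
    return np.linalg.norm(obs - predicted) > threshold

def shielded_step(env, policy, predictor, obs):
    predicted = predictor.predict()
    if detect_perturbation(obs, predicted):
        obs = predicted           # override the suspect observation
    else:
        predictor.update(obs)     # trust and track the clean observation
    action = policy(obs)          # the unmodified RL policy acts on the
                                  # shielded observation
    return env.step(action)

The sketch leaves the RL policy untouched, consistent with the abstract's framing of OSRL as a runtime add-on rather than a retrained or regularized policy.
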

Files

Original bundle
Name: 0643.pdf
Size: 1.68 MB
Format: Adobe Portable Document Format