Abstract
Reinforcement learning is a prominent machine learning technique used to optimize an agent's performance in potentially unknown environments.
Despite its popularity and success, reinforcement learning lacks safety guarantees, both during the learning phase and during deployment.
This paper reviews a runtime enforcement method called shielding that ensures provable safety for reinforcement learning.
We describe the underlying models, the types of guarantees that can be delivered, and the process of computing shields. Furthermore, we describe several techniques for integrating shields into reinforcement learning, discuss the advantages and potential drawbacks of this integration, and highlight the current challenges in shielded learning.
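To make the core idea concrete, the sketch below illustrates one common integration pattern, often called post-shielding, in which a shield sits between the agent and the environment and overrides proposed actions that a precomputed safety analysis marks as unsafe. This is a minimal illustrative sketch, not the paper's method; all names (`SAFE_ACTIONS`, `shield`, `shielded_step`, `agent`, `env`) are hypothetical.

```python
# Minimal sketch of post-shielding (hypothetical names, not from the paper):
# the shield filters the agent's proposed action before it reaches the
# environment and substitutes a provably safe fallback if needed.

import random

# Hypothetical output of an offline safety computation: for each abstract
# state, the set of actions that cannot lead to a safety violation.
SAFE_ACTIONS = {
    "near_obstacle": {"brake", "turn_left"},
    "clear_road":    {"accelerate", "brake", "turn_left", "turn_right"},
}

def shield(state: str, proposed_action: str) -> str:
    """Return the proposed action if it is safe, otherwise a safe fallback."""
    allowed = SAFE_ACTIONS.get(state, set())
    if proposed_action in allowed:
        return proposed_action
    # Override: choose any action the analysis permits in this state
    # (assumes the safe-action set is non-empty).
    return random.choice(sorted(allowed))

def shielded_step(agent, env, state):
    """One interaction step in which the agent only ever executes shielded actions."""
    proposed = agent.select_action(state)
    executed = shield(state, proposed)
    next_state, reward, done = env.step(executed)
    # The agent learns from the action that was actually executed.
    agent.observe(state, executed, reward, next_state, done)
    return next_state, done
```

In this pattern the shield guarantees safety at every step, while the learner is free to optimize reward within the safe action set; whether the agent learns from the proposed or the executed action is one of the design choices discussed in the paper.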
| Original language | English |
|---|---|
| Journal | Communications of the ACM |
| Publication status | Accepted/In press - 2025 |