Safe reinforcement learning for multi-energy management systems with known constraint functions
Abstract
Reinforcement learning (RL) is a promising optimal control technique for multi-energy management systems. It does not require a model a priori, reducing the upfront and ongoing project-specific engineering effort, and it is capable of learning better representations of the underlying system dynamics. However, vanilla RL does not provide constraint satisfaction guarantees, resulting in various potentially unsafe interactions within its environment. In this paper, we present two novel online model-free safe RL methods, namely SafeFallback and GiveSafe, in which the safety constraint formulation is decoupled from the RL formulation. These provide hard-constraint satisfaction guarantees both during training and during deployment of the (near-)optimal policy, without the need to solve a mathematical program, resulting in lower computational requirements and more flexible constraint function formulations. In a simulated multi-energy systems case study we have shown that both methods start with a significantly higher utility than a vanilla RL benchmark and an OptLayer benchmark (94.6% and 82.8% compared to 35.5% and 77.8%) and that the proposed SafeFallback method can even outperform the vanilla RL benchmark (102.9% vs. 100%). We conclude that both methods are viable safety-constraint-handling techniques applicable beyond RL, as demonstrated with random policies, while still providing hard-constraint guarantees.
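The core idea of decoupling a known constraint function from the RL policy can be illustrated with a minimal sketch. The constraint function `is_safe` and the fallback value below are hypothetical stand-ins, not the paper's actual formulation: any proposed action that violates the known constraint is replaced by a known-safe fallback action, so hard-constraint satisfaction holds regardless of what the (possibly untrained or even random) policy proposes.

```python
def is_safe(state, action, power_limit=5.0):
    # Hypothetical known constraint: the action and the resulting state
    # must both stay within fixed bounds (stand-in for a real
    # multi-energy system constraint such as a grid power limit).
    return abs(action) <= power_limit and abs(state + action) <= 10.0

def safe_fallback_policy(state, proposed_action, fallback_action=0.0):
    # Return the policy's proposed action if it satisfies the known
    # constraint; otherwise fall back to a known-safe action.
    # No mathematical program is solved, only the constraint is evaluated.
    if is_safe(state, proposed_action):
        return proposed_action
    return fallback_action

# An unsafe proposal (7.0 exceeds the power limit) is overridden,
# while a safe proposal passes through unchanged.
print(safe_fallback_policy(2.0, 7.0))  # 0.0
print(safe_fallback_policy(2.0, 3.0))  # 3.0
```

Because the check wraps any action source, the same mechanism applies beyond RL, e.g. to a purely random exploration policy, while the constraint guarantee is preserved.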
- Publication:
- Energy and AI
- Pub Date:
- April 2023
- DOI:
- 10.1016/j.egyai.2022.100227
- arXiv:
- arXiv:2207.03830
- Bibcode:
- 2023EneAI..1200227C
- Keywords:
- Reinforcement learning;
- Constraints;
- Multi-energy systems;
- Energy management system;
- Electrical Engineering and Systems Science - Systems and Control;
- Computer Science - Artificial Intelligence;
- Computer Science - Machine Learning;
- Mathematics - Optimization and Control
- E-Print:
- 26 pages, 14 figures