A Reliability-aware Multi-armed Bandit Approach to Learn and Select Users in Demand Response
Abstract
One challenge in the optimization and control of societal systems is handling unknown and uncertain user behavior. This paper focuses on residential demand response (DR) and proposes a closed-loop learning scheme to address this challenge. In particular, we consider DR programs where an aggregator calls upon residential users to change their demand so that the total load adjustment is close to a target value. To learn and select the right users, we formulate the DR problem as a combinatorial multi-armed bandit (CMAB) problem with a reliability objective. We propose a learning algorithm, CUCB-Avg (Combinatorial Upper Confidence Bound-Average), which utilizes both upper confidence bounds and sample averages to balance the tradeoff between exploration (learning) and exploitation (selecting). We consider both a fixed time-invariant target and time-varying targets, and show that CUCB-Avg achieves $O(\log T)$ and $O(\sqrt{T \log T})$ regret, respectively. Finally, we numerically test our algorithms using synthetic and real data, and demonstrate that CUCB-Avg significantly outperforms the classic CUCB and also outperforms Thompson Sampling.
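To illustrate the exploration-exploitation idea summarized in the abstract, the sketch below ranks users by an upper confidence bound and then accumulates their sample-average estimates until a target load adjustment is covered. This is a minimal, hypothetical rendering of the high-level description only; the function name, the confidence-bonus form, and the stopping rule are assumptions, not the paper's exact CUCB-Avg specification.

```python
import numpy as np

def ucb_average_select(sample_means, counts, t, target, alpha=1.5):
    """Illustrative user-selection step (assumed form, not the paper's exact rule).

    sample_means: empirical mean load adjustment of each user
    counts:       number of times each user has been selected so far
    t:            current round (used in the confidence bonus)
    target:       desired total load adjustment
    """
    # Optimistic index: sample average plus a confidence bonus that shrinks
    # as a user is observed more often.
    bonus = np.sqrt(alpha * np.log(t) / np.maximum(counts, 1))
    ucb = sample_means + bonus

    # Rank users by UCB (exploration), then add users until the accumulated
    # sample-average adjustment (exploitation) reaches the target.
    order = np.argsort(-ucb)
    selected, est_total = [], 0.0
    for i in order:
        if est_total >= target:
            break
        selected.append(int(i))
        est_total += sample_means[i]
    return selected

# Example: 5 users with a few observations each, target adjustment of 1.2 (arbitrary units).
means = np.array([0.5, 0.3, 0.8, 0.2, 0.6])
counts = np.array([3, 5, 2, 4, 1])
print(ucb_average_select(means, counts, t=15, target=1.2))
```

In this sketch, the UCB ranking encourages trying under-observed users, while the sample-average stopping rule keeps the selected set's estimated total close to the target, mirroring the tradeoff the abstract attributes to CUCB-Avg.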
- Publication: arXiv e-prints
- Pub Date: March 2020
- arXiv: arXiv:2003.09505
- Bibcode: 2020arXiv200309505L
- Keywords: Electrical Engineering and Systems Science - Systems and Control