Distributed Edge Caching via Reinforcement Learning in Fog Radio Access Networks
Abstract
In this paper, the distributed edge caching problem in fog radio access networks (F-RANs) is investigated. Considering the unknown spatio-temporal content popularity and user preference, a user request model based on a hidden Markov process is proposed to characterize the fluctuating spatio-temporal traffic demands in F-RANs. Then, a Q-learning method based on the reinforcement learning (RL) framework is put forth to seek the optimal caching policy in a distributed manner, which enables fog access points (F-APs) to learn and track the underlying dynamic process without extra communication cost. Furthermore, we propose a more efficient Q-learning method with value function approximation (Q-VFA-learning) to reduce complexity and accelerate convergence. Simulation results show that the performance of our proposed method is superior to that of traditional methods.
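To make the caching formulation concrete, the following is a minimal, illustrative sketch of tabular Q-learning applied to a toy single-slot caching problem. The state is the item currently cached, the action is the item to cache next, and the reward is a cache hit on the next user request. The popularity distribution, learning rates, and the single-slot simplification are all assumptions for illustration; they are not the paper's exact F-RAN model or the Q-VFA-learning variant.

```python
import random

def train_q_caching(num_items=3, popularity=(0.7, 0.2, 0.1),
                    alpha=0.1, gamma=0.9, epsilon=0.1,
                    steps=20000, seed=0):
    """Tabular Q-learning for a toy single-slot cache (illustrative only).

    State  s: index of the item currently cached.
    Action a: index of the item to cache for the next slot.
    Reward r: 1 if the next user request hits the cache, else 0.
    The request popularity is unknown to the learner and is only
    observed through the hit/miss rewards.
    """
    rng = random.Random(seed)
    Q = [[0.0] * num_items for _ in range(num_items)]
    s = 0
    for _ in range(steps):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.randrange(num_items)
        else:
            a = max(range(num_items), key=lambda x: Q[s][x])
        # a user request arrives according to the (hidden) popularity
        req = rng.choices(range(num_items), weights=popularity)[0]
        r = 1.0 if req == a else 0.0
        s_next = a  # the cached item becomes the new state
        # standard Q-learning update
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next
    return Q
```

After training, the greedy policy (the argmax of each row of Q) tends to cache the most popular item, which is the intuition behind letting each F-AP learn its local caching policy from observed requests alone.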
- Publication:
- arXiv e-prints
- Pub Date:
- February 2019
- DOI:
- 10.48550/arXiv.1902.10574
- arXiv:
- arXiv:1902.10574
- Bibcode:
- 2019arXiv190210574L
- Keywords:
- Computer Science - Machine Learning;
- Computer Science - Networking and Internet Architecture;
- Statistics - Machine Learning
- E-Print:
- 6 pages, 6 figures, this work has been accepted by IEEE VTC 2019 Spring