Deep Reinforcement Learning for Minimizing Age-of-Information in UAV-assisted Networks
Abstract
Unmanned aerial vehicles (UAVs) are expected to be a key component of next-generation wireless systems. Due to their deployment flexibility, UAVs are considered an efficient solution for collecting data from ground nodes and transmitting it wirelessly to the network. In this paper, a UAV-assisted wireless network is studied, in which energy-constrained ground nodes are deployed to observe different physical processes. In this network, a UAV, whose operation time is constrained by its limited battery, moves towards the ground nodes to receive status update packets about their observed processes. The flight trajectory of the UAV and the scheduling of status update packets are jointly optimized with the objective of minimizing the weighted sum of the age-of-information (AoI) values of the different processes at the UAV, referred to as the weighted sum-AoI. The problem is modeled as a finite-horizon Markov decision process (MDP) with finite state and action spaces. Since the state space is extremely large, a deep reinforcement learning (RL) algorithm is proposed to obtain the optimal policy that minimizes the weighted sum-AoI, referred to as the age-optimal policy. Several simulation scenarios are considered to showcase the convergence of the proposed deep RL algorithm. Moreover, the results demonstrate that the proposed deep RL approach can significantly improve the achievable sum-AoI per process compared to baseline policies, such as the distance-based and random-walk policies. The impact of various system design parameters on the optimal achievable sum-AoI per process is also shown through extensive simulations.
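To make the weighted sum-AoI objective concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of discrete-time AoI dynamics: in each slot every process's age grows by one, and receiving a status update for a process resets its age to one. The function name, weights, and example schedule are hypothetical.

```python
# Illustrative sketch of age-of-information (AoI) dynamics under a given
# update schedule. This is NOT the paper's deep RL method; it only shows
# how the weighted sum-AoI metric is computed for a fixed schedule.

def weighted_sum_aoi(num_processes, schedule, weights):
    """Return the time-averaged weighted sum-AoI over the horizon.

    schedule[t] is the index of the process whose update the UAV
    receives in slot t (or None if no update is received).
    """
    aoi = [1] * num_processes  # initial age of each process
    total = 0.0
    for served in schedule:
        # Ages grow by one each slot; a received update resets its age to 1.
        aoi = [1 if i == served else a + 1 for i, a in enumerate(aoi)]
        total += sum(w * a for w, a in zip(weights, aoi))
    return total / len(schedule)

# Example: two equally weighted processes, round-robin updates over 4 slots.
avg = weighted_sum_aoi(2, [0, 1, 0, 1], [0.5, 0.5])
```

A scheduling/trajectory policy, such as the age-optimal policy learned in the paper, would effectively be choosing `schedule` (coupled with the UAV's motion) to drive this average down.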
Publication: arXiv e-prints
Pub Date: May 2019
arXiv: arXiv:1905.02993
Bibcode: 2019arXiv190502993A
Keywords: Computer Science - Information Theory; Computer Science - Networking and Internet Architecture
E-Print: This paper will be presented at IEEE Globecom, Waikoloa, HI, Dec. 2019.