Switch-On/Off Policies for Energy Harvesting Small Cells through Distributed Q-Learning
- 1. Centre Tecnològic de Telecomunicacions de Catalunya (CTTC)
- 2. DEI, University of Padova
Description
The massive deployment of small cells (SCs) is one of the most promising solutions adopted by 5G cellular networks to meet the foreseen huge traffic demand. The resulting high number of network elements, however, entails a significant increase in energy consumption. Powering the SCs with renewable energy can reduce both the environmental impact of mobile networks and operators' electricity bills. In this paper, we consider a two-tier cellular network architecture where SCs offload traffic from macro base stations and rely solely on energy harvesting and storage. To deal with the erratic nature of the energy arrival process, we exploit an ON/OFF switching algorithm, based on reinforcement learning, that autonomously learns the energy income and traffic demand patterns. The algorithm uses distributed multi-agent Q-learning to jointly optimize the system performance and the self-sustainability of the SCs. We analyze the algorithm by assessing its convergence time, characterizing the obtained ON/OFF policies, and evaluating an offline-trained variant.
Simulation results demonstrate that our solution increases the energy efficiency of the system with respect to simpler approaches. Moreover, the proposed method provides a harvested energy surplus, which mobile operators can use to offer ancillary services to the smart electricity grid.
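To make the learning mechanism concrete, the sketch below shows what one SC agent's Q-learning loop could look like. It is a minimal illustration, not the paper's exact formulation: the state encoding (discretized battery level and traffic load bins), the placeholder reward, and the hyperparameter values are all assumptions made for the example.

```python
import random
from collections import defaultdict

# Each small cell (SC) runs its own agent; actions toggle the SC on or off.
ACTIONS = ("ON", "OFF")

class SmallCellAgent:
    """One SC's tabular Q-learning agent (illustrative sketch)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        # Q-table mapping (state, action) -> estimated long-term reward
        self.q = defaultdict(float)

    def choose_action(self, state):
        # Epsilon-greedy selection over the two switching actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Illustrative use for one decision epoch. The state tuple and reward
# values are placeholders; in the paper's setting the reward would trade
# off served traffic against battery depletion.
agent = SmallCellAgent()
state = (3, 1)                        # e.g. (battery-level bin, traffic-load bin)
action = agent.choose_action(state)
reward = 1.0 if action == "ON" else -0.1
agent.update(state, action, reward, next_state=(2, 1))
```

Because each SC learns independently from its locally observed battery level and traffic, the scheme remains distributed: no central controller or inter-cell Q-table exchange is required for the agents to converge to their own ON/OFF policies.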
Files
- Switch-On_Off Policies for Energy Harvesting.pdf (149.0 kB, md5:fdfcfe171a50d4e989b0b32b3ba7b21a)