Reinforcement Learning for Delay-Constrained Energy-Aware Small Cells with Multi-Sleeping Control
Creators
- 1. Centre Tecnològic de Telecomunicacions de Catalunya (CTTC)
- 2. Orange Labs
- 3. IMT Atlantique, IRISA, UMR CNRS
Description
In 5G networks, specific requirements are defined on the periodicity of Synchronization Signaling (SS) bursts, which imposes a constraint on the maximum period during which a Base Station (BS) can be deactivated. At the same time, BS densification is expected in the 5G architecture, causing a drastic increase in network energy consumption along with complex interference management. In this paper, we study the Energy-Delay Tradeoff (EDT) problem in a Heterogeneous Network (HetNet) where small cells can switch to different sleep-mode levels to save energy while maintaining good Quality of Service (QoS). We propose a distributed Q-learning controller for small cells that adapts cell activity while taking into account the co-channel interference between cells. Our numerical results show that the multi-level sleep scheme outperforms the binary sleep scheme, achieving energy savings of up to 80% when users are delay tolerant, while respecting the periodicity of SS bursts in 5G.
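To illustrate the kind of controller the abstract describes, the following is a minimal tabular Q-learning sketch for sleep-mode selection. All specifics here are assumptions for illustration, not values from the paper: the power and wake-up-delay figures per sleep level, the two-bucket traffic model, the reward weights, and the learning hyperparameters are invented, and the toy environment ignores co-channel interference entirely.

```python
import random

random.seed(0)

# Assumed sleep levels: deeper sleep saves more power but wakes up more slowly.
SLEEP_LEVELS = ["active", "SM1", "SM2", "SM3"]
POWER        = {"active": 1.0, "SM1": 0.5, "SM2": 0.2, "SM3": 0.05}   # illustrative units
WAKEUP_DELAY = {"active": 0.0, "SM1": 0.15, "SM2": 0.5, "SM3": 1.0}   # illustrative units
LOAD_STATES  = ["low", "high"]          # coarse traffic-load buckets (assumed state space)
ALPHA, GAMMA, EPSILON, DELAY_WEIGHT = 0.1, 0.9, 0.1, 5.0

# Q-table: one value per (load state, sleep level) pair.
Q = {s: {a: 0.0 for a in SLEEP_LEVELS} for s in LOAD_STATES}

def reward(load, action):
    """Energy-delay tradeoff: always pay the energy cost; pay the
    wake-up delay penalty only when traffic is present."""
    delay_cost = DELAY_WEIGHT * WAKEUP_DELAY[action] if load == "high" else 0.0
    return -(POWER[action] + delay_cost)

def step_load():
    """Toy traffic model: load changes independently of the chosen action."""
    return random.choice(LOAD_STATES)

state = "low"
for _ in range(20000):
    # Epsilon-greedy action selection over the sleep levels.
    if random.random() < EPSILON:
        action = random.choice(SLEEP_LEVELS)
    else:
        action = max(Q[state], key=Q[state].get)
    r = reward(state, action)
    nxt = step_load()
    # Standard Q-learning update.
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[nxt].values()) - Q[state][action])
    state = nxt

# Learned policy: stay active under high load, sleep deeply under low load.
policy = {s: max(Q[s], key=Q[s].get) for s in LOAD_STATES}
print(policy)
```

Under these assumed costs, the learned policy keeps the cell active when load is high and drops it into the deepest sleep level when load is low; a delay-tolerant setting (smaller `DELAY_WEIGHT`) shifts the policy toward deeper sleep, which mirrors the tradeoff the paper studies.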
Files
- Reinforcement Learning for Delay.pdf (309.0 kB, md5:d164844a65cede5b82f3e27cfd64e11c)