The Cost of Learning: Efficiency vs. Efficacy of Learning-Based RRM for 6G
Description
In the past few years, Deep Reinforcement Learning (DRL) has become a valuable tool for automatically learning efficient resource management strategies in complex networks. In many scenarios, the learning task is performed in the Cloud, while experience samples are generated directly by edge nodes or users. The learning task therefore involves some data exchange, which subtracts transmission resources from the rest of the system. This creates friction between two competing needs: speeding up convergence to an effective strategy, which requires allocating resources to transmit learning samples, and maximizing the resources available for data plane communication, and hence users' Quality of Service (QoS), which requires keeping the learning overhead to a minimum. In this paper, we investigate this trade-off and propose a dynamic balancing strategy between the learning and data planes, which allows the centralized learning agent to converge quickly to an efficient resource allocation strategy while minimizing the impact on QoS. Simulation results show that the proposed method outperforms static allocation schemes, converging to the optimal policy (i.e., maximum efficacy and minimum overhead of the learning plane) in the long run.
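The trade-off can be illustrated with a toy simulation. The sketch below is a minimal illustration, not the paper's actual model: the convergence threshold (SAMPLES_TO_CONVERGE), the QoS model (a converged policy assumed to use data-plane resources twice as efficiently), and the specific static and dynamic split rules are all hypothetical assumptions chosen only to make the abstract's point concrete.

```python
# Toy illustration of the learning-plane vs. data-plane trade-off.
# All names and dynamics below are hypothetical assumptions for this
# sketch; they do not reproduce the paper's system model or results.

SLOTS = 200                 # simulation horizon (time slots)
RESOURCES = 100             # transmission resources available per slot
SAMPLE_COST = 1             # resources to upload one experience sample
SAMPLES_TO_CONVERGE = 1500  # assumed samples the cloud agent needs

def run(alpha_fn):
    """Simulate giving a fraction alpha of each slot's resources to the
    learning plane; the remainder serves user traffic (data plane)."""
    delivered, qos = 0, 0.0
    for _ in range(SLOTS):
        progress = min(1.0, delivered / SAMPLES_TO_CONVERGE)
        learn_res = int(alpha_fn(progress) * RESOURCES)
        delivered += learn_res // SAMPLE_COST
        # Assumption: QoS grows with data-plane resources, and a fully
        # converged policy uses those resources 2x more efficiently.
        qos += (RESOURCES - learn_res) * (1.0 + progress)
    return qos

# Static split: a fixed 20% of resources to the learning plane forever.
static_qos = run(lambda p: 0.2)
# Dynamic split: invest heavily early, then taper off as the policy
# converges, so the long-run learning-plane overhead goes to zero.
dynamic_qos = run(lambda p: 0.5 * (1.0 - p))

print(f"static  split cumulative QoS: {static_qos:,.0f}")
print(f"dynamic split cumulative QoS: {dynamic_qos:,.0f}")
```

Under these assumed dynamics, the static split keeps paying the learning-plane overhead long after convergence, while the dynamic split reclaims those resources for the data plane, yielding higher cumulative QoS over the horizon.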
Files
m67408-lahmer paper.pdf (749.2 kB, md5:45fb8664f0b1610bb31c9d3ea90bed94)
Additional details
Related works
- Is identical to: Conference paper, DOI 10.48550/arXiv.2211.16915