Energy-Efficient Task Offloading in Edge Computing: A Survey of Deep Reinforcement Learning Approaches
Authors/Creators
Description
In edge computing, task offloading transfers computational tasks from the “far-edge” (end-user devices or less powerful edge devices) to the “near-edge” (more capable edge servers) or to the “core” cloud infrastructure. This practice optimizes performance, reduces latency, and enhances overall efficiency. Energy efficiency in particular has recently become a high-priority criterion for task offloading. A prominent technique for making offloading decisions in edge computing environments is Deep Reinforcement Learning (DRL), known for its ability to adapt to complex environments and to excel in multi-objective optimization in terms of both decision quality and speed. This paper examines DRL approaches in detail, providing an overview of recent research developments in the field. To structure the literature analysis, we classify DRL approaches for energy-efficient task offloading along two “computing continua”: the far/near-edge continuum and the (far-)edge-cloud continuum.
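To make the reinforcement-learning framing concrete, the sketch below shows a tabular Q-learning agent choosing among the three offloading targets named above (local/far-edge, near-edge, cloud) to minimize a combined energy-and-latency cost. This is a hypothetical toy, not an approach from the surveyed papers: the states, actions, and cost values are invented for illustration, and a DRL method would replace the Q-table with a neural network approximator.

```python
import random

# Offloading targets and a toy state space (far-edge device load level).
# All values below are invented for illustration only.
ACTIONS = ["local", "near_edge", "cloud"]
STATES = ["light_load", "heavy_load"]

# Hypothetical per-(state, action) cost: weighted energy + latency (lower is better).
COST = {
    ("light_load", "local"): 1.0,
    ("light_load", "near_edge"): 2.0,
    ("light_load", "cloud"): 3.0,
    ("heavy_load", "local"): 5.0,
    ("heavy_load", "near_edge"): 2.5,
    ("heavy_load", "cloud"): 3.0,
}

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning; reward is the negative offloading cost."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        # Epsilon-greedy action selection over offloading targets.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = -COST[(s, a)]
        s2 = rng.choice(STATES)  # next task arrives under a random load
        best_next = max(q[(s2, x)] for x in ACTIONS)
        # Standard Q-learning update rule.
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # lightly loaded device computes locally; heavily loaded one offloads
```

Under these invented costs, the learned policy keeps tasks local when the device is lightly loaded and offloads to the near-edge when it is heavily loaded, mirroring the energy/latency trade-off the survey discusses.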
Files
| Name | Size |
|---|---|
| Energy_Efficient_DLR_Cloud_Edge_GECON24.pdf (md5:a76b7b7d1414eeb5d2fc156478b20ce2) | 270.4 kB |