Published August 27, 2025 | Version v1
Conference paper · Open Access

MARC-6G: Multi-Agent Reinforcement Learning for Distributed Context-Aware SFC Deployment and Migration in 6G Networks

  • 1. University of Bern
  • 2. University of Luxembourg

Description

The Cloud Continuum Framework (CCF) extends computing capabilities across near-edge, far-edge, and extreme-edge nodes beyond the traditional edge to meet the diverse performance demands of emerging 6G applications. While Deep Reinforcement Learning (DRL) has demonstrated potential in automating Virtual Network Function (VNF) migration by learning optimal policies, centralized DRL-based orchestration faces challenges related to scalability and limited visibility in distributed, heterogeneous network environments. To address these limitations, we introduce MARC-6G (Multi-Agent Reinforcement Learning for Distributed Context-Aware Service Function Chain (SFC) Deployment and Migration in 6G Networks), a novel framework that leverages decentralized agents for distributed, dynamic, and service-aware SFC placement and migration. MARC-6G allows agents to monitor different portions of the network, collaboratively optimize network control policies via experience sharing, and make local decisions that collectively enhance global orchestration under time-varying traffic conditions. We show through simulations that MARC-6G improves SFC deployment efficiency, reduces migration costs by 34%, and lowers energy consumption by 12.5% compared to the state-of-the-art centralized DRL baseline.
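The decentralized scheme the abstract describes — each agent observing its own network partition, learning locally, and improving collectively through experience sharing — can be illustrated with a minimal sketch. This is not the paper's algorithm: the class names, the tabular Q-learning update, and the placement reward below are all illustrative assumptions standing in for whatever state, action, and policy representations MARC-6G actually uses.

```python
import random
from collections import defaultdict

class SFCAgent:
    """Illustrative tabular Q-learning agent for one network partition.
    Hypothetical stand-in for a MARC-6G agent; the real framework's
    state/action spaces and learning algorithm are not reproduced here."""
    def __init__(self, nodes, alpha=0.1, gamma=0.9, eps=0.2):
        self.nodes = nodes              # candidate placement nodes in this partition
        self.q = defaultdict(float)     # Q[(state, node)] -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # epsilon-greedy choice of a node to place/migrate a VNF onto
        if random.random() < self.eps:
            return random.choice(self.nodes)
        return max(self.nodes, key=lambda n: self.q[(state, n)])

    def learn(self, state, node, reward, next_state):
        # one-step Q-learning update from a (possibly shared) transition
        best_next = max(self.q[(next_state, n)] for n in self.nodes)
        td = reward + self.gamma * best_next - self.q[(state, node)]
        self.q[(state, node)] += self.alpha * td

def share_experience(agents, batch):
    # Peers replay each other's transitions for nodes they also manage,
    # mimicking the collaborative policy optimization described above.
    for agent in agents:
        for (s, a, r, s2) in batch:
            if a in agent.nodes:
                agent.learn(s, a, r, s2)
```

A toy run: two agents with overlapping partitions collect transitions in which placing on `edge-1` yields the highest reward; after a round of experience sharing, both agents' Q-values favor that node, even though only one agent gathered the data.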

Files

Manuscript_MAC-6G.pdf

1.7 MB · md5:805410db15e4e035c64b9c144f6140cd

Additional details

Related works

Is identical to
Conference paper: 10.48620/91253 (DOI)

Dates

Accepted
2025-08-27