Published January 22, 2025 | Version v1
Conference paper | Open Access

DUNE: Distributed Inference in the User Plane

  • 1. IMDEA Networks
  • 2. NEC Laboratories Europe

Description

The deployment of Machine Learning (ML) models in the user plane enables line-rate in-network inference, significantly reducing latency and improving the scalability of functions such as traffic monitoring. Yet, integrating ML models into programmable network devices requires meeting stringent constraints on memory resources and computing capabilities. Previous solutions have focused on implementing monolithic ML models within individual programmable network devices, which are limited by hardware constraints, particularly for challenging classification use cases. In this paper, we propose DUNE, a novel framework that realizes, for the first time, user plane inference distributed across the multiple devices that compose a programmable network. DUNE adopts fully automated approaches to (i) breaking large ML models into simpler sub-models that preserve inference accuracy while minimizing resource usage, and (ii) designing the sub-models and their sequencing so as to enable efficient distributed execution of joint packet- and flow-level inference. We implement DUNE using P4, deploy it in an experimental network with multiple industry-grade programmable switches, and run tests with real-world traffic measurements for two complex classification use cases. Our results demonstrate that DUNE not only reduces per-switch resource utilization with respect to legacy monolithic ML designs but also improves inference accuracy by up to 7.5%.
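The core idea of splitting a monolithic model into sequenced sub-models can be illustrated with a minimal sketch. The sketch below is purely hypothetical (it is not DUNE's actual decomposition algorithm, and all feature names and thresholds are invented): a small decision tree over two flow features is split so that a first switch evaluates the root split and tags the packet, and a second switch holds only the two leaf sub-trees needed to finish the classification, mirroring how per-device memory can be reduced.

```python
# Illustrative sketch only (hypothetical features/thresholds, not DUNE's
# published algorithm): distributing one decision tree across two devices.

# Monolithic tree: classify a packet by (pkt_len, iat) into classes 0-3.
def monolithic(pkt_len, iat):
    if pkt_len <= 100:
        return 0 if iat <= 5 else 1
    return 2 if iat <= 5 else 3

# Switch 1 stores only the root split; its verdict would travel to the
# next hop as a tag in a custom packet header.
def switch1(pkt_len):
    return 0 if pkt_len <= 100 else 1

# Switch 2 stores only the two leaf sub-models and uses the tag to pick
# which one completes the inference.
def switch2(tag, iat):
    if tag == 0:
        return 0 if iat <= 5 else 1
    return 2 if iat <= 5 else 3

# The sequenced sub-models reproduce the monolithic decision exactly,
# while each device holds a strictly smaller model.
def distributed(pkt_len, iat):
    return switch2(switch1(pkt_len), iat)
```

In an actual P4 deployment, each sub-model would compile to match-action tables on its own switch, with the intermediate verdict carried between hops in packet metadata; the decomposition here is chosen so the split inference is lossless with respect to the original tree.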

Files

DUNE_INFOCOM25_DSpace.pdf (1.2 MB)
md5:1b276eff8471ae69f21ae560a70fcae0

Additional details

Funding

European Commission
ORIGAMI – Optimized resource integration and global architecture for mobile infrastructure for 6G (grant no. 101139270)