Jewel: Resource-Efficient Joint Packet and Flow Level Inference in Programmable Switches
IMDEA Networks
Description
Embedding machine learning (ML) models in programmable switches realizes the vision of high-throughput and low-latency inference at line rate. Recent works have made breakthroughs in embedding Random Forest (RF) models in switches for either packet-level inference or flow-level inference. The former relies on simple features from packet headers that are simple to implement but limit accuracy in challenging use cases; the latter exploits richer flow features to improve accuracy, but leaves early packets in each flow unclassified. We propose Jewel, an in-switch ML model based on a fully joint packetand flow-level design, which takes the best of both worlds by
classifying early flow packets individually and shifting to flowlevel inference when possible. Our proposal involves (i) a single RF model trained to classify both packets and flows, and (ii) hardware-aware model selection and training techniques for resource footprint minimization. We implement Jewel in P4 and deploy it in a testbed with Intel Tofino switches, where we run extensive experiments with a variety of real-world use cases. Results reveal how our solution outperforms four state-of-the-art benchmarks, with accuracy gains in the 2.0%–5.3% range.
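The joint design described above can be illustrated with a small sketch: a single Random Forest trained over a shared feature space that holds both per-packet and per-flow features, where inference uses packet features for the early packets of a flow and switches to flow features once enough packets have been observed. This is not the paper's P4 implementation; the feature names, the zero-filling scheme, the synthetic data, and the switching threshold are all illustrative assumptions.

```python
# Hedged sketch of a joint packet- and flow-level RF classifier.
# All features and data here are synthetic/hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Shared feature space: [pkt_size, ttl, flow_mean_size, flow_pkt_count].
# Slots that are unavailable in a given representation are zero-filled.
def packet_row(label):
    size = rng.normal(500 + 200 * label, 50)       # per-packet header feature
    ttl = float(rng.integers(32, 128))
    return [size, ttl, 0.0, 0.0], label            # flow slots zeroed

def flow_row(label):
    mean_size = rng.normal(500 + 200 * label, 20)  # per-flow aggregate
    count = float(rng.integers(5, 50))
    return [0.0, 0.0, mean_size, count], label     # packet slots zeroed

# Train ONE model on both representations (point (i) of the proposal).
X, y = [], []
for lbl in (0, 1):
    for _ in range(200):
        for feats, l in (packet_row(lbl), flow_row(lbl)):
            X.append(feats)
            y.append(l)

rf = RandomForestClassifier(n_estimators=10, max_depth=5, random_state=0)
rf.fit(X, y)

# Inference: classify early packets individually, then shift to
# flow-level inference once enough packets of the flow are seen.
def classify(pkt_feats, flow_feats, pkts_seen, threshold=4):
    if pkts_seen < threshold:
        x = pkt_feats + [0.0, 0.0]     # packet-level representation
    else:
        x = [0.0, 0.0] + flow_feats    # flow-level representation
    return int(rf.predict([x])[0])
```

In the actual system this logic runs in the switch data plane as match-action tables rather than a Python model; the sketch only conveys the single-model, two-representation idea.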
Files
JEWEL_INFOCOM24.pdf (2.0 MB)
md5:ae1dcc59e4e23ed3fea8ac2102199f84