Model inference over 5G networks for robotics
Authors/Creators
- Instituto Tecnológico de Informática
Description
This work studies compute offloading for mobile-robotics perception by moving CNN inference from the robot to edge and remote servers within a ROS 2 pipeline. The Open Model Zoo (OMZ) person-detection-0200 model is executed with OpenVINO, and video is streamed via GStreamer over a private 5G network. Three deployments are compared: on-board, edge, and remote-server inference. Average inference time decreases from 5.177 ms (on-board) to 4.998 ms (edge) and 1.291 ms (remote), while the recommended throughput increases from 35.088 FPS to 41.858 FPS and 83.612 FPS, respectively. The results confirm that compute placement strongly affects both latency and achievable frame rate, and that tail behaviour must be considered alongside central tendency when selecting an execution locus.
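For context, the latency and throughput figures above can be approximated in outline with OpenVINO's Python API. The sketch below is illustrative only and is not the authors' measurement harness: the model path, the dummy input frames, and the 256x256 input shape (per the OMZ person-detection-0200 documentation) stand in for the GStreamer/ROS 2 video pipeline used in the paper.

```python
import time
import numpy as np
import openvino as ov

core = ov.Core()
# Assumed local path to the OMZ IR files; in the paper the model runs on-board,
# on the edge, or on a remote server instead.
model = core.read_model("person-detection-0200.xml")
compiled = core.compile_model(model, "CPU")
request = compiled.create_infer_request()

# Dummy 1x3x256x256 frames replace the video delivered over 5G via GStreamer.
frames = [np.random.randint(0, 255, (1, 3, 256, 256)).astype(np.float32)
          for _ in range(100)]

latencies_ms = []
for frame in frames:
    start = time.perf_counter()
    request.infer({0: frame})
    latencies_ms.append((time.perf_counter() - start) * 1e3)

print(f"mean latency: {np.mean(latencies_ms):.3f} ms")
print(f"p99 latency:  {np.percentile(latencies_ms, 99):.3f} ms")  # tail behaviour
print(f"implied FPS:  {1000.0 / np.mean(latencies_ms):.3f}")
```

Reporting the p99 latency alongside the mean mirrors the paper's point that tail behaviour, not just central tendency, should inform where inference is placed.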
Files
| Name | Size |
|---|---|
| Model inference over 5G networks for robotics.pdf (md5:bb8e2ac30a3d864d5f74ac216be5936d) | 759.2 kB |