TinyKubeML: Orchestrating TinyML Models on Far-Edge Clusters
Description
The Internet of Things (IoT) is rapidly materializing, but the growing volume of data generated by Far-Edge devices, often microcontroller-based, poses challenges for cloud-centric processing. TinyML addresses this challenge by enabling on-device ML inference, thereby reducing communication latency and cost. However, current solutions largely overlook deployment and management challenges, especially in heterogeneous, resource-constrained environments. This paper introduces TinyKubeML, a Kubernetes-based framework that enables resource-aware deployment of TinyML models on Far-Edge clusters. It abstracts device heterogeneity and automates model partitioning, artifact generation, and deployment using a custom Kubernetes Operator. TinyKubeML supports distributed inference and includes recovery mechanisms to ensure service continuity. Our evaluation shows that TinyKubeML can deploy distributed models efficiently with minimal impact on accuracy, while supporting automatic recovery in the case of device failures, demonstrating its potential to bridge the gap between scalable orchestration and TinyML deployment in IoT scenarios.
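The abstract describes an Operator that automates partitioning, artifact generation, and resource-aware deployment, but does not show its resource model. Purely as an illustrative sketch, a deployment driven by such a Kubernetes Operator might be declared through a custom resource along these lines; every field name below is an assumption for illustration, not TinyKubeML's actual schema (expressed as a Python dict rather than YAML for self-containment):

```python
# Hypothetical custom-resource manifest for a TinyML model deployment.
# All field names and values are illustrative assumptions, NOT the
# actual TinyKubeML API.
tiny_model_deployment = {
    "apiVersion": "tinykubeml.example.io/v1alpha1",  # assumed API group/version
    "kind": "TinyModelDeployment",
    "metadata": {"name": "keyword-spotting"},
    "spec": {
        "model": {"uri": "s3://models/kws.tflite"},  # source model artifact
        "partitioning": {                            # distributed-inference split
            "strategy": "layer-wise",
            "maxParts": 3,
        },
        "targets": {                                 # resource-aware placement hints
            "deviceSelector": {"arch": "cortex-m4"},
            "memoryLimitKiB": 256,
        },
        "recovery": {"enabled": True},               # redeploy parts on device failure
    },
}

def validate(manifest: dict) -> bool:
    """Minimal structural check an Operator's admission step might perform."""
    required = {"apiVersion", "kind", "metadata", "spec"}
    return required.issubset(manifest)
```

An Operator watching resources of this (hypothetical) kind would reconcile each one by partitioning the model, generating per-device artifacts, and scheduling the parts onto devices that satisfy the declared constraints.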
Files
TinyKubeML-final-CR-v2.pdf (1.4 MB, md5:7d0a328c4f83dcc35b3cfc0f613649c9)