
Published July 12, 2022 | Version v1
Project deliverable | Open access

MARVEL - D3.2: Efficient deployment of AI-optimised ML/DL models – initial version

Description

This deliverable describes the MARVEL Edge-to-Fog-Cloud framework, the deployment layer for the AI/DL MARVEL components. The framework incorporates the deployment logic behind MARVdash, the proposed Kubernetes dashboard for instantiating services as orchestrated containers and deploying them to the desired execution sites according to an optimisation strategy. The main goal of this strategy is to place MARVEL components on Kubernetes nodes by matching each component's resource requirements against the resources the actual nodes offer. The deliverable also describes methods for compressing machine learning algorithms/models to fit the resources available at the edge (e.g., reducing the size and inference time of deep learning models with millions of parameters), thereby minimising the computational overhead on edge servers.
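
The resource-matching placement idea described above can be sketched as a simple greedy assignment. This is an illustrative sketch only, not the MARVdash implementation: the `Node`/`Component` types, the `place` function, and the example figures are all hypothetical.

```python
# Hypothetical sketch of resource-based placement: each component declares
# CPU/memory requirements, and a scheduler greedily assigns it to the first
# node whose remaining capacity can host it. Names and numbers below are
# illustrative, not part of MARVdash or Kubernetes.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    cpu: float    # available CPU cores
    mem_mb: int   # available memory in MB


@dataclass
class Component:
    name: str
    cpu: float    # requested CPU cores
    mem_mb: int   # requested memory in MB


def place(components, nodes):
    """Assign each component to the first node with enough free resources."""
    placement = {}
    # Place the most demanding components first to reduce fragmentation.
    for c in sorted(components, key=lambda c: c.cpu, reverse=True):
        for n in nodes:
            if n.cpu >= c.cpu and n.mem_mb >= c.mem_mb:
                n.cpu -= c.cpu          # reserve the node's resources
                n.mem_mb -= c.mem_mb
                placement[c.name] = n.name
                break
        else:
            placement[c.name] = None    # no node can host this component
    return placement


edge = Node("edge-1", cpu=2.0, mem_mb=2048)
fog = Node("fog-1", cpu=8.0, mem_mb=16384)
workload = [
    Component("audio-vad", cpu=0.5, mem_mb=512),
    Component("video-detector", cpu=4.0, mem_mb=8192),
]
print(place(workload, [edge, fog]))
```

In a real deployment, Kubernetes performs this matching itself once pods declare resource requests and limits; the sketch only illustrates the requirement-versus-offering comparison the optimisation strategy is built on.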

Files

MARVEL-d3.2.pdf (2.6 MB, md5:a6c9d1789fcf27b68b30488f2e79bfdf)

Additional details

Funding

European Commission
MARVEL – Multimodal Extreme Scale Data Analytics for Smart Cities Environments (grant No. 957337)