Published July 14, 2023 | Version v1
Project deliverable (Open Access)

MARVEL D3.6 - Efficient deployment of AI-optimised ML/DL models – final version


The objective of this deliverable is to provide an overview of the MARVEL Edge-to-Fog-Cloud framework, which serves as the deployment layer for the MARVEL AI/DL components. The framework encompasses the deployment logic behind MARVdash, a proposed Kubernetes dashboard used to instantiate services as orchestrated containers and deploy them to the desired execution sites, following an optimisation strategy. The primary aim of this strategy is to ensure that MARVEL components are deployed onto Kubernetes nodes according to their specific resource requirements and the resources available on the respective nodes. Furthermore, this document outlines methods for compressing machine learning algorithms/models by leveraging the resources present at the edge, for example by reducing the size and inference time of deep learning models with millions of parameters. This compression approach helps minimise the computational overhead on edge servers.
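To make the resource-driven deployment idea concrete, the sketch below shows one simple way such an optimisation strategy could match components to Kubernetes nodes. The component names, capacity numbers, and the best-fit heuristic are illustrative assumptions, not the actual MARVdash deployment logic described in the deliverable.

```python
def place_components(components, nodes):
    """Greedy best-fit placement of components onto nodes.

    components: list of (name, cpu_millicores, mem_mib) resource requirements.
    nodes: dict of node_name -> [free_cpu_millicores, free_mem_mib];
           mutated in place as capacity is consumed.
    Returns a mapping of component name -> node name (None if unschedulable).
    """
    placement = {}
    # Place the most demanding components first so they are not starved.
    for name, cpu, mem in sorted(components, key=lambda c: -(c[1] + c[2])):
        feasible = [n for n, (fc, fm) in nodes.items() if fc >= cpu and fm >= mem]
        if not feasible:
            placement[name] = None  # no node satisfies the requirements
            continue
        # Best fit: pick the node with the least spare capacity left over.
        best = min(feasible, key=lambda n: (nodes[n][0] - cpu) + (nodes[n][1] - mem))
        nodes[best][0] -= cpu
        nodes[best][1] -= mem
        placement[name] = best
    return placement

# Hypothetical edge and fog nodes with spare CPU (millicores) and memory (MiB).
nodes = {"edge-1": [1000, 2048], "fog-1": [4000, 8192]}
placement = place_components([("avad", 500, 1024), ("sed", 250, 512)], nodes)
# Under this heuristic both components land on the smaller edge node.
```

In a real cluster the same intent is usually expressed declaratively, via resource requests/limits on each container, with the Kubernetes scheduler performing the actual node selection.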


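The model-compression theme can likewise be illustrated with a minimal sketch of post-training linear quantisation, one common way to shrink deep learning models for edge deployment. This is a generic, self-contained example under assumed 8-bit symmetric quantisation; it is not the specific compression method developed in the deliverable.

```python
def quantise_weights(weights, num_bits=8):
    """Symmetric linear quantisation of float weights to signed integers.

    Returns (q, scale): q are integer codes in [-(2^(b-1)-1), 2^(b-1)-1],
    and w ~= q * scale recovers an approximation of each weight.
    """
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8 bits (symmetric range)
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantise(q, scale):
    """Map integer codes back to approximate float weights."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 1.27]
q, scale = quantise_weights(weights)
approx = dequantise(q, scale)
# Each recovered weight differs from the original by at most one scale step.
```

Storing 8-bit codes instead of 32-bit floats cuts model size roughly fourfold, at the cost of a bounded per-weight error of at most half a quantisation step.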

Files (3.4 MB)

Additional details


Funding: MARVEL – Multimodal Extreme Scale Data Analytics for Smart Cities Environments (grant agreement 957337), European Commission