D4.1 - Initial Release and Evaluation of the Security Tools
Creators
1. Technische Universität Dresden
2. Politecnico di Milano
Contributors
Project members:
1. Technische Universität Dresden
2. Politecnico di Milano
Description
This document describes the first release and evaluation of the security tools developed by the AI-SPRINT project; the second release and evaluation are due at M24.
This document focuses on the components in this first release, which harden the continuous deployment and programming framework runtime developed in WP3 with regard to several security-related aspects, and on the results of the preliminary tests of the employed technologies that support the design decisions.
We first describe an example application, federated learning, which is referenced throughout the document to illustrate the different security tools developed within AI-SPRINT and how they ensure confidentiality as well as integrity in this context.
Although federated learning provides a certain degree of confidentiality by default, since each party trains only locally on its own premises and shares only model parameters with its collaborators, many aspects still require tooling around existing federated learning frameworks in order to guarantee the privacy constraints put forth in the AI-SPRINT project.
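The parameter-sharing pattern described above can be sketched as follows. This is a minimal, illustrative example (not the actual AI-SPRINT implementation, which builds on existing federated learning frameworks); the function names, learning rate, and gradient values are hypothetical.

```python
# Minimal sketch of parameter sharing in federated learning.
# Each party trains locally on its private data and shares only
# model parameters; the raw training data never leaves the premises.

def local_update(params, gradient, lr=0.5):
    """One hypothetical local training step on a party's private data."""
    return [p - lr * g for p, g in zip(params, gradient)]

def federated_average(party_params):
    """Aggregate by averaging the parameter vectors of all parties."""
    n = len(party_params)
    return [sum(ps) / n for ps in zip(*party_params)]

# Two parties start from the same global model but compute
# different gradients on their own private data.
global_model = [0.0, 0.0]
party_a = local_update(global_model, gradient=[1.0, -1.0])
party_b = local_update(global_model, gradient=[3.0, 1.0])

# Only the updated parameters are exchanged and aggregated.
new_global = federated_average([party_a, party_b])
print(new_global)  # [-1.0, 0.0]
```

Note that even though raw data stays local, the exchanged parameters themselves can leak information, which is one reason additional tooling around federated learning frameworks is needed.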
After introducing the federated learning example application, we review the threat model considered in AI-SPRINT as well as possible attacks, which we later refer to in the policy definitions that application developers and users employ to state their requirements.
Next, the document systematically describes the different protection goals stemming from the preceding analysis of threats typically found in the context of AI applications running in cloud environments and on edge devices. For the policy definitions, which are based on these protection goals, a policy language using YAML syntax is presented.
The policy language can be used in two ways: developers and application users can define their security requirements either by stating their protection goals or by explicitly selecting the protection mechanisms that should be enabled when the application is deployed and run on the distributed infrastructure.
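The two usage styles might look as follows in YAML. This is a hypothetical sketch only: the field names (`protection_goals`, `protection_mechanisms`, `component`) and mechanism identifiers are illustrative, not the actual AI-SPRINT policy syntax.

```yaml
# Style 1: state protection goals; the platform selects suitable mechanisms.
policies:
  - component: training-service        # illustrative component name
    protection_goals:
      - confidentiality
      - integrity

# Style 2: explicitly select the mechanisms to enable at deployment time.
  - component: aggregation-service     # illustrative component name
    protection_mechanisms:
      - scone_enclave
      - encrypted_volumes
```

In the goal-based style, the mapping from goals to concrete mechanisms is left to the deployment tooling; the mechanism-based style trades that convenience for precise control.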
In addition to the policy descriptions representing protection goals and protection mechanisms, we then review how security measures can be fine-tuned using the so-called SCONE session language.
After the introduction of the policy language, the document describes the different security tools implemented within the scope of AI-SPRINT, starting from the overall objectives of providing confidential computing, followed by a thorough description of the components that fulfil these objectives.
The tools presented include the SCONE runtime, the SCONE cross-compiler, and the sconify tool, which allows developers to turn native applications into confidential ones. We also present the CHIMA framework for deploying network functions, in particular security functions, to programmable switches, as well as a blockchain-based mechanism for authorizing access to data collected from devices in the field.
Finally, several microbenchmarks are presented that evaluate the performance of various aspects of the system. The deliverable concludes with a summary of achievements.
Files
- D4.1 - Initial release and evaluation of the security tools final.pdf (3.8 MB, md5:54f11d0ee5b19dfa659e6f7b36147e28)
Additional details
- Project deliverable
- Open Access