Published November 17, 2022 | Version v1
Conference paper | Open

Fed-XAI: Federated Learning of Explainable Artificial Intelligence Models

Description

The current era is characterized by the increasing pervasiveness of applications and services based on data processing, often built on Artificial Intelligence (AI) and, in particular, Machine Learning (ML) algorithms. Extracting insights from data is so common in the daily life of individuals, companies, and public entities, and so relevant for market players, that it has become an important matter of interest for institutional organizations, to the point that ad hoc regulations have been proposed. One important aspect is the capability of such applications to address data privacy. Additionally, depending on the specific application field, it is of paramount importance that humans be able to understand why an AI/ML-based application produces a specific output. In this paper, we discuss the concept of Federated Learning of eXplainable AI (XAI) models, in short FED-XAI, purposely designed to address these two requirements simultaneously: AI/ML models are trained with the twofold goal of preserving data privacy (the Federated Learning (FL) side) and ensuring a certain level of explainability of the system (the XAI side). We first introduce the motivations at the foundation of FL and XAI, along with their basic concepts; then, we discuss the current status of this field of study, providing a brief survey of approaches, models, and results. Finally, we highlight the main future challenges.
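To make the FED-XAI idea concrete, the following is a minimal, hypothetical sketch (not the approach described in the paper): each client fits an inherently interpretable model, here a ridge regression chosen purely for illustration, on its private data, and a server aggregates only the coefficients in a FedAvg-style weighted average. The function names (`local_fit`, `federated_average`) and the toy data are assumptions introduced for this example.

```python
import numpy as np

# Illustrative sketch only (not the paper's method): FedAvg-style aggregation
# of an inherently interpretable model (ridge regression). Clients share only
# coefficients and sample counts, never raw data (FL side); the global model
# remains a linear model whose weights can be inspected (XAI side).

def local_fit(X, y, lam=1.0):
    """Closed-form ridge regression on one client's private data."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def federated_average(coef_list, n_samples):
    """Server-side weighted average of client coefficients (FedAvg-style)."""
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * c for w, c in zip(weights, coef_list))

# Toy simulation with three clients holding disjoint local datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

local_coefs = [local_fit(X, y) for X, y in clients]
global_coef = federated_average(local_coefs, [len(y) for _, y in clients])
print("Interpretable global coefficients:", np.round(global_coef, 3))
```

In this toy setting the aggregated coefficients can be read directly as feature contributions; the paper itself surveys richer interpretable models (e.g., rule-based systems) and aggregation strategies for the federated setting.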

Files

paper8.pdf (421.4 kB)
md5:150d5f12db348f332b417efd46b30f7b

Additional details

Funding

European Commission
Hexa-X – A flagship for B5G/6G vision and intelligent fabric of technology enablers connecting human, physical, and digital worlds (Grant agreement No. 101015956)