Toward a more transparent and explainable conflict resolution algorithm for air traffic controllers
Creators
- 1. École Nationale de l'Aviation Civile
- 2. Mälardalen University
- 3. Deep Blue
- 4. Sapienza University of Rome
Description
Recently, Artificial Intelligence (AI) algorithms have received increasing interest in various application domains, including Air Traffic Management (ATM). Various AI and, in particular, Machine Learning (ML) algorithms are used to provide decision support for autonomous decision-making tasks in the ATM domain, e.g., predicting air transportation traffic and optimizing traffic flows. However, these automated systems are often not accepted or trusted by their intended users, as the decisions provided by AI are frequently opaque, non-intuitive, and not understandable by human operators. Safety is the central pillar of air traffic management, and no black-box process can be inserted into a decision-making process when human lives are involved. To address this challenge related to the transparency of automated systems in the ATM domain, we investigated AI methods for predicting air traffic conflicts and optimizing traffic flows, drawing on the field of Explainable Artificial Intelligence (XAI). In this way, AI models' explainability, both in terms of understanding a decision (i.e., post hoc interpretability) and understanding how the model works (i.e., transparency), can be provided to air traffic controllers. In this paper, we report our research directions and our findings to support better decision making with AI algorithms with extended transparency.
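To make the two notions of explainability above concrete, here is a loose, self-contained sketch (not code from the paper; all feature names, thresholds, and formulas are illustrative assumptions): a transparent rule whose logic a controller can read directly, versus a black-box risk score explained only post hoc, by probing how each input moves the output.

```python
# Illustrative only: transparency vs. post hoc interpretability.
# Features of an aircraft pair (hypothetical): horizontal separation (NM),
# vertical separation (ft), closure rate (kt).

def transparent_conflict_rule(h_sep_nm, v_sep_ft, closure_kt):
    """Transparent model: the decision logic itself is human-readable."""
    return h_sep_nm < 5.0 and v_sep_ft < 1000 and closure_kt > 0

def black_box_score(h_sep_nm, v_sep_ft, closure_kt):
    """Stand-in for an opaque ML model's conflict-risk score (made up)."""
    return max(closure_kt, 0) / 400 / (1.0 + 0.3 * h_sep_nm + 0.0005 * v_sep_ft)

def post_hoc_sensitivity(model, inputs, eps=1e-3):
    """Post hoc explanation: estimate how each input shifts the score
    by perturbing it slightly (a crude finite-difference probe)."""
    base = model(*inputs)
    effects = []
    for i in range(len(inputs)):
        bumped = list(inputs)
        bumped[i] += eps
        effects.append((model(*bumped) - base) / eps)
    return effects

pair = (4.0, 800, 300)  # close horizontally and vertically, converging
print(transparent_conflict_rule(*pair))                 # True
print(post_hoc_sensitivity(black_box_score, pair))      # per-feature effects
```

The probe shows, for instance, that increasing horizontal separation lowers the risk score while increasing closure rate raises it; that directional attribution is the kind of post hoc account a controller could be given without exposing the model's internals.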
Files
EAAP_Artimation_Final-1.pdf (952.4 kB, md5:8d07f4cc0f596d15f50b236ecc22bf95)