Published May 15, 2024 | Version v1
Conference proceeding · Open Access

Interpreting Intrusions - The Role of Explainability in AI-Based Intrusion Detection Systems

  • 1. ITTI
  • 2. Bydgoszcz University of Science and Technology
  • 3. University of Warsaw

Description

Machine learning has become a key component of effective network intrusion detection. However, it suffers from a lack of transparency, an issue that can be mitigated by employing explainable AI (XAI) techniques. This paper discusses the crucial role of explainability in intrusion detection, along with its benefits and drawbacks, and then presents and compares the results of four main explainability techniques applied to an intrusion detection system.
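As a rough illustration of the kind of analysis the abstract describes, the sketch below applies a perturbation (occlusion) style explanation to a toy intrusion scorer: each feature is replaced with a baseline value and the resulting drop in the score is taken as that feature's attribution. The four techniques compared in the paper are not named in this record, so this is a generic XAI example, not the authors' method; the feature names and weights are invented for illustration only.

```python
def ids_score(features):
    # Hypothetical toy "intrusion score": a weighted sum of traffic features.
    # Weights and feature names are illustrative, not from the paper.
    weights = {"packet_rate": 0.6, "failed_logins": 0.3, "payload_entropy": 0.1}
    return sum(weights[k] * v for k, v in features.items())

def explain(features, baseline=0.0):
    """Occlusion-based attribution: replace each feature with a baseline
    value and record how much the intrusion score drops as a result."""
    full = ids_score(features)
    attributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        attributions[name] = full - ids_score(perturbed)
    return attributions

sample = {"packet_rate": 0.9, "failed_logins": 0.8, "payload_entropy": 0.2}
print(explain(sample))  # largest attribution goes to packet_rate
```

For a linear scorer like this toy one, the attributions recover each feature's weighted contribution exactly; for the nonlinear models typically used in intrusion detection, perturbation-based methods give only local approximations, which is part of why comparing several explainability techniques is worthwhile.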

Files

ZENODO__Interpreting_Intrusions___The_Role_of_Explainability_in_AI_Based_Intrusion_Detection_Systems__pv_.pdf

Additional details

Funding

European Commission
AI4CYBER - Trustworthy Artificial Intelligence for Cybersecurity Reinforcement and System Resilience (grant agreement 101070450)