Published May 15, 2024 | Version v1 | Conference proceeding | Open
Interpreting Intrusions - The Role of Explainability in AI-Based Intrusion Detection Systems
Creators
- 1. ITTI
- 2. Bydgoszcz University of Science and Technology
- 3. University of Warsaw
Description
Machine learning has become a key component of the effective detection of network intrusions. Yet it comes with a lack of transparency, an issue that can be mitigated by employing explainable AI (XAI) techniques. This paper discusses the crucial role of explainability in intrusion detection, along with its benefits and drawbacks, and then presents and compares the results of four main explainability techniques applied to an intrusion detection system.
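The abstract does not name the four techniques or the dataset evaluated in the paper. As a minimal, illustrative sketch of the general idea (explaining which inputs an ML-based intrusion detector relies on), the Python snippet below applies permutation feature importance, one common model-agnostic XAI method, to a hypothetical flow-feature classifier trained on synthetic data. All feature names and data here are assumptions for demonstration only, not the paper's setup.

```python
# Sketch: model-agnostic explanation of an ML-based intrusion detector via
# permutation feature importance. Feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical network-flow features (illustrative, not from the paper).
feature_names = ["flow_duration", "total_bytes", "packet_count", "dst_port_entropy"]
X = rng.normal(size=(2000, len(feature_names)))
# Synthetic label: "attack" flows correlate with high byte counts and short durations.
y = ((X[:, 1] - X[:, 0]) + rng.normal(scale=0.5, size=2000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does detection accuracy drop when each
# feature is shuffled? Larger drops indicate features the detector relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda t: -t[1]):
    print(f"{name:>18}: {mean_imp:.3f}")
```

Ranking features by how much shuffling them degrades performance gives analysts a rough, global view of the detector's decision basis; the paper compares several such explainability techniques in more depth.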
Files
| Name | Size |
|---|---|
| ZENODO__Interpreting_Intrusions___The_Role_of_Explainability_in_AI_Based_Intrusion_Detection_Systems__pv_.pdf (md5:ced07a2dc5913f7b4ea7e5ec02108e01) | 239.3 kB |