Explainable Artificial Intelligence In Intrusion Detection Systems
Authors/Creators
- MVP Samaj's K. K. Wagh Arts, Science & Commerce College, Pimpalgaon (Baswant), Nashik, Maharashtra, India.
Description
With the rapid expansion of computer networks, cloud infrastructures, and Internet of Things (IoT) environments, the number and complexity of cyber-attacks have increased significantly. Intrusion Detection Systems (IDS) play a crucial role in identifying malicious activities and protecting network resources. Traditional IDS techniques, including signature-based and anomaly-based systems, face challenges such as high false-positive rates, poor adaptability to new attacks, and limited interpretability.
Recently, Artificial Intelligence (AI) and Machine Learning (ML) techniques have been widely adopted in IDS due to their high detection accuracy and ability to analyze large volumes of network data. However, most AI-based IDS models operate as “black boxes,” making it difficult for security analysts to understand how and why a particular decision is made. This lack of transparency reduces trust, limits practical deployment, and creates challenges in regulatory compliance.
Explainable Artificial Intelligence (XAI) addresses these issues by providing human-interpretable explanations for AI decisions. This research focuses on the integration of XAI techniques into IDS to enhance transparency, trust, and decision-making capability while maintaining strong detection performance. The proposed approach combines machine learning-based intrusion detection with explainability methods such as feature importance and rule-based explanations. Experimental evaluation demonstrates that XAI-enabled IDS improves security analysis, accountability, and auditability without significantly compromising performance.
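The two explainability methods named above, feature importance and rule-based explanations, can be illustrated with a minimal sketch. The detector below is a toy rule-based IDS over hypothetical flow features (`packet_rate`, `failed_logins`, `payload_entropy` and the thresholds are illustrative assumptions, not the paper's actual model); feature importance is estimated by permutation, i.e. measuring how much accuracy drops when one feature's values are shuffled.

```python
import random

# Hypothetical feature names for a network flow record (illustrative only).
FEATURES = ["packet_rate", "failed_logins", "payload_entropy"]

def detect(flow):
    """Toy rule-based IDS: label a flow malicious (1) or benign (0)."""
    return 1 if flow["packet_rate"] > 500 or flow["failed_logins"] > 3 else 0

def explain(flow):
    """Rule-based explanation: list the detection rules that fired."""
    reasons = []
    if flow["packet_rate"] > 500:
        reasons.append("packet_rate > 500 (possible flood)")
    if flow["failed_logins"] > 3:
        reasons.append("failed_logins > 3 (possible brute force)")
    return reasons

def permutation_importance(flows, labels, feature, trials=20, seed=0):
    """Mean accuracy drop when one feature is shuffled across flows."""
    rng = random.Random(seed)
    base = sum(detect(f) == y for f, y in zip(flows, labels)) / len(flows)
    drops = []
    for _ in range(trials):
        vals = [f[feature] for f in flows]
        rng.shuffle(vals)
        shuffled = [dict(f, **{feature: v}) for f, v in zip(flows, vals)]
        acc = sum(detect(f) == y for f, y in zip(shuffled, labels)) / len(flows)
        drops.append(base - acc)
    return sum(drops) / trials
```

On a synthetic set of flows, shuffling a feature the detector ignores (here `payload_entropy`) yields zero importance, while shuffling `packet_rate` degrades accuracy, and `explain` gives an analyst the exact rule behind each alert. A production system would replace the hand-written rules with a trained classifier and a library such as SHAP or LIME for the explanations.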
Files
- 071360.pdf (641.8 kB), md5:8c424dcff5f4c7b438a069c7faa9b5db