Published June 5, 2024 | Version v1
Conference paper · Open Access

Towards Accountable and Resilient AI-Assisted Networks: Case Studies and Future Challenges

Description

Artificial Intelligence (AI) will play a critical role in future networks, exploiting real-time data collection for optimized utilization of network resources. However, current AI solutions predominantly emphasize model performance enhancement, engendering substantial risk when AI encounters irregularities such as adversarial attacks or unknown misbehavior, owing to its "black-box" decision process. Consequently, AI-driven network solutions necessitate enhanced accountability to stakeholders and robust resilience against known AI threats. This paper introduces a high-level process, integrating Explainable AI (XAI) techniques and illustrating their application across three typical use cases: encrypted network traffic classification, malware detection, and federated learning. Unlike existing task-specific qualitative approaches, the proposed process incorporates a new set of metrics measuring model performance, explainability, security, and privacy, thus enabling users to iteratively refine their AI network solutions. The paper also elucidates future research challenges we deem critical to the actualization of trustworthy, AI-empowered networks.

Index Terms—AI, Security, Privacy, Explainability, Malware, Traffic Classification, Federated Learning
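To make the idea of quantifying explainability for a network classifier concrete, the sketch below is a minimal, hypothetical illustration (not the paper's method or metrics): it trains a random-forest classifier on synthetic, invented flow features for encrypted traffic, derives per-feature importance with SHAP, and computes a toy "explanation sparsity" score. The feature names and the sparsity metric are assumptions introduced here purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap  # pip install shap

# Hypothetical flow-level features for encrypted traffic; NOT the paper's feature set.
FEATURES = ["pkt_len_mean", "pkt_len_std", "iat_mean", "iat_std", "bytes_up", "bytes_down"]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(FEATURES)))
y = (X[:, 0] + 0.5 * X[:, 4] > 0).astype(int)  # synthetic labels: two traffic classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Global explanation: mean |SHAP value| per feature over the test set.
sv = shap.TreeExplainer(model).shap_values(X_te)
if isinstance(sv, list):  # older SHAP versions return one array per class
    sv = np.stack(sv, axis=-1)
importance = np.abs(sv).mean(axis=tuple(i for i in range(sv.ndim) if i != 1))

# Toy "explanation sparsity" score: share of total attribution carried by the
# top-2 features (higher = more concentrated, arguably easier to explain).
# An illustrative stand-in, not one of the paper's proposed metrics.
order = np.argsort(importance)[::-1]
sparsity = importance[order[:2]].sum() / importance.sum()

for i in order:
    print(f"{FEATURES[i]:>14s}: {importance[i]:.4f}")
print(f"top-2 sparsity score: {sparsity:.2f}")
```

In an iterative refinement loop of the kind the abstract describes, a score like this would be tracked alongside accuracy and robustness measures across retraining rounds rather than reported once.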

Files

EuCNC_2024___SPATIAL_Process__WP2_.pdf

874.6 kB · md5:2c71daae3f19956fe72883cba36bfef1