Published July 10, 2023 | Version v1
Conference paper Open

Study on Adversarial Attacks Techniques, Learning Methods and Countermeasures - Application to Anomaly Detection

Description

Adversarial attacks on AI systems are designed to exploit vulnerabilities in AI algorithms in order to
manipulate the system's output, resulting in incorrect or harmful behavior. They can take many forms,
including manipulating input data, exploiting weaknesses in the AI model, and poisoning the training samples
used to develop the AI model. In this paper, we study different types of adversarial attacks, including evasion,
poisoning, and inference attacks, and their impact on AI-based systems across different fields. A particular
emphasis is placed on cybersecurity applications, such as Intrusion Detection Systems (IDS) and anomaly
detection. We also describe different learning methods that help us understand how adversarial attacks work,
using eXplainable AI (XAI). In addition, we discuss current state-of-the-art techniques for detecting and
defending against adversarial attacks, including adversarial training, input sanitization, and anomaly detection.
Furthermore, we present a comprehensive analysis of the effectiveness of different defense mechanisms against
different types of adversarial attacks. Overall, this study provides a comprehensive overview of the challenges
and opportunities in the field of adversarial machine learning, and serves as a valuable resource for researchers,
practitioners, and policymakers working on AI security and robustness. An application to anomaly detection,
specifically malware detection, is presented to illustrate several of the concepts discussed in the paper.
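To make the evasion-attack concept from the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper) of an FGSM-style evasion attack against a toy logistic-regression "detector". All weights, data, and the epsilon value are made-up assumptions for demonstration; the idea is simply that nudging the input in the direction of the loss gradient lowers the detector's confidence.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input.

    For a logistic model with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w.
    """
    p = sigmoid(w @ x + b)          # model's predicted probability
    grad_x = (p - y) * w            # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Toy detector: flags a sample as "anomalous" when p > 0.5.
w = np.array([2.0, -1.0, 0.5])      # hypothetical learned weights
b = -0.2
x = np.array([0.8, -0.3, 0.6])      # sample originally flagged (label y = 1)

p_before = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.4)
p_after = sigmoid(w @ x_adv + b)
print(p_before > 0.5, p_after < p_before)  # True True
```

Defenses surveyed in the paper, such as adversarial training, counter exactly this kind of manipulation by also training the model on perturbed samples like `x_adv`.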

Files

icsoft2023.pdf

268.3 kB
md5:255d29e1c3bd7ce22b059919b014d526

Additional details

Dates

Available
2023-07-10