AI Security Paper
Academic Research Paper · January 2026
This research paper analyzes evasion attacks on deep learning models in intrusion detection systems (IDS), with a particular focus on log-based anomaly detection. It examines how attackers manipulate input data to bypass machine learning models at inference time and evaluates a range of attack and defense strategies.
The work introduces key concepts of adversarial machine learning and distinguishes poisoning attacks, which corrupt the training data, from evasion attacks, which leave training untouched. The main focus is on evasion attacks, which exploit model decision boundaries by applying minimal perturbations to input data in order to cause misclassification without modifying the training process.
Several attack strategies are analyzed, including gradient-based methods such as FGSM, PGD, and Carlini & Wagner, as well as black-box approaches such as reinforcement-learning-based and GAN-based attacks. These methods demonstrate that even highly accurate models can be vulnerable to adversarial manipulation.
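As an illustration of the simplest gradient-based method mentioned above, FGSM perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇ₓ L(x, y)). The sketch below applies this to a toy logistic classifier; the weights, feature values, and ε are invented for illustration and are not taken from the paper's experiments.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x is malicious under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """One FGSM step: x_adv = x + eps * sign(dL/dx).

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is dL/dx_i = (p - y) * w_i.
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical IDS-style feature vector that the model flags as malicious
w, b = [2.0, -1.5, 0.5], -0.2
x, y = [1.0, 0.3, 0.8], 1

x_adv = fgsm(w, b, x, y, eps=0.6)
print(predict(w, b, x))      # confidence on the clean input
print(predict(w, b, x_adv))  # confidence after the perturbation
```

With this (deliberately fragile) model, a single FGSM step is enough to push the malicious sample across the decision boundary, which is exactly the failure mode the paper attributes to gradient-based evasion.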
The paper further evaluates defense mechanisms such as adversarial training, ensemble models, and input transformations. The results show that no single defense is sufficient and that a defense-in-depth strategy is required to improve robustness in real-world systems.
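Of the defenses listed above, adversarial training is the most direct to sketch: at each training step the model is updated on both the clean sample and its FGSM-perturbed counterpart. The toy dataset, learning rate, and ε below are invented for illustration, not drawn from the paper's evaluation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """FGSM perturbation for a logistic model: dL/dx_i = (p - y) * w_i."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1.0 if g > 0 else -1.0 if g < 0 else 0.0)
            for xi, g in zip(x, grad)]

def train_step(w, b, x, y, lr):
    """One gradient-descent step on the cross-entropy loss."""
    p = predict(w, b, x)
    w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
    b = b - lr * (p - y)
    return w, b

# Toy 1-D separable data standing in for benign (0) / malicious (1) features
data = [([1.0], 1), ([-1.0], 0)]
w, b, eps, lr = [0.0], 0.0, 0.3, 0.5

for _ in range(200):
    for x, y in data:
        # Update on the clean sample ...
        w, b = train_step(w, b, x, y, lr)
        # ... and on its adversarial counterpart (adversarial training)
        w, b = train_step(w, b, fgsm(w, b, x, y, eps), y, lr)
```

Because the model now also fits the ε-perturbed points, it ends up correct on both clean inputs and their FGSM variants; against stronger or adaptive attacks, however, this alone is not enough, which is why the paper argues for layering defenses.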
Overall, the research highlights a critical trade-off between accuracy and robustness and emphasizes that modern intrusion detection systems must be designed with security in mind, not detection performance alone.