Published August 12, 2025 | Version v1

LLMs in Security Testing and Monitoring: An Initial Study

  • 1. Montimage (France)

Description

Cyber threats are becoming increasingly complex, and traditional
security systems struggle to keep pace, highlighting the need for
more advanced solutions. Large Language Models (LLMs), such as
OpenAI’s ChatGPT and Meta AI’s LLaMA, have shown great potential
to transform cybersecurity workflows through their abilities in natural
language understanding, pattern recognition, and automated reasoning.
These models are particularly promising for tasks such as network
monitoring, threat detection, and security alert triage. However,
challenges related to output reliability, adversarial risks, and ethical
concerns must still be addressed. This paper presents a comprehensive
survey of LLM-based approaches to security testing and evaluates three
open-access LLMs, namely Mistral-7B, Qwen3-8B, and Llama3.1-8B,
demonstrating their ability to enhance security alert analysis. Our findings
suggest that LLMs can improve alert clarity and usability, making alerts
more accessible to non-experts while providing valuable insights
for developers.
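To illustrate the kind of alert-triage workflow the abstract describes, the sketch below builds a plain-language triage prompt from a raw alert. This is a minimal illustration, not the paper's method: the alert fields, prompt wording, and function name are assumptions, and the resulting prompt could be sent to any of the evaluated open-access models (e.g. Mistral-7B served behind a local endpoint).

```python
# Illustrative sketch (not from the paper): convert a raw security
# alert into a plain-language triage request for an LLM. Field names
# and prompt wording are assumed for the example.

def build_triage_prompt(alert: dict) -> str:
    """Format a raw alert dict into a triage prompt for an LLM."""
    # Render the alert's fields deterministically, one per line.
    fields = "\n".join(f"{key}: {value}" for key, value in sorted(alert.items()))
    return (
        "You are a security analyst assistant. Given the alert below, "
        "explain in plain language what happened, rate its severity "
        "(low/medium/high), and suggest one next step.\n\n"
        + fields
    )

# Example alert, loosely modeled on an IDS signature match.
alert = {
    "signature": "ET SCAN Nmap TCP",
    "src_ip": "203.0.113.7",
    "dst_port": 22,
}
prompt = build_triage_prompt(alert)
```

The model's free-text answer would then be shown to the analyst alongside the original alert, which is the "clarity for non-experts" effect the findings describe.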

Files

LLM_4_STAM2025.pdf (830.0 kB)
md5:b6ec547200b57f957a98c70fb80f7af6

Additional details

Funding

European Commission
AI4CYBER - Trustworthy Artificial Intelligence for Cybersecurity Reinforcement and System Resilience (Grant 101070450)