Published December 4, 2024 | Version v1
Journal Open

Vulnerability detection using BERT based LLM model with transparency obligation practice towards trustworthy AI

Description

Vulnerabilities in source code are one of the main causes of potential threats in software-intensive systems. A large number of vulnerabilities are published each day, and effective vulnerability detection is critical to identifying and mitigating them. AI has emerged as a promising solution to enhance vulnerability detection, offering the ability to analyse vast amounts of data and identify patterns indicative of potential threats. However, AI-based methods often face challenges, particularly when dealing with large datasets and understanding the specific context of the problem. Large Language Models (LLMs) are now widely used to tackle more complex tasks and handle large datasets, but they exhibit limitations in explaining their outcomes, and existing work focuses on providing only an overview of explainability and transparency. This research introduces a novel transparency obligation practice for vulnerability detection using BERT-based LLMs. We address the black-box nature of LLMs by employing XAI techniques, namely a unique combination of SHAP, LIME, and heat maps. We propose an architecture that combines the BERT model with transparency obligation practices, ensuring transparency throughout the entire LLM life cycle.

Files (2.8 MB)

vulnerability.pdf — 2.8 MB (md5:aa645aec4c0e1f438b7661af93f30b45)

Additional details