Published June 6, 2025 | Version 1.0
Conference proceeding · Open Access


# SoK: Automated TTP Extraction from CTI Reports – Are We There Yet?

This repository contains the code and all datasets for the experiments conducted in the paper.

Abstract: Cyber Threat Intelligence (CTI) plays a critical role in sharing knowledge about new and evolving threats. With the increased prevalence and sophistication of threat actors, intelligence has expanded from simple indicators of compromise to extensive CTI reports describing high-level attack steps known as Tactics, Techniques and Procedures (TTPs). Such TTPs, often classified into the ontology of the MITRE ATT&CK framework, make CTI significantly more valuable, but also harder to interpret and automatically process. Natural Language Processing (NLP) makes it possible to automate large parts of the knowledge extraction from CTI reports; over 40 papers discuss approaches, ranging from named entity recognition through embedder models to generative large language models. Unfortunately, existing solutions are largely incomparable as they consider decidedly different and constrained settings, rely on custom TTP ontologies, and use a multitude of custom, inaccessible CTI datasets. We take stock, systematize the knowledge in the field, and empirically evaluate existing approaches in a unified setting for fair comparisons. We gain several fundamental insights, including (1) the finding of a kind of performance limit that existing approaches seemingly cannot overcome as of yet, (2) that traditional NLP approaches (possibly counterintuitively) outperform modern embedder-based and generative approaches in realistic settings, and (3) that further research on understanding inherent ambiguities in TTP ontologies and on the creation of qualitative datasets is key to taking a leap in the field.
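To make the task concrete: TTP extraction maps free-text CTI sentences to MITRE ATT&CK technique IDs. The toy sketch below frames this as simple keyword matching; it is purely illustrative and is not the paper's method, which surveys NER, embedder-based, and generative approaches (the keyword lists are our own illustrative choices).

```python
# Toy illustration of TTP extraction (NOT the paper's method): map CTI
# sentences to MITRE ATT&CK technique IDs via naive keyword matching.
TECHNIQUE_KEYWORDS = {
    "T1566": ["phishing", "spearphishing"],  # Phishing
    "T1059.001": ["powershell"],             # Command and Scripting Interpreter: PowerShell
    "T1105": ["ingress tool"],               # Ingress Tool Transfer
}

def extract_ttps(sentence: str) -> list[str]:
    """Return the technique IDs whose keywords occur in the sentence."""
    text = sentence.lower()
    return [tid for tid, kws in TECHNIQUE_KEYWORDS.items()
            if any(kw in text for kw in kws)]

print(extract_ttps("The group sent spearphishing emails with a PowerShell loader."))
# → ['T1566', 'T1059.001']
```

Real approaches must handle paraphrase and ambiguity, which is exactly where the evaluated systems diverge.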


The repository is structured as follows:
* `NER`: Contains the code for the NER approaches (Sections 2.2, 4).
* `classification`: Contains the code for the Classification approaches (Sections 2.3, 5).
* `gLLM`: Contains the code for the Generation approaches (Sections 2.4, 6).
* `scraping`: Contains the code for collecting papers from DBLP and Google Scholar, following the description in Section 2.1.
* `ext_tools`: Contains the code for the comparison experiment presented in Appendix A.1, with implementations of three state-of-the-art approaches.
* `datasets`: Contains the datasets used: MITRE TRAM2 with our proposed split, the Bosch AnnoCTR dataset, the Augmented TRAM2 dataset, and the corresponding instruction datasets for training the generative LLM in natural language.
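The instruction datasets pair CTI text with target techniques so a generative LLM can be fine-tuned in natural language. A rough sketch of what one such record might look like (field names and values are illustrative, not the repo's actual schema):

```python
import json

# Hypothetical shape of one instruction-dataset record; the actual schema
# used in `datasets` may differ.
record = {
    "instruction": "List the MITRE ATT&CK techniques described in the text.",
    "input": "The actor used PowerShell to fetch a second-stage payload.",
    "output": ["T1059.001", "T1105"],  # PowerShell; Ingress Tool Transfer
}
print(json.dumps(record, indent=2))
```

Records of this form can be serialized one per line (JSONL) and fed directly to common instruction-tuning pipelines.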

## Setup
Each folder (NER, classification, gLLM) corresponds to a *sub-project* and, therefore, requires its own setup. To use the code, navigate into the corresponding folder and follow the instructions in its `README.md`.

## Notes

* Throughout this repository, the Bosch AnnoCTR dataset is often referred to as *"bosch"*.
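Since this alias appears in paths and identifiers across the sub-projects, a small lookup (a hypothetical helper, not part of the repo) makes the mapping explicit:

```python
# Hypothetical helper illustrating the naming note above; the repo itself
# simply uses the string "bosch" in paths and identifiers.
DATASET_ALIASES = {"bosch": "Bosch AnnoCTR"}

def canonical_name(alias: str) -> str:
    """Resolve a repo-internal dataset alias to its full name."""
    return DATASET_ALIASES.get(alias.lower(), alias)

print(canonical_name("bosch"))  # → Bosch AnnoCTR
```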

Files (33.1 GB)

* classification.zip

| MD5 | Size |
| --- | --- |
| md5:be33c3907406040b1a862881fa9cb8b8 | 29.6 MB |
| md5:ac245d2802dedc19f2ab5f23acb11db1 | 30.0 MB |
| md5:d5bcd9d4651fbf563daf4ff10fe46df7 | 17.6 MB |
| md5:c3002adf87e4689e1f642da9aa34bfd7 | 44.8 MB |
| md5:a8b3d33015e7e06f17b56383f40e05ad | 262.6 kB |
| md5:649679f24e7e06b63b86662388e9e3fd | 33.0 GB |
| md5:0bacf08fa9c99136b4db98ee1df6811f | 4.6 kB |

Additional details

Dates

* Accepted: 2025-06-06

Software

* Programming language: Python
* Development Status: Concept