Results of the 14th Intl. Competition on Software Verification (SV-COMP 2025)
Authors/Creators
Description
SV-COMP 2025
Competition Results
This file describes the contents of an archive of the 14th Competition on Software Verification (SV-COMP 2025). https://sv-comp.sosy-lab.org/2025/
The competition was organized by Dirk Beyer, LMU Munich, Germany, and Jan Strejček, Masaryk University, Czechia. More information is available in the following article: Dirk Beyer and Jan Strejček. Improvements in Software Verification and Witness Validation: SV-COMP 2025. In Proceedings of the 31st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2025, Hamilton, Canada, May 3–8), 2025. Springer. doi:10.1007/978-3-031-90660-2_9
Copyright (C) 2025 Dirk Beyer and Jan Strejček https://www.sosy-lab.org/people/beyer/ https://www.fi.muni.cz/~xstrejc/
SPDX-License-Identifier: CC-BY-4.0 https://spdx.org/licenses/CC-BY-4.0
To browse the competition results with a web browser, there are two options:
- start a local web server using php -S localhost:8000 in order to view the data in this archive, or
- browse https://sv-comp.sosy-lab.org/2025/results/ in order to view the data on the SV-COMP web page.
Contents
- index.html: directs to the overview web pages of the verification and validation track
- LICENSE-results.txt: specifies the license
- README-results.txt: this file
- results-validated/: results of validation runs
- results-verified/: results of verification runs
The folder results-validated/ contains the results from validation runs:
- index.html: overview web page with rankings and score table
- design.css: HTML style definitions
- *.results.txt: TXT results from BenchExec
- *.xml.bz2: XML results from BenchExec
- *.fixed.xml.bz2: XML results from BenchExec, status adjusted according to the validation results
- *.logfiles.zip: output from tools
- *.json.gz: mapping from file names to SHA-256 hashes of the file contents
- <validator>*.table.html: HTML views of the full benchmark set (all categories) for each validator
- <category>*.table.html: HTML views of the benchmark set for each category over all validators
- *.xml: XML table definitions for the above tables
- validators.*: statistics of the validator runs (obsolete)
- correctness: infix for validation of correctness witnesses
- violation: infix for validation of violation witnesses
- 1.0: infix for validation of witnesses in format version 1.0
- 2.0: infix for validation of witnesses in format version 2.0
- quantilePlot-*: score-based quantile plots as visualization of the results
- quantilePlotShow.gp: example Gnuplot script to generate a plot
- score*: accumulated score results in various formats
- witness-database.csv: database of all witnesses
- witness-classification.csv: database of all witnesses with their classification into correct, wrong, and unknown
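As an illustration of how the classification data might be consumed, the sketch below tallies witnesses per classification bucket from CSV rows. The column names (`witness`, `classification`) and sample rows are assumptions for this sketch; consult the header row of witness-classification.csv for the actual schema.

```python
import csv
import io
from collections import Counter

# Illustrative sample rows; the real column names in
# witness-classification.csv may differ (assumption for this sketch).
sample = """witness,classification
w1.yml,correct
w2.yml,wrong
w3.yml,correct
w4.yml,unknown
"""

def tally_classifications(csv_text: str) -> Counter:
    """Count how many witnesses fall into each classification bucket."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["classification"] for row in reader)

print(tally_classifications(sample))
# Counter({'correct': 2, 'wrong': 1, 'unknown': 1})
```

For the real file, replace the inline sample with `open("witness-classification.csv")` passed directly to `csv.DictReader`.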
The folder results-verified/ contains the results from verification runs and aggregated results:
- index.html: overview web page with rankings and score table
- design.css: HTML style definitions
- *.results.txt: TXT results from BenchExec
- *.xml.bz2: XML results from BenchExec
- *.fixed.xml.bz2: XML results from BenchExec, status adjusted according to the validation results
- *.logfiles.zip: output from tools
- *.json.gz: mapping from file names to SHA-256 hashes of the file contents
- *.xml.bz2.table.html: HTML views of the detailed results data as generated by BenchExec's table generator
- <verifier>*.table.html: HTML views of the full benchmark set (all categories) for each verifier
- META_*.table.html: HTML views of the benchmark set for each meta category, for each verifier and over all verifiers
- <category>*.table.html: HTML views of the benchmark set for each category over all verifiers
- *.xml: XML table definitions for the above tables
- results-per-tool.php: list of results for each tool, for the review process in the pre-run phase
- <verifier>.list.html: list of results for a tool in HTML format, with links
- quantilePlot-*: score-based quantile plots as visualization of the results
- quantilePlotShow.gp: example Gnuplot script to generate a plot
- score*: accumulated score results in various formats
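To work with the raw data programmatically, the *.xml.bz2 result files can be decompressed and parsed with standard tools. The sketch below extracts the verdict per task from a minimal stand-in document; the element and attribute names follow BenchExec's XML result format (a `result` root with `run` children holding `column` entries), but treat the exact schema as an assumption and check a real file from the archive.

```python
import bz2  # the archived result files are bz2-compressed
import xml.etree.ElementTree as ET

# Minimal stand-in for a BenchExec results file (schema assumed):
sample_xml = b"""<result tool="some-verifier" version="1.0">
  <run name="../sv-benchmarks/c/task1.yml">
    <column title="status" value="true"/>
    <column title="cputime" value="1.23s"/>
  </run>
  <run name="../sv-benchmarks/c/task2.yml">
    <column title="status" value="false(unreach-call)"/>
    <column title="cputime" value="4.56s"/>
  </run>
</result>
"""

def statuses(xml_bytes: bytes) -> dict:
    """Map each run (task) name to its reported status."""
    result = {}
    for run in ET.fromstring(xml_bytes).iter("run"):
        for col in run.iter("column"):
            if col.get("title") == "status":
                result[run.get("name")] = col.get("value")
    return result

# For a real archived file:
#   statuses(bz2.decompress(open("some.xml.bz2", "rb").read()))
print(statuses(sample_xml))
```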
The file-content hashes (in the files *.json.gz) are useful for
- validating the exact contents of a file and
- accessing the files from the witness store.
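The first use case can be sketched in Python, assuming the *.json.gz files contain a flat JSON object mapping file names to hex digests (an assumption of this sketch; inspect one of the files to confirm):

```python
import gzip
import hashlib
import json
import tempfile

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hash of a file's content."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_mapping(path: str, name: str, mapping_gz: str) -> bool:
    """Check a file's content hash against the name-to-hash mapping."""
    with gzip.open(mapping_gz, "rt") as f:
        mapping = json.load(f)
    return mapping.get(name) == sha256_of(path)

# Self-contained demonstration with a temporary file and mapping:
with tempfile.NamedTemporaryFile(delete=False, suffix=".yml") as f:
    f.write(b"witness content")
    path = f.name
mapping_path = path + ".json.gz"
with gzip.open(mapping_path, "wt") as f:
    json.dump({"witness.yml": sha256_of(path)}, f)
print(matches_mapping(path, "witness.yml", mapping_path))  # True
```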
Related Archives
Overview of archives from SV-COMP 2025 that are available at Zenodo:
- https://doi.org/10.5281/zenodo.15012077 Verification Witnesses from SV-COMP 2025 Verification Tools. Witness store (containing the generated verification witnesses)
- https://doi.org/10.5281/zenodo.15055359 Verifiers and Validators: FM-Tools Data Set for SV-COMP 2025. Metadata snapshot of the evaluated tools (DOIs, options, etc.)
- https://doi.org/10.5281/zenodo.15012085 Results of the 14th Intl. Competition on Software Verification (SV-COMP 2025). Results (XML result files, log files, file mappings, HTML tables)
- https://doi.org/10.5281/zenodo.15012096 SV-Benchmarks: Benchmark Set of SV-COMP 2025. Verification tasks, version svcomp25
- https://doi.org/10.5281/zenodo.15007216 BenchExec, version 3.29. Benchmarking framework
All benchmarks were executed for SV-COMP 2025 https://sv-comp.sosy-lab.org/2025/ by Dirk Beyer, LMU Munich, based on the following components:
- https://gitlab.com/sosy-lab/benchmarking/fm-tools 2.2
- https://gitlab.com/sosy-lab/benchmarking/sv-benchmarks svcomp25
- https://gitlab.com/sosy-lab/sv-comp/bench-defs svcomp25
- https://gitlab.com/sosy-lab/software/benchexec 3.29
- https://gitlab.com/sosy-lab/software/benchcloud 1.3.0
- https://gitlab.com/sosy-lab/benchmarking/sv-witnesses 2.0.3
- https://gitlab.com/sosy-lab/software/coveriteam 1.2.1
- https://gitlab.com/sosy-lab/benchmarking/competition-scripts svcomp25
Contact
Feel free to contact me in case of questions: https://www.sosy-lab.org/people/beyer/
Files
- svcomp25-results.zip (54.4 GB, md5:40d85be9ce04778ff4d8ce880b276e39)
Additional details
Related works
- Is supplement to: Conference paper, doi:10.1007/978-3-031-90660-2_9