Published April 25, 2026 | Version v2
Dataset (Open Access)

Results of the 15th Intl. Competition on Software Verification (SV-COMP 2026)

  • Ludwig-Maximilians-Universität München
  • Masaryk University

Description

SV-COMP 2026

Changelog

Version 2 of this archive (from 2026-04-25)

  • contains the CSV files, which were missing in version 1,
  • suppresses the counts of correct_true and correct_false for the categories C.FalseOverall and C.TrueOverall, respectively, in the RSF file and CSV files,
  • uses two columns in the legends for quantile plots if there are many tools, and
  • other minor fixes.

Competition Results

This file describes the contents of an archive of the 15th Competition on Software Verification (SV-COMP 2026). https://sv-comp.sosy-lab.org/2026/

The log files of the validation track are available in a separate archive: https://doi.org/10.5281/zenodo.18653053.

The competition was organized by Dirk Beyer (LMU Munich, Germany) and Jan Strejček (Masaryk University, Czechia). More information is available in the following article: Dirk Beyer and Jan Strejček. Evaluating Software Verifiers for C, Java, and SV-LIB (Report on SV-COMP 2026). In Proceedings of the 32nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2026, Turin, Italy, April 11–16), 2026. Springer.

Copyright (C) 2026 Dirk Beyer and Jan Strejček https://www.sosy-lab.org/people/beyer/ https://www.fi.muni.cz/~xstrejc/

SPDX-License-Identifier: CC-BY-4.0 https://spdx.org/licenses/CC-BY-4.0

To browse the competition results with a web browser, open the file index.html from the extracted archive (see Contents below).

Contents

  • index.html: directs to the overview web pages of the verification and validation tracks
  • LICENSE-results.txt: specifies the license
  • README-results.txt: this file
  • results-validated/: results of validation runs
  • results-verified/: results of verification runs

The folder results-validated/ contains the results from validation runs:

  • index.html: overview web page with rankings and score table

  • design.css: HTML style definitions

  • *.results.txt: TXT results from BenchExec

  • *.xml.bz2: XML results from BenchExec (see the reading sketch after this list)

  • *.fixed.xml.bz2: XML results from BenchExec, status adjusted according to the validation results

  • *.csv: CSV results

  • *.fixed.csv: CSV results, status adjusted according to the validation results

  • *.logfiles.zip: output from tools

  • *.json.gz: mapping from file names to SHA-256 hashes of the file contents

  • <validator>*.table.html: HTML views of the full benchmark set (all categories) for each validator

  • <category>*.table.html: HTML views of the benchmark set for each category over all validators

  • *.xml: XML table definitions for the above tables

  • validators.*: Statistics of the validator runs (obsolete)

  • correctness: Infix for validation of correctness witnesses

  • violation: Infix for validation of violation witnesses

  • 1.0: Infix for validation of v1.0 witnesses

  • 2.0: Infix for validation of v2.0 witnesses

  • quantilePlot-*: score-based quantile plots as visualization of the results

  • quantilePlotShow.gp: example Gnuplot script to generate a plot

  • score*: accumulated score results in various formats
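The *.xml.bz2 files listed above are BenchExec result XML, compressed with bz2, and can be inspected with standard tooling. The following Python sketch is not part of the archive; it assumes the usual BenchExec result layout, in which each <run> element carries a <column title="status"> child, and it counts the statuses reported in one result file.

    # Sketch: count the statuses in one bz2-compressed BenchExec result file.
    # Assumption: each <run> element has a <column title="status" value="..."/> child.
    import bz2
    import collections
    import sys
    import xml.etree.ElementTree as ET

    def count_statuses(path):
        with bz2.open(path, "rb") as f:
            tree = ET.parse(f)
        counts = collections.Counter()
        for run in tree.getroot().iter("run"):
            status = next((c.get("value") for c in run.findall("column")
                           if c.get("title") == "status"), "missing")
            counts[status] += 1
        return counts

    if __name__ == "__main__":
        for status, count in sorted(count_statuses(sys.argv[1]).items()):
            print(f"{count:6d}  {status}")

For example, running the script on a *.fixed.xml.bz2 file shows the status counts after the adjustment according to the validation results.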

The folder results-verified/ contains the results from verification runs and aggregated results:

  • index.html: overview web page with rankings and score table

  • design.css: HTML style definitions

  • *.results.txt: TXT results from BenchExec

  • *.xml.bz2: XML results from BenchExec

  • *.fixed.xml.bz2: XML results from BenchExec, status adjusted according to the validation results

  • *.logfiles.zip: output from tools (see the listing sketch after this list)

  • *.json.gz: mapping from file names to SHA-256 hashes of the file contents

  • *.xml.bz2.table.html: HTML views of the detailed results data as generated by BenchExec’s table generator

  • <verifier>*.table.html: HTML views of the full benchmark set (all categories) for each verifier

  • META_*.table.html: HTML views of the benchmark set for each meta category, for each verifier and over all verifiers

  • <category>*.table.html: HTML views of the benchmark set for each category over all verifiers

  • *.xml: XML table definitions for the above tables

  • results-per-tool.php: list of results for each tool, used in the review process during the pre-run phase

  • <verifier>.list.html: List of results for a tool in HTML format with links

  • quantilePlot-*: score-based quantile plots as visualization of the results

  • quantilePlotShow.gp: example Gnuplot script to generate a plot

  • score*: accumulated score results in various formats
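The *.logfiles.zip archives bundle the raw output from the tools. The following Python sketch, not part of the archive, lists the entries of one such archive and prints the beginning of the first log; the assumption that the entries are .log files, one per task, is not stated in this README.

    # Sketch: list the tool-output logs contained in one *.logfiles.zip archive.
    # Assumption (not specified here): one .log entry per verification task.
    import sys
    import zipfile

    with zipfile.ZipFile(sys.argv[1]) as archive:
        logs = [name for name in archive.namelist() if name.endswith(".log")]
        print(f"{len(logs)} log files")
        if logs:
            # Print the beginning of the first log as a sanity check.
            print(archive.read(logs[0]).decode(errors="replace")[:500])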

The SHA-256 hashes of the file contents (in the files *.json.gz) are useful for

  • validating the exact contents of a file (see the sketch below) and
  • accessing the files from the witness store.
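A minimal Python sketch for the first use case, checking local files against the recorded hashes; it assumes the *.json.gz files contain a flat JSON object that maps file names to hex-encoded SHA-256 digests, which is suggested but not formally specified by the descriptions above.

    # Sketch: verify files against the name -> SHA-256 mapping in a *.json.gz file.
    # Assumption: the JSON is a flat object {file name: hex digest}.
    import gzip
    import hashlib
    import json
    import sys
    from pathlib import Path

    def verify(mapping_path, base_dir="."):
        with gzip.open(mapping_path, "rt", encoding="utf-8") as f:
            mapping = json.load(f)
        for name, expected in mapping.items():
            path = Path(base_dir) / name
            if not path.is_file():
                print(f"MISSING   {name}")
                continue
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            if digest.hexdigest() != expected:
                print(f"MISMATCH  {name}")

    if __name__ == "__main__":
        verify(sys.argv[1])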

Related Archives

Overview of archives from SV-COMP 2026 that are available at Zenodo:

All benchmarks were executed for SV-COMP 2026 https://sv-comp.sosy-lab.org/2026/ by Dirk Beyer, LMU Munich, based on the following components:

Contact

Feel free to contact me in case of questions: https://www.sosy-lab.org/people/beyer/

Files

  • svcomp26-results.zip (29.5 GB, md5:61aa870303d56a2f288727aa46dbfefb)