Published January 29, 2020 | Version 1.0

Results of the 9th International Competition on Software Verification (SV-COMP 2020)

  • Dirk Beyer, LMU Munich, Germany

Description

This archive contains the results of the
9th Competition on Software Verification (SV-COMP 2020)
https://sv-comp.sosy-lab.org/2020/

The competition was run by Dirk Beyer, LMU Munich, Germany.

Copyright (C) Dirk Beyer
https://www.sosy-lab.org/people/beyer/

SPDX-License-Identifier: CC-BY-4.0
https://spdx.org/licenses/CC-BY-4.0.html


To browse the competition results with a web browser, there are two options:
- start a local web server using
  php -S localhost:8000
  in order to view the data in this archive, or
- browse https://sv-comp.sosy-lab.org/2020/results/
  in order to view the data on the SV-COMP web page.


Contents:

index.html          directs to the overview web page
LICENSE.txt         specifies the license
README.txt          this file
results-validated/  results of validation runs
results-verified/   results of verification runs and aggregated results


The folder results-validated/ contains the results from validation runs:

- *.xml.bz2         XML results from BenchExec
- *.logfiles.zip    output from tools
- *.json.gz         mapping from file names to SHA-256 hashes of the file contents
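After decompression, the BenchExec XML results can be inspected programmatically. Below is a sketch that tallies the verdicts in one *.xml.bz2 file; it assumes the usual BenchExec result layout, where each run element carries a column titled "status", and the file name passed in is a placeholder:

```python
import bz2
import xml.etree.ElementTree as ET
from collections import Counter

def status_counts(path):
    """Tally the 'status' column across all runs in a BenchExec result file."""
    with bz2.open(path, "rb") as f:
        tree = ET.parse(f)
    counts = Counter()
    for run in tree.getroot().iter("run"):
        for col in run.iter("column"):
            if col.get("title") == "status":
                counts[col.get("value")] += 1
    return counts

# Example (placeholder file name):
# status_counts("results-validated/example.xml.bz2")
```

For a full-featured view of the same data, BenchExec's table generator (which produced the *.table.html files below) is the intended tool; the script above is only for quick ad-hoc queries.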


The folder results-verified/ contains the results from verification runs and aggregated results:

index.html               overview web page with rankings and score table
*.xml.bz2                XML results from BenchExec
*.merged.xml.bz2         XML results from BenchExec, status adjusted according to the validation results
*.logfiles.zip           output from tools
*.json.gz                mapping from file names to SHA-256 hashes of the file contents
*.xml.bz2.table.html     HTML views of the detailed result data, generated by BenchExec's table generator
*.All.table.html         HTML views of the full benchmark set (all categories) for each tool
META_*.table.html        HTML views of the benchmark set for each meta category for each tool, and over all tools
<category>*.table.html   HTML views of the benchmark set for each category over all tools
iZeCa0gaey.html          HTML views per tool
validatorStatistics.html Statistics of the validator runs

quantilePlot-*           score-based quantile plots as visualization of the results
quantilePlotShow.gp      example Gnuplot script to generate a plot
score*                   accumulated score results in various formats


The hashes of the files (in the files *.json.gz) are useful for
- validating the exact contents of a file and
- accessing the files from the witness store.
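As a sketch of the first use, a *.json.gz mapping can be checked against local copies of the files. This assumes the JSON is a flat object from file name to hex-encoded SHA-256 digest; the helper names are illustrative:

```python
import gzip
import hashlib
import json
import os

def sha256_of(path):
    # Stream the file in 1 MiB chunks to keep memory use flat.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(mapping_path, base_dir="."):
    # Returns {file name: (expected, actual)} for every file whose
    # on-disk content does not match the recorded hash.
    with gzip.open(mapping_path, "rt") as f:
        mapping = json.load(f)
    mismatches = {}
    for name, expected in mapping.items():
        actual = sha256_of(os.path.join(base_dir, name))
        if actual != expected:
            mismatches[name] = (expected, actual)
    return mismatches
```

An empty return value means every listed file matched its recorded hash.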

The witness store from SV-COMP 2020 is available at: https://doi.org/10.5281/zenodo.3630187
The verification tasks, version svcomp20, are available at: https://doi.org/10.5281/zenodo.3633334
BenchExec, version 2.5.1, is available at: https://doi.org/10.5281/zenodo.3574420

All benchmarks were executed
for SV-COMP 2020, https://sv-comp.sosy-lab.org/2020/
by Dirk Beyer, LMU Munich
based on the components
git@github.com:sosy-lab/sv-benchmarks.git  testcomp20-freeze-1-g2518814029
git@github.com:sosy-lab/sv-comp.git  svcomp20
git@github.com:sosy-lab/benchexec.git  2.5


Feel free to contact me in case of questions:
https://www.sosy-lab.org/people/beyer/


Files

svcomp20-results.zip (4.4 GB)
md5:93ba65757bf3564a418342f429ee9c6c