Replication package for article 'Ranged Program Analysis via Instrumentation'
- 1. University of Oldenburg, Department of Computing Science, Oldenburg, Germany
- 2. LMU Munich, Munich, Germany
Description
This artifact contains everything needed to reproduce the findings from the article "Ranged Program Analysis via Instrumentation".
Abstract. Ranged program analysis has recently been proposed as a means to scale a single analysis and to define parallel cooperation of different analyses. To this end, ranged program analysis first splits a program's paths into different parts. Then, it runs one analysis instance per part, thereby restricting the instance to analyze only the paths of the respective part. To achieve the restriction, the analysis is combined with a so-called range reduction component responsible for excluding the paths outside of the part. So far, ranged program analysis and in particular the range reduction component have been defined in the framework of configurable program analysis (CPA). In this paper, we suggest program instrumentation as an alternative for achieving the analysis restriction, which allows us to use arbitrary analyzers in ranged program analysis. Our evaluation on programs from the SV-COMP benchmark shows that ranged program analysis with instrumentation performs comparably to the CPA-based version and that the evaluation results for the CPA-based ranged program analysis carry over to the instrumentation-based version.
The main purpose of this artifact for the article "Ranged Program Analysis via Instrumentation" (Paper ID 7) is to replicate the claims from the paper.
We apply for the badges Artifact Available and Artifact Reusable (and thus Artifact Functional).
In this paper, we propose ranged program analysis via instrumentation to be able to use arbitrary off-the-shelf tools within ranged program analysis.
This allows us not only to parallelize a single analysis, but also to run different analyses on different ranges of a program in parallel.
In our paper, we answer three research questions.
To answer them, we conducted multiple experiments analyzing the effectiveness
and the efficiency of different compositions of ranged analyses in comparison
with existing analyses on 10229 C tasks from the SV-Benchmark collection.
The artifact contains the raw data collected during the execution of the experiments
with BenchExec (the CPU and wall time per run, the tool log produced, and the final verdict computed by each tool)
as well as our implementation.
The provided artifact is reusable,
as we provide a simple and easy way to use off-the-shelf tools within ranged analysis.
Tool developers only need to create a BenchExec and CoVeriTeam configuration for their tool to use it within ranged program analysis.
In addition, we include all evaluation scripts as Jupyter notebooks
to reproduce the findings from the paper (e.g., the number of tasks solved per configuration,
or the time used by a composition of ranged analyses in comparison to a default version).
As re-running all experiments takes several months on a single machine,
we additionally provide a much smaller benchmark set of 5 files,
which can be used to reproduce the findings from the paper on a small scale.
All relevant information on how to test the artifact and reproduce the results can be found in the README.html,
located in the folder `documents` of the VM.
Files
(11.3 GB)

| Name | Size |
|---|---|
| md5:0115d3f1454ff0353d63e950171a1365 | 11.3 GB |