ParaQooba
=========

Download
--------

Zenodo DOI: 10.5281/zenodo.7554207

!!!!!

ATTENTION: This is a parallel solver! It also runs on a single-core
machine, but in order to reproduce the speedups observed in the paper,
you need a multi-core setup!

!!!!!

Runtime of the minimal evaluation described below is around 5 hours.

All commands are meant to be run from the root of this distribution! Please
`cd` into the directory that you unpacked from the distribution archive.

It is assumed that the distribution has been unpacked in `/home/tacas23/pq`.
This means that the path of this readme file should be
`/home/tacas23/pq/Readme.txt`. Change into the root directory of the
distribution:

  cd ~/pq

Dependencies
------------

Install all required dependencies using `dpkg -i`; they are included in this
distribution. If asked, the root password is `tacas23`.

  sudo dpkg -i deb/*.deb

Compiling
---------

Now, configure and compile paraqooba. It is already extracted into the
directory `paraqooba-master`.

  cd paraqooba-master/build

Now, configure the build:

  cmake .. -DCMAKE_BUILD_TYPE=Release

And build:

  make

QuAPI needs to know where to find its preload library. If you stay in the
build directory, this works without further setup. To be safe, you should
nevertheless export the path into your environment. All provided scripts do
the same:

  export QUAPI_PRELOAD_PATH=~/pq/paraqooba-master/build/_deps/quapi-build/libquapi_preload.so
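As a sanity check (a minimal sketch; the path is the one produced by the build
step above), you can verify that the library actually exists before running
anything:

```shell
# Export the preload path and warn if the library is missing
# (e.g. because `make` has not finished yet).
export QUAPI_PRELOAD_PATH=~/pq/paraqooba-master/build/_deps/quapi-build/libquapi_preload.so
if [ ! -f "$QUAPI_PRELOAD_PATH" ]; then
  echo "warning: $QUAPI_PRELOAD_PATH not found; re-run make first" >&2
fi
```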

Quick Paraqooba Demonstration
-----------------------------

For a quick demonstration, you can run paraqooba directly on some problems that
should complete quickly. We recommend the formula
`hex__symbolic_explicit_goal_Medium__hein_11_5x5-09.pg.qdimacs` for this, as
speedup can also be observed with just a few added cores. Run paraqooba like
this, while in the `~/pq/paraqooba-master/build` directory (possibly with
fewer cores; adjust the `--worker` argument downwards if you run into issues):

  time ./paraqs --quapisolver ~/pq/caqe ~/pq/rareqs --use-depqbf --worker $(expr $(nproc) / 2 + 1) /home/tacas23/pq/example-formulas/hex__symbolic_explicit_goal_Medium__hein_11_5x5-09.pg.qdimacs

As a comparison, you can run caqe, depqbf, and rareqs:

  time ~/pq/caqe /home/tacas23/pq/example-formulas/hex__symbolic_explicit_goal_Medium__hein_11_5x5-09.pg.qdimacs
  time ~/pq/depqbf /home/tacas23/pq/example-formulas/hex__symbolic_explicit_goal_Medium__hein_11_5x5-09.pg.qdimacs
  time ~/pq/rareqs /home/tacas23/pq/example-formulas/hex__symbolic_explicit_goal_Medium__hein_11_5x5-09.pg.qdimacs

You should observe a significant wall-clock time speedup. If you do not, please
make sure to give paraqooba around 1/3 or 1/2 of your physical cores (meaning
no hyperthreads) using its `--worker` parameter, and possibly increase the
number of cores assigned to your virtual machine.
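The worker count used in the demo command (`$(expr $(nproc) / 2 + 1)`) can
also be computed up front; a minimal sketch, assuming GNU coreutils `nproc`
is available:

```shell
# Half the logical cores plus one, to stay clear of hyperthreading effects;
# pass the result to paraqs via --worker.
WORKERS=$(expr $(nproc) / 2 + 1)
echo "$WORKERS"
```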

If you want to see what paraqooba is doing, also pass `-d` (debug output) or
`-t` (trace output).
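Speedup here means the wall-clock time of the best sequential solver divided
by paraqooba's wall-clock time (the "virtual portfolio" comparison used in the
analysis scripts below). A quick sketch with placeholder times (the numbers
are NOT measurements):

```shell
# Placeholder wall-clock times in seconds; substitute your own measurements.
SEQ_BEST=120
PARAQOOBA=30
# Integer division; use awk or bc for fractional speedups.
SPEEDUP=$((SEQ_BEST / PARAQOOBA))
echo "speedup: ${SPEEDUP}x"   # prints: speedup: 4x
```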

Minimal Benchmark Set
---------------------

As a minimal benchmark set, we recommend the 25 benchmarks listed in the paper
on which paraqooba was faster to find a result. This also works with fewer
resources, and you should be able to observe a speedup closely correlating
with the number of cores you assign to the virtual machine.

ATTENTION: Please assign as much memory and as many cores as are
available to the virtual machine. Just one core, as in the default
configuration, is not enough to observe the described speedups.
Changing this requires restarting the virtual machine.

To start the minimal benchmark set, go into the directory
`minimal-benchmark-set` and run all the benchmarks using the provided
loop. This runs caqe, depqbf, rareqs, and paraqooba on the formulas
showing the largest speedups, respecting hyperthreading. Single-threaded
solvers are run in parallel to speed up the process during review,
while paraqooba runs one formula at a time. Paraqooba uses nproc/2+1
workers in this configuration to safeguard against unintended
hyperthreading effects. It is assumed that the virtual machine is
assigned around 1/2 to 2/3 of the cores of the host machine.

  cd ~/pq/paraqooba-master/build/minimal-benchmark-set
  for d in * ; do cd "$d"; ./run.sh --progress ; cd ..; done

This will take several hours, depending on the CPUs and the assigned number of
cores. We tested this with a VM assigned 20 GB of RAM and 8 cores. The
results of our test run are in the
`~/pq/paraqooba-master/build/minimal-benchmark-set-pregen` directory. If you do
not want to re-run everything, run the following analysis commands in that
directory instead of in `minimal-benchmark-set`.

After completion (or changing into the pregenerated directory), transform the
benchmark data into a database using:

  ~/pq/simsala/generate_sqlite.pl * ../data.db

And plot the speedup table as in the paper:

  sqlite3 ../data.db -table < ~/pq/speedup-pq-against-virtual-portfolio.sql

or as CSV:

  sqlite3 ../data.db -csv < ~/pq/speedup-pq-against-virtual-portfolio.sql

Full Benchmark Set
------------------

To replicate all benchmarks (the full benchmark set), you need machines
with 32 physical cores each and 256 GB of memory. We provide evaluators the
scripts to generate the same `run.sh` scripts as used above, so the full
benchmark set can also be run. For example, to generate a `run.sh` for
paraqooba with depqbf, caqe, and rareqs for all hex formulas, use this:

  ~/pq/simsala/submit.pl --gnuparallel -e /home/tacas23/pq/paraqooba-master/build/paraqs -g "/home/tacas23/pq/qbfeval/bloqqer/hex__*" -n pq --time=3700 --space 200000 --cpus_per_core=32 --runlim ~/pq/runlim --quapisolver ~/pq/caqe ~/pq/rareqs --use-depqbf > run.sh

Afterwards, run it like this:

  chmod +x run.sh
  ./run.sh --progress

Because running everything for an artifact evaluation might not be
possible, we include our original results as databases and their accompanying
scripts to generate figures in the `figures` subdirectory (`~/pq/figures`). If
required, execute the SQL scripts using `sqlite3 < SCRIPTNAME.sql` and plot the
`.gnuplot` files using `gnuplot FILENAME.gnuplot`. The generated `.tex` files
can be converted to PDFs using `pdflatex SOMETHING.tex`, but the texlive
distribution was not included in this artifact to save space.

Appendix
========

Dependencies List
-----------------

If any issues occur, this is the list of packages that must be installed for
the provided scripts to work. It is included for documentation purposes.

  sudo apt-get install libboost-serialization-dev libboost-context-dev libboost-log-dev libboost-regex-dev libboost-program-options-dev libboost-coroutine-dev libboost-chrono-dev libboost-atomic-dev libboost-iostreams-dev libboost-date-time-dev libclass-dbi-sqlite-perl parallel sqlite3 gnuplot

Simsala
-------

The tool that generates the `run.sh` files, Simsala, is also used for
benchmarking on our cluster: http://simsala.pages.sai.jku.at/ as presented
in 2022 at RRRR: https://qcomp.org/rrrr/2022/
