
Artifact for the OOPSLA'20 paper "Regex Matching with Counting-Set Automata"

Lukáš Holík; Ondřej Lengál; Olli Saarikivi; Lenka Turoňová; Margus Veanes; Tomáš Vojnar


This artifact is intended to be run in the virtual machine Artifact Evaluation VM - Ubuntu 18.04 LTS available at . The recommended virtualization software is VirtualBox (we used version 6.1.12).

Please make sure to have at least 30 GiB of disk space allocated for the VM (the disk image will grow automatically). Be warned that running the (full) experiments on 1 CPU may take on the order of tens of hours and may cause your computer (in particular a laptop) to run hot (and possibly overheat and shut down).

Note: see the file ~/howto_vbox_shared_folder.txt on how to set up a shared folder between the host and the guest OS (it is simple). It can make transferring files from/to the VM easier.

Getting Started

Preparing VM

  1. Download the VM from and import it into VirtualBox. We recommend at least 8 GiB of memory per CPU (4 GiB might also work, though some experiments may then terminate early due to running out of memory). If you allocate more CPUs, the benchmarks will run in parallel. It is also a good idea not to run other demanding tasks on your host OS while the experiments are running; otherwise the two OSes will compete for RAM.
  2. Start the VM, open Terminal (in the left bar), enable the network connection, and download the artifact zip file.


Alternatively:

  1. Start the VM, open Terminal (in the left bar), and mount the shared folder according to ~/howto_vbox_shared_folder.txt.
  2. Copy the artifact zip file from the shared folder to $HOME. Then run the following:
unzip <artifact>.zip
cd <artifact>/

Installing Packages

Go to the root directory of the artifact and run

sudo ./

(the sudo password is "ae")

Take a walk (~20 minutes).

Some issues might be reported while installing some packages (some nasty things happen due to the need to update libc). These issues should not matter, since the installed tools can still be used.

Preparing the Benchmarks

Download the dataset from , unzip it, and copy it to the right location (you may need to enable the network connection).

wget '' -O
mv benchmark-cnt-set-automata/bench/* run/

Kicking the Tires

The following sequence of commands checks that everything is working, runs a small subset of the experiments, and generates a preliminary report.

cd run/
./               (prepares short version of experiments)
(take a walk ~20 mins)
cd ../results
firefox results.html

You should see a web page with incomplete results of the experiments (consider increasing the resolution of the VM).

Step by Step Instructions

Running the Full Experiments

cd run/

Take a long walk (possibly a trip to Paris or any other place that you have always wanted to visit --- this may take a few tens of hours, depending on your setup, so you may even manage to leave the quarantine before the experiments finish ;-) --- seriously, it might take two or three days; you can, however, save the state of the VM and restore it later to continue with the experiments). You can change the timeout in run/ to obtain partial results faster, or remove some lines from run/bench-*.txt.

Processing the Results of Experiments

Before viewing the results, we recommend changing the VM's screen resolution to a higher one.

(in run/)

cd ../results/
firefox results.html

Supported Claims

The artifact reproduces the following parts of the paper:

  1. Fig. 5
  2. Table 1

Since the machine running the artifact will most probably differ from the one we used to run the experiments, the exact times, numbers of timeouts, etc. will most probably differ, but the trends should stay the same.

Extra Notes

Installing Outside of the Provided VM

It should not be difficult to set up the environment on a Linux OS reasonably close to the one in the referenced VM. The needed Linux packages are

mono (version at least 5.*)

Python packages:


R packages:


You can follow the commands in the installation script to see what needs to be done.
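As a quick sanity check before a manual setup, one way (not part of the artifact itself) to verify that the main prerequisites are on the PATH is:

```python
import shutil

# Hypothetical sanity check: report which of the required tools are installed.
for tool in ('mono', 'python3', 'R'):
    path = shutil.which(tool)
    print(f"{tool}: {'found at ' + path if path else 'MISSING'}")
```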

Running Other Experiments

The experiments to run are stored in the run/bench-*.txt files, in a CSV-like format pattern;input-file, where the pattern can use escape characters as used in CSVs (compatible with Python's csv module). If you have a file FILE with your own benchmarks, you can run the following command in the run/ directory:
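To illustrate the bench-file format described above, here is a minimal sketch (the patterns and file names are made up for the example) of reading such lines with Python's csv module, using ';' as the delimiter:

```python
import csv
import io

# Two hypothetical benchmark lines; the second quotes a pattern that
# itself contains the ';' delimiter, CSV-style.
bench = 'a{2,5};inputs/sample.txt\n"x;y";inputs/other.txt\n'

for pattern, input_file in csv.reader(io.StringIO(bench), delimiter=';'):
    print(pattern, '->', input_file)
# a{2,5} -> inputs/sample.txt
# x;y -> inputs/other.txt
```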

cat FILE | ./pycobench -t TIMEOUT -o OUTPUT pattern_match.yaml

where TIMEOUT is the timeout (in seconds) and OUTPUT is a file that logs the results of the experiments. See ./pycobench -h for more details. By default, ./pycobench runs every benchmark (i.e., every line in FILE) with all regex matchers defined in run/pattern_match.yaml (the default definition runs them in a mode where they count the number of matching lines).
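For intuition on what "count the number of matching lines" means, here is a toy sketch in Python (the pattern and input lines are made up; this is not how the matchers in the artifact are implemented):

```python
import re

# Toy input of three lines; the counting regex a{2,3} finds a match
# in lines 1 ('aa') and 3 ('aaa'), so the count of matching lines is 2.
lines = ['aa', 'b', 'aaa']
count = sum(1 for line in lines if re.search(r'a{2,3}', line))
print(count)  # 2
```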

When the command finishes, you need to process the output to collect the runtimes and numbers of matches into a format with a single line for every benchmark, using the following commands:

cat OUTPUT | ./ | ./ > results.csv

You can import the resulting CSV file into a spreadsheet editor. Note that there might be problems with delimiters (such as ";" occurring in the regexes), so you might first want to sanitize the CSV to get rid of the regexes using the ./ script.
