______________________________
   LEARNING LIBRARY ARTIFACT

        Bharat Garhewal
______________________________


Table of Contents
_________________

1. Setup
2. What does this artifact contain?
3. Running experiments


1 Setup
=======

Please unzip the zip file in the home directory; to be specific, the
"docker_sub" directory should be located directly under
"/home/tacas22" (i.e. at "/home/tacas22/docker_sub"), not at
"~/artifact/docker_sub". Then run the following:

,----
| sudo dpkg -i /home/tacas22/docker_sub/packages/ubuntu_apps/*.deb
| pip3 install ~/docker_sub/packages/python_deps/*
`----


2 What does this artifact contain?
==================================

1. The lsharp and learnlib executables,
2. the experiment models (in the `experiment_models` directory,
   mentioned later),
3. the scripts used to execute the benchmarks and plot the results,
4. the source code (in the `source_code` directory).

Unfortunately, we could not compile the program inside the VM, as
Ubuntu 20.04 does not provide the latest stable Rust compiler
(v1.56); we have therefore built the program ourselves and provided
the resulting binary instead. We do, however, provide the source code
in the `source_code` directory. Note that compiling it requires an
internet connection, which is not needed for running the experiments;
a sketch of the expected build steps is given at the end of this
document.


3 Running experiments
=====================

Run the following:

,----
| cd ~/docker_sub/automata-lib
| chmod +x run_me.sh
| ./run_me.sh NUM-TO-REPEAT   # "./run_me.sh 2" runs each experiment twice.
`----

That is, you run the experiments with the script
"./run_me.sh NUM-TO-REPEAT". By default, the artifact repeats each
experiment 10 times; in the paper we ran each experiment 100 times,
but that might take too long here. At a minimum, we recommend
repeating each experiment twice (i.e., supplying 2 as the argument):
if the experiments are not repeated, there is only one sample per
model/algorithm combination, and the standard deviation is then zero.

When the script finishes, two PDF plots are generated in the current
directory; they should follow the same trend as the plots in the
paper. The file "Total Learning.pdf" corresponds to Fig. 4(a) on page
15, and "Total.pdf" corresponds to Fig. 4(b) on the same page. Due to
the random component in equivalence checking, some variation in the
results is expected.
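
As a quick sanity check that the run completed, the following snippet
lists the two expected plots. This is only a minimal sketch: the file
names are taken from the description above, and the rest is plain
shell run from the same directory as run_me.sh.

,----
| # Sketch: check that both expected plots were produced by run_me.sh.
| # Assumes you are still in ~/docker_sub/automata-lib.
| for f in "Total Learning.pdf" "Total.pdf"; do
|     if [ -f "$f" ]; then
|         echo "OK: $f"
|     else
|         echo "MISSING: $f"
|     fi
| done
`----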
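
For completeness, here is a hedged sketch of how the contents of the
`source_code` directory could be built on a machine with an internet
connection. It assumes the program is a standard Cargo project and
that a Rust toolchain of at least v1.56 is available (for example via
rustup); the actual crate layout and build steps may differ.

,----
| # Sketch only: build the learner from source (requires internet access).
| # Assumes a standard Cargo layout inside source_code and a Rust
| # toolchain >= 1.56 (e.g. installed via rustup).
| cd ~/docker_sub/source_code
| cargo build --release   # the binary is then placed under target/release/
`----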