CAV 2021 Artifact Evaluation
============================

This archive contains an instance of LabPal, an environment for running
experiments on a computer and collecting their results in a user-friendly way.
The author of this archive has set up a set of experiments, which typically
involve running scripts on input data, processing their results and displaying
them in tables and plots. LabPal is a library that wraps around these
experiments and displays them in an easy-to-use web interface.

The principle behind LabPal is that all the necessary code, libraries and
input data should be bundled within a single self-contained JAR file, such
that anyone can download and easily reproduce someone else's experiments.

All the plots and other data values mentioned in the paper are automatically
generated by the execution of this lab. The lab also provides additional
tables and plots that could not fit into the manuscript.

Detailed instructions can be found on the LabPal website:
https://liflab.github.io/labpal

Running LabPal
--------------

To start the lab, open a terminal window and type at the command line:

    java -jar petitpoucet-circuit-lab.jar --autostart

You should see something like this:

    LabPal 2.8 - A versatile environment for running experiments
    (C) 2014-2017 Laboratoire d'informatique formelle
    Université du Québec à Chicoutimi, Canada
    Please visit http://localhost:21212/index to run this lab
    Hit Ctrl+C in this window to stop

Open the web browser (either Midori or TazWeb, found in the "Internet" menu)
and type `http://localhost:21212/index` in the address bar. This should lead
you to the main page of LabPal's web control panel.

Using the web interface
-----------------------

A detailed explanation of how to use the LabPal web interface can be found in
this YouTube video: https://www.youtube.com/watch?v=5uL7i6SytyM

A lab is made of a set of *experiments*, each corresponding to a specific set
of instructions that runs and generates a subset of all the benchmark's
results. Results from experiments are collected and processed into various
auto-generated tables and plots.

The lab is instructed to immediately start running all the experiments it
contains. You can follow the progress of these experiments by going to the
Status page and refreshing it periodically.

At any point, you can look at the results of the experiments that have run so
far. You can do so by:

- Going to the Plots page (5th button in the top menu) or the Tables page
  (6th button) and seeing the plots and tables created for this lab being
  updated in real time
- Going back to the list of experiments, clicking on one of them and getting
  the detailed description and data points that this experiment has generated

Once the assistant has finished running the experiments, you can export any
of the plots and tables to a file, or the raw data points, by using the
Export button on the Status page.
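If you are curious about what an *experiment* corresponds to in code, the
following is a minimal sketch based on LabPal's documented 2.x API. The class
names `ExampleLab` and `ExampleExperiment` and the data point `"value"` are
made up for illustration; the classes actually used by this artifact are
listed under "Inspecting source code" below.

    import ca.uqac.lif.labpal.Experiment;
    import ca.uqac.lif.labpal.ExperimentException;
    import ca.uqac.lif.labpal.Laboratory;

    // A lab groups experiments; running its main() starts the web console
    public class ExampleLab extends Laboratory
    {
      @Override
      public void setup()
      {
        // Instantiate experiments (and, in a real lab, tables and plots)
        // and register them with the lab
        add(new ExampleExperiment());
      }

      public static void main(String[] args)
      {
        initialize(args, ExampleLab.class);
      }
    }

    // An experiment runs a specific set of instructions and records named
    // data points; tables and plots are built from these key-value pairs
    class ExampleExperiment extends Experiment
    {
      @Override
      public void execute() throws ExperimentException
      {
        write("value", 42);
      }
    }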
Comparing results from the paper
--------------------------------

An interesting feature of LabPal, described in this other YouTube video
(https://www.youtube.com/watch?v=StXflS52h4s), is that it exports its results
directly into a research paper. If you look at the PDF of the paper, you will
see that the plots and some other elements in the text are hyperlinks. These
links can be used to fetch the corresponding plot or data element inside the
running LabPal instance.

For example, locate Figure 6 and hover your mouse over it. You should see
that this plot has a hyperlink with text "P4.0". Copy that link, then go to
the LabPal console in the browser and click on the "Find" button (rightmost
button in the top bar). Paste the text "P4.0" in the search bar and click on
"Find". You should be taken directly to the plot that corresponds to Figure 6
in the paper, and you can visually compare the two. (Make sure that the lab
has finished running before making this comparison; otherwise you will see a
partial plot with whatever results have been generated so far.)

You can look for other elements in the paper with similar hyperlinks. For
example, on page 18, the value "15 MB" is also a hyperlink ("M8.0"). This
corresponds to a value that was computed in the lab and inserted directly
into the text in the form of a macro. Search for M8.0 in the lab using the
same technique as above; you should be taken to the Macros page, where the
value that appears in the paper is highlighted (and should hopefully be the
same!). All parts of the paper that refer to experimental data are linked to
the lab in this way, making it possible to cross-check all the claims
referring to this data.

Inspecting source code
----------------------

The source code for the lab *and* the source code of the underlying library
that is being benchmarked have both been copied into the home folder, as
`petitpoucet-circuit-lab` and `petitpoucet` respectively. Both are direct
images of the corresponding GitHub repositories:

- https://github.com/liflab/petitpoucet-circuit-lab
- https://github.com/liflab/petitpoucet

You may find it more convenient to browse the code there instead of inside
the VM. The features of most interest in this code are the following:

- `petitpoucet-circuit-lab/Source/src/circuitlab/MainLab.java`: the main
  class that runs the whole benchmark. This class instantiates all the
  experiments, the tables and the plots that are generated from them, as well
  as all the macros that compute values from these results.
- `petitpoucet-circuit-lab/Source/src/circuitlab/circuits`: this folder
  contains the definition of the four functions that are used in the
  benchmark and mentioned in the paper.
- `petitpoucet-circuit-lab/Source/src/circuitlab/CircuitExperiment.java`: the
  class that actually performs the experiments. Its method execute() contains
  the instructions that evaluate a function, measure memory consumption and
  execution time, etc. (a rough sketch of this pattern appears at the end of
  this document).
- `petitpoucet/Source/src/examples`: a set of other examples of simple
  functions (not included in the paper) that showcase the use of the
  PetitPoucet library.

Both repos have Ant build scripts, but the VM provided for artifact
evaluation is not set up to build the benchmark from sources (it is missing
Ant and a Java compiler). You should have no trouble building it on your own
machine, provided you have Ant and a recent JDK.

2021-04-24
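To help you find your way around CircuitExperiment.java, here is a minimal
sketch of the pattern a LabPal experiment typically follows to measure
running time and memory consumption. It is not the actual code of the lab:
the class name, the data point names ("time", "memory") and the evaluation
step are placeholders.

    import ca.uqac.lif.labpal.Experiment;
    import ca.uqac.lif.labpal.ExperimentException;

    // Illustrative sketch only; the real measurement code is in
    // petitpoucet-circuit-lab/Source/src/circuitlab/CircuitExperiment.java
    class MeasurementSketch extends Experiment
    {
      @Override
      public void execute() throws ExperimentException
      {
        Runtime rt = Runtime.getRuntime();
        long mem_before = rt.totalMemory() - rt.freeMemory();
        long time_start = System.currentTimeMillis();
        // ... build and evaluate the function under test here ...
        long duration = System.currentTimeMillis() - time_start;
        long mem_after = rt.totalMemory() - rt.freeMemory();
        write("time", duration);                 // running time in milliseconds
        write("memory", mem_after - mem_before); // memory consumption in bytes
      }
    }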