Quick setup guide for evaluating the RESL submission and understanding its components.
username: resl password: reslresl
Run bench. This will run 4 benchmarks from each of the empirical benchmark sets, and will take approximately 15 minutes.
Run resl to start the synthesis engine.
Open http://localhost:3000.
Add the examples Input: 420, Output: ['4','42','420'] and Input: 2020, Output: ['2','20','202','2020'].
Enter [1,input.toString().length].map(i => input.toString().slice(0,i)) in the text bar at the bottom and hit enter.
Select [1,input.toString().length], right-click, and choose "fix this" to make a hole.
Select input.toString().length, right-click, and choose "retain" to retain this subexpression.
Press the Synthesize button at the bottom right corner, and wait for the result to be displayed.

List of claims in the paper supported by the artifact:
List of claims not supported by the artifact:
Note: this VM runs the single-threaded version of the synthesizer rather than the multi-threaded version that ran on AWS. This should not affect the results of Section 8, as they do not include running times, but it may cause RESL, when used through the Arena, to fail to synthesize some of the more complex programs.
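A side note on the Arena walkthrough above: the program entered in the text bar does not yet satisfy the two examples, which is why a hole is needed. This can be checked outside the Arena with plain Node.js (nothing here depends on RESL itself; "program" is just a local name for illustration):

```javascript
// The initial program from the walkthrough, wrapped in a function for testing.
const program = (input) =>
  [1, input.toString().length].map((i) => input.toString().slice(0, i));

console.log(program(420));  // [ '4', '420' ]   -- '42' is missing
console.log(program(2020)); // [ '2', '2020' ]  -- '20' and '202' are missing
```

Only the first character and the full string are produced, not every prefix; marking [1,input.toString().length] as a hole asks the synthesizer to find an expression that yields all prefix lengths instead.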
From the directory ~/Desktop/, run benchpred.
The script will output Table 1 in CSV format, excluding columns 2 (difficulty) and 3 (completed online), which originate from each task's webpage. In column 4 (h(r)) of Table 1, the height of programs is reported as zero-based (i.e., the height of a tree consisting of only a literal is 0), whereas in our codebase the height is one-based (the height of a literal is 1). This means that 1 must be subtracted from the values benchpred outputs in this column to match the values in Table 1.
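The two height conventions can be contrasted with a small sketch (the tree encoding here is hypothetical and serves only to illustrate the off-by-one):

```javascript
// Hypothetical AST nodes: a lone literal, and a node such as 1 + 2.
const literal = { children: [] };
const plus = { children: [literal, literal] };

// One-based height, as in the codebase: a lone literal has height 1.
function height(node) {
  return 1 + Math.max(0, ...node.children.map(height));
}

console.log(height(literal) - 1); // 0 -- zero-based, as reported in Table 1
console.log(height(plus) - 1);    // 1
```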
From the directory ~/Desktop/, run benchsketch.
The following sample output demonstrates how to compare these results to Table 2:
plusOneTimesTen
new context size=2
|E|=3
|ext(E)|=9
input.map((e,i) => 10 + (10 * e))
outside equiv classes=412
inside equiv classes=412
#subtrees=5
discarded subtrees={}
all subtrees={10,e,10 * e}
-----
10 + (10 * e)
Column 4 of the table, terms(res), is derived from this subexpression: it counts the number of nodes in its AST, in this case 5 (10, ?+?, 10, ?*?, e). The structure of the tree shown in the last column is derived from this subexpression as well.

You can use the RESL Arena (section 5 in the paper) to run synthesis queries. For example, you can try the tasks from the user study (section 9).
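The node count behind terms(res) can be reproduced with a small sketch (the AST encoding below is hypothetical, chosen only to illustrate the count of 5 for 10 + (10 * e)):

```javascript
// Hypothetical AST for the subexpression 10 + (10 * e).
const ast = {
  op: '+',
  children: [
    { op: '10', children: [] },
    {
      op: '*',
      children: [{ op: '10', children: [] }, { op: 'e', children: [] }],
    },
  ],
};

// Count every node in the tree.
function countNodes(node) {
  return 1 + node.children.reduce((n, c) => n + countNodes(c), 0);
}

console.log(countNodes(ast)); // 5: +, 10, *, 10, e
```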
To run RESL, open an interactive shell and use the following commands:
resl
You can try to solve the programming tasks we introduced in our user study:
To run all benchmarks (~90 minutes):
benchall
To run Predicate / Sketch benchmark only (~45 minutes each):
benchpred
benchsketch
The code for the synthesis engine is in the ~/Desktop/js-gentest
directory. It is built with sbt. Below is a brief overview of the components.
To recompile the project's sources:
rebuild
To run the project's unit tests after modifications:
cd ~/Desktop/js-gentest; sbt test
RESL consists of two parts:
js-gentest: Synthesis logic, including benchmark calculation and synthesis server.
The code resides in ~/Desktop/js-gentest/src/main/scala
.
Synthesis server's main function is in ~/Desktop/js-gentest/src/main/scala/Server/SynthesisServer.scala
.
Predicate benchmark's main function is in ~/Desktop/js-gentest/src/test/scala/Benchmark/PredicateBenchmarkDriver.scala
.
Sketch benchmark's main function is in ~/Desktop/js-gentest/src/test/scala/Benchmark/SketchBenchmarksDriver.scala
.
resl-ui: Web interface for the RESL Arena.
The code resides in ~/Desktop/resl-ui/{public,src}
.
Presented page's HTML is in ~/Desktop/resl-ui/public/index.html
.
RESL's functionality is in ~/Desktop/resl-ui/src/index.js
.