Load the file provided via docker load < 84.tar.gz. This may take some time, as Docker will unpack the gzipped archive before loading the image. The Docker image can now be started as with any other Docker image; for example, docker run -it 84 bin/bash will start an interactive terminal.
This image comes with Granule 0.8.2.0 pre-installed, so no additional software needs to be downloaded or installed. The source code files for the paper are in the granule directory: examples.gr (which contains the various examples used throughout the paper) and benchUnique.gr/benchNonUnique.gr (for running the benchmarks to replicate our evaluation).
Once inside the Docker container, change directory via cd granule. To test that the Granule installation works correctly, the collection of examples can be type checked and run via gr examples.gr. A successful run of this file, printing Ok, evaluating... followed by the unit value (), should suffice to demonstrate that no technical issues have occurred in setting up the Docker image.
The reviewer should feel free to modify this file to experiment with the various ideas from the paper as they follow along; the text editor nano is installed, and other editors can be installed via apt if preferred. To experiment with the code more interactively, grepl will load Granule's interactive environment, which behaves in much the same way as ghci for those who are familiar with it. For instance, :l examples.gr will load the file in the usual way. The reviewer can then, for example, inspect the types of functions via :t, e.g.
$:/granule# grepl
Welcome to Granule interactive mode (grepl). Version 0.8.2.0
Granule> :l examples.gr
granule/examples.gr, checked.
Granule> firstChar
'#'
Granule> :t uniqueReturn
uniqueReturn : forall {a : Type} . *a -> !a
Granule> :t uniqueBind
uniqueBind : forall {a : Type, b : Type} . (*a -> !b) -> !a -> !b
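For readers more familiar with Haskell than Granule, the two types above can be mimicked by a rough sketch in plain Haskell. This is purely illustrative: the Unique and Many wrappers below are our own stand-ins for Granule's *a and !a modalities, not how Granule implements them, and linearity is not actually enforced here.

```haskell
-- Hypothetical Haskell analogue of the Granule types shown above.
-- Unique a stands in for *a (a uniquely-owned value);
-- Many a stands in for !a (a freely duplicable value).
newtype Unique a = Unique a deriving (Eq, Show)
newtype Many a = Many a deriving (Eq, Show)

-- Analogue of uniqueReturn : *a -> !a (the Borrow rule):
-- a unique value can always be relinquished and shared.
uniqueReturn :: Unique a -> Many a
uniqueReturn (Unique x) = Many x

-- Analogue of uniqueBind : (*a -> !b) -> !a -> !b (the Copy rule):
-- a shareable value can be copied to obtain a fresh unique one.
uniqueBind :: (Unique a -> Many b) -> Many a -> Many b
uniqueBind f (Many x) = f (Unique x)

main :: IO ()
main = print (uniqueBind (\(Unique n) -> Many (n + 1)) (Many (41 :: Int)))
-- prints Many 42
```

In Granule itself, the type system guarantees that a Unique value is never aliased; the sketch above shows only the plumbing of the two primitives, not that guarantee.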
To quit the interactive environment, use :q, as in ghci.
You can follow the paper, and all Granule examples are included in examples.gr with comments pointing to where in the paper they appear and an example of how to run each example in grepl.
A copy of the examples.gr file is also available here: https://gist.github.com/dorchard/4b0df8e3369e8b7a04fbf4097f0d23ec as it may be easier to follow the file in an external editor and run examples via grepl in the Docker container.
You are welcome to examine the Granule code in depth in order to see how the calculus from the paper is implemented, though this is not necessary for replicating our results. Some files of particular interest may be:
granule/frontend/src/Language/Granule/Checker/Primitives.hs - here we define the Guarantee kind and the Unique guarantee as the required third flavour of modality for representing uniqueness, the primitives uniqueReturn and uniqueBind which map onto the Borrow and Copy rules from LCU, and also the various primitives for handling mutable/immutable arrays (newFloatArray, readFloatArrayI, etc.)
granule/frontend/src/Language/Granule/Syntax/Parser.y - here we define the various pieces of syntactic sugar that allow the programmer to work with uniqueness more easily, including *a to represent a unique value of type a, & for borrowing, clone for copying, and # for necessitation.
granule/interpreter/src/Language/Granule/Interpreter/Eval.hs - this is where the evaluation of the various primitives happens. If you are interested, you can follow the various types through to the typechecker to see how, for example, unique values or non-linear values are represented inside Granule.
granule/runtime/src/Language/Granule/Runtime.hs - once the Granule code is compiled to Haskell, this file shows how array operations are handled for both mutable and immutable arrays. Notice that immutable arrays are represented as ordinary Haskell arrays, while mutable arrays are represented as PointerArrays and can safely be updated destructively as described in the paper.
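The distinction this file captures can be illustrated in miniature with GHC's standard array library. The sketch below is our own illustration, not code from Runtime.hs: an immutable update with // allocates a whole fresh array and leaves the old one for the garbage collector, while a mutable IOUArray is overwritten in place.

```haskell
import Data.Array (Array, elems, listArray, (//))
import Data.Array.IO (IOUArray, getElems, newListArray, writeArray)

main :: IO ()
main = do
  -- Immutable update: (//) copies the whole array; the original
  -- is unchanged and becomes garbage once no longer referenced.
  let immut  = listArray (0, 2) [1.0, 2.0, 3.0] :: Array Int Double
      immut' = immut // [(1, 42.0)]
  print (elems immut')   -- [1.0,42.0,3.0]

  -- Mutable update: the cell is overwritten in place, so no new
  -- array is allocated and there is no extra work for the GC.
  mut <- newListArray (0, 2) [1.0, 2.0, 3.0] :: IO (IOUArray Int Double)
  writeArray mut 1 42.0
  print =<< getElems mut -- [1.0,42.0,3.0]
```

Uniqueness is what lets Granule offer the second, destructive behaviour behind a pure interface: a unique array has no other observers, so updating it in place is unobservable.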
To replicate the performance results reported in Figure 3 of the paper that compares overall runtime and GC overhead of unique and non-unique arrays, we have included the Granule code and the following instructions. Timing results from individual benchmark runs are obtained using GHC's RTS options.
Each individual run of the benchmark should not take more than a few seconds.
To compile the benchmark code, run all the benchmarks, and generate data (corresponding to the plot in Figure 3), first make sure you are in the granule directory, then run the runbench.sh script.
$:/granule# ./runbench.sh
This script compiles all the benchmarking code and runs the benchmarks. You should see something like the following in your terminal:
*** Compiling
Checking ./benchUnique.gr...
Ok, compiling...
Writing ./benchUnique.hs
[1 of 1] Compiling Main ( benchUnique.hs, benchUnique.o )
Linking benchUnique ...
Checking ./benchNonUnique.gr...
Ok, compiling...
Writing ./benchNonUnique.hs
[1 of 1] Compiling Main ( benchNonUnique.hs, benchNonUnique.o )
Linking benchNonUnique ...
*** Running benchmarks
Overall runtime (1/2)
# non-unique unique
100 138.000 89.000
200 281.000 173.000
300 414.000 264.000
400 572.000 347.000
500 706.000 434.000
GC overhead (2/2)
# non-unique unique
100 37.000 6.000
200 70.000 11.000
300 107.000 15.000
400 148.000 20.000
500 180.000 26.000
You can inspect and/or edit the contents of benchUnique.gr and benchNonUnique.gr if you like. Use the following steps to manually re-compile and run the benchmarks with a specific iteration count:
1. Make sure you are in the granule directory.
2. Compile the benchmark file into a Haskell module using the Granule-to-Haskell compiler: grc ./benchUnique.gr
3. Compile the resulting Haskell code to native code with GHC: stack exec -- ghc ./benchUnique.hs
4. Run the resulting benchmark (where ITER is the iteration count, supplied on standard input): echo ITER | ./benchUnique +RTS -s
This should print some data to the terminal about various details of memory usage and elapsed time. In particular, we are interested in the Total time and the GC time.
For example, for 100 iterations and unique arrays, you might get an output like this:
INIT time 0.000s ( 0.006s elapsed)
MUT time 0.067s ( 0.067s elapsed)
GC time 0.002s ( 0.002s elapsed)
EXIT time 0.000s ( 0.003s elapsed)
Total time 0.069s ( 0.078s elapsed)
The exact numbers will vary on different machines and across multiple runs. In general, however, you should be able to confirm that the version with unique arrays has shorter total time overall and significantly less GC time than the non-unique version.
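If you end up collecting many such runs, the two lines of interest can be extracted mechanically. The following is a small sketch of our own (not part of the artifact) that filters an +RTS -s summary down to its GC and Total lines, applied here to the sample output above:

```haskell
import Data.List (isPrefixOf)

-- Keep only the "GC" and "Total" lines from an +RTS -s summary.
interesting :: String -> [String]
interesting = filter wanted . map (dropWhile (== ' ')) . lines
  where wanted l = any (`isPrefixOf` l) ["GC", "Total"]

main :: IO ()
main = mapM_ putStrLn (interesting sample)
  where
    -- Sample +RTS -s output, as shown in the text above.
    sample = unlines
      [ "  INIT    time    0.000s  (  0.006s elapsed)"
      , "  MUT     time    0.067s  (  0.067s elapsed)"
      , "  GC      time    0.002s  (  0.002s elapsed)"
      , "  EXIT    time    0.000s  (  0.003s elapsed)"
      , "  Total   time    0.069s  (  0.078s elapsed)"
      ]
```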
Here we include the data used to generate the plots in Figure 3. The numbers are times in milliseconds, which you can compare with the results you get by running the benchmarks on your machine.
First, the "Overall runtime" from Fig. 3:
# non-unique unique
100 145 78
200 290 160
300 404 223
400 533 283
500 679 357
Second, the "GC overhead" from Fig. 3:
# non-unique unique
100 42 6
200 82 11
300 116 16
400 155 20
500 196 26
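Since absolute times vary across machines, it may be easier to compare your own measurements against these reference numbers as ratios. The following sketch (our own, hard-coding the Fig. 3 data above) computes the non-unique/unique ratio for each row:

```haskell
-- Fig. 3 reference data: (iterations, non-unique ms, unique ms).
overall, gcOverhead :: [(Int, Double, Double)]
overall    = [(100,145,78),(200,290,160),(300,404,223),(400,533,283),(500,679,357)]
gcOverhead = [(100,42,6),(200,82,11),(300,116,16),(400,155,20),(500,196,26)]

-- Ratio of non-unique to unique time for one row.
ratio :: (Int, Double, Double) -> (Int, Double)
ratio (n, nonUnique, unique) = (n, nonUnique / unique)

main :: IO ()
main = do
  putStrLn "Overall speedup (non-unique / unique):"
  mapM_ (print . ratio) overall     -- roughly 1.8-1.9x at every size
  putStrLn "GC time ratio (non-unique / unique):"
  mapM_ (print . ratio) gcOverhead  -- roughly 7x or more at every size
```

Your absolute numbers will differ, but the ratios should be in the same ballpark: unique arrays roughly halve the total runtime and cut GC time by around a factor of seven.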