Published January 4, 2021 | Version v5

Resolvable Ambiguity - CC Artifact

  • KTH Royal Institute of Technology


This Docker image contains the software required to reproduce the results in the paper "Resolvable Ambiguity: Principled Resolution of Syntactically Ambiguous Programs", to be published at the ACM SIGPLAN 2021 International Conference on Compiler Construction (CC 2021).

Note that since there is an element of randomness in how language fragments are composed, we include the exact configuration used for Section 6, but also allow generating new random combinations. For completeness, we also include the source code of our tool; as it is not the main focus, it will not be as approachable as running the experiments.

Additionally, the supplementary material mentioned in the paper, covering the trickier cases of constructing AST-languages, is available as a PDF below.


Docker is required to run this artifact; see the official Docker documentation for installation instructions.


The image contains the implementation source code (under /home/syncon/syncon-parser) and a pre-built binary (available on the PATH as syncon-parser), as well as the data used in the paper and the runner used to produce it (under /home/data). This includes the data generated by the experiment, i.e., it is possible to regenerate exactly the figures used in the paper.

For additional information, there are README files in the image under /home/data, /home/syncon, and /home/syncon/syncon-parser.

Running the Container

To run the container, ensure that you have Docker installed (see above), then run:

docker load --input resolvable-image.tar.gz
docker run -p 8888:8888 -it --name resolvable-container resolvable-image

If you later wish to start the container again, you can use:

docker start -ia resolvable-container
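If you prefer to inspect logs or regenerated figures on the host rather than inside the container, the standard 'docker cp' command can copy them out. This is a sketch, not part of the artifact's documented workflow; the host destination './resolvable-results' is an arbitrary example name:

```shell
# Copy the data directory (including any regenerated figures and logs)
# out of the container to the host. Works whether the container is
# running or stopped. './resolvable-results' is a hypothetical
# destination directory on the host, not part of the artifact.
docker cp resolvable-container:/home/data ./resolvable-results
```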

Inside the Container - Running the Experiments

To reproduce the data in the paper, run the following commands inside the container. Note that the command prints the directory in which the logs are placed (once when it starts and once when it finishes); take note of this directory.

cd /home/data/fragments
runner reclassify-paper  # Rerun the experiment on the languages examined in the paper.
                         # This has been tested on Linux and Mac, and takes ~4h on a
                         # laptop with an Intel Core i7-8550U (4 cores) and 16 GiB RAM.

To instead run a new experiment with newly generated language compositions:

cd /home/data/fragments
runner classify-many     # Run the experiment with a new set of generated languages

Inside the Container - Examining the Results

cd /home/data
runner jupyter           # Open the Jupyter notebook used to analyze the data and generate
                         # graphs. Copy the link that is printed and open it in your
                         # web browser; the 'docker run' command above exposes the
                         # port outside the container.

Once inside Jupyter in your web browser, open 'Analysis.ipynb' in the file browser (top left), scroll down, and follow the instructions to add an analysis of the new run. Remember to rerun everything to see the new results (in the menu, 'Run > Run All Cells').

The Docker image contains all the data from the runs used in the paper, so the exact same data and graphs are available for comparison inside the Jupyter notebook. Since our approach is based on property-based testing, which includes an element of randomness, we expect to see some variance in the classifications of the languages (see "Classification Comparisons" in the notebook). We also expect some variance in runtimes, partly due to potential hardware differences but also due to said randomness. In particular, better hardware may lead to more "ambiguous" classifications, as some additional analyses have time to finish before the timeout.

However, the rough shape of the graphs should be similar.




Files (397.9 MB)
