Experimental Results for the PRICAI 2023 Paper "Detecting AI Planning Modelling Mistakes -- Potential Errors and Benchmark Domains"
Description
This repository contains data related to the following paper. For an overview of which domains were tested with which software, please refer to the paper itself:
@InProceedings{Sleath2023PossibleModelingErrors,
author = {Kayleigh Sleath and Pascal Bercher},
title = {Detecting AI Planning Modelling Mistakes -- Potential Errors and Benchmark Domains},
booktitle = {Proceedings of the 20th Pacific Rim International Conference on Artificial Intelligence (PRICAI 2023)},
year = {2023},
publisher = {Springer}
}
Specifically, it contains:
- the benchmarks used in the paper,
- screenshots of the output of the call commands run on the respective software artifacts (only some screenshots also show the actual call commands), and
- Excel tables collecting the results.
Please note:
(1) These benchmark problems are included here only for the sake of completeness and transparency. If you are actually interested in using them, please use the newest version, which might contain corrections and additional test cases. You can find it here: https://github.com/ProfDrChaos/flawedPlanningModels
(2) Please also note that the screenshots of our tests might not perfectly match the folder structure of our benchmarks, because we did some minor restructuring after the paper submission. Specifically, some test cases that we classified as syntactical in the submission were changed into semantic ones for the camera-ready version. (Hence the paths in the screenshots or tables might differ slightly.)
(3) Also note that some of the errors that went undetected by these systems at the time of our evaluation might have been fixed since then. (We included the version numbers of the tested software.)
Files

Name | Size | MD5
---|---|---
empirical_data.zip | 5.3 MB | be7a8b1e122107b1830ff3859dfb0ecd
(unnamed file) | 1.7 kB | d729b250cb80c13e4711005759189811
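
To check that a download is intact, you can compare its MD5 checksum against the values listed above. Below is a minimal sketch in Python, assuming empirical_data.zip sits in the current working directory; the second file is omitted because its name is not preserved in the listing, but you can add it to the dictionary once you know it.

```python
import hashlib

# Expected MD5 checksums, taken from the file listing above.
EXPECTED = {
    "empirical_data.zip": "be7a8b1e122107b1830ff3859dfb0ecd",
}

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in EXPECTED.items():
    actual = md5_of(name)
    status = "OK" if actual == expected else "MISMATCH"
    print(f"{name}: {status} ({actual})")
```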