Getting Research Software to Work: A Case Study on Artifact Evaluation for OOPSLA 2019
Description
Due to new peer-review programs, researchers in certain fields can now receive badges on their papers that reward them for writing functional and reusable research code. These badges in turn make their research more attractive for others to cite and build upon. Unfortunately, some submissions to these new programs do not pass even the lowest bar, and many are difficult for reviewers to simply set up and test. To understand how to improve submissions and how to help researchers gain badges, we studied the artifact evaluation process of OOPSLA 2019, an ACM conference on the analysis and design of computer programs. Based on reviewer experiences, we highlight best practices, and we discuss whether guidelines, tools, or larger cooperative efforts are required to achieve them. To conclude, we present ongoing and future work that helps researchers share and use research code.
Files
accpub-OOPSLA2019-licensed.pdf (130.5 kB)
md5:ed09ff3943e088d40c3548b56d93976b