Sponsors and supporters
Feel free to get in touch if you would like to sponsor this important non-profit community service (evaluation, prizes, development of supporting tools)!
Upcoming Artifact Evaluations (AE) and related events
Our initiatives to unify and automate AE
Recently completed Artifact Evaluation
- IA3 2018 at SC 2018.
- ACM ReQuEST at ASPLOS 2018 - the 1st open tournament on reproducible and Pareto-efficient SW/HW co-design of deep learning (speed, accuracy, energy, costs) with automated Artifact Evaluation. Final results are available at the live ReQuEST scoreboard; see accepted artifacts here.
- PPoPP 2018 (using the new ACM Artifact Review and Badging policy, which we co-authored last year) - see accepted artifacts here.
- CGO 2018 (using the new ACM Artifact Review and Badging policy, which we co-authored last year) - see accepted artifacts here.
- SC 2017 (based on our AE appendix).
- IA3 2017 at SC 2017. dividiti award for a portable and reusable artifact shared in the Collective Knowledge Format: "Optimizing Word2Vec Performance on Multicore Systems", Vasudevan Rengasamy, Tao-Yang Fu, Wang-Chien Lee and Kamesh Madduri [ CK workflow at GitHub ].
- CGO 2017 - see accepted artifacts here. Distinguished artifact, implemented using the Collective Knowledge Framework (a reusable and customizable workflow with automatic cross-platform software installation and a web-based experimental dashboard): "Software Prefetching for Indirect Memory Accesses", Sam Ainsworth and Timothy M. Jones [ GitHub, paper with AE appendix and CK workflow, PDF snapshot of the interactive CK dashboard, CK concepts ] ($500 from dividiti).
- PACT 2017 - see accepted artifacts here.
- PPoPP 2017 - see accepted artifacts here. Distinguished artifact: "Understanding the GPU Microarchitecture to Achieve Bare-Metal Performance Tuning", Xiuxia Zhang, Guangming Tan, Shuangbai Xue, Jiajia Li and Mingyu Chen (NVIDIA Pascal Titan X GPGPU card presented by Steve Keckler from NVIDIA).
- PACT 2016 - see accepted artifacts here. Highest-ranked artifact: "Fusion of Parallel Array Operations", Mads R. B. Kristensen, James Avery, Simon Andreas Frimann Lund and Troels Blum (NVIDIA GPGPU).
- ADAPT 2016 - see open reviewing and discussions via Reddit, all accepted artifacts validated by the community here, and the motivation for our open reviewing and publication model. Highest-ranked artifact with a customizable workflow implemented using CK: "Integrating a large-scale testing campaign in the CK framework", Andrei Lascu and Alastair F. Donaldson (award by dividiti).

[ See all prior AE here ]
Recent events
- August 2018 - We are organizing a dedicated workshop on customizable, portable and reproducible workflows for HPC (ResCuE-HPC) at Supercomputing'18.
- June 2018 - The proceedings, CK workflows and report from the 1st ACM ReQuEST-ASPLOS'18 tournament to co-design an efficient SW/HW stack for deep learning are now available online: ACM DL, report, CK workflows.
- May 2018 - We opened a dedicated AE Google group with many past AE chairs and organizers.
- January 2018 - See the interactive report about a CK workflow for collaborative research into multi-objective autotuning and machine learning techniques (funded by the Raspberry Pi Foundation).
- November 2017 - Our 1st open tournament on reproducible AI/SW/HW co-design (speed, accuracy, costs) was accepted for ASPLOS'18!
- November 2017 - We started sharing various reusable AI artifacts for TensorFlow, Caffe, Caffe2 and MXNet in the CK format with a JSON API to help researchers prototype AI-related workflows and optimize them across diverse platforms [ browse ].
- October 2017 - The following artifact and workflow from the IA3 2017 workshop (co-located with SuperComputing'17) were shared as customizable and reusable components in the Collective Knowledge Format.
- 14 April 2017 - We synchronized our submission and reviewing guides with the new ACM policy, which we co-authored in 2016.
- 19 February 2017 - Notes (slides) from the CGO/PPoPP'17 AE discussion session on how to improve and scale future AE are available here.
- 6 February 2017 - CGO-PPoPP'17 discussion session agenda.
- 2 February 2017 - We started preparing Caffe (a deep learning framework) for community-driven optimization across Linux, Windows and Android platforms: wiki.
- 1 February 2017 - Michel Steuwer (University of Edinburgh) blogged about CK concepts.
- 18 November 2016 - PACT'17 AE announced.
- 14 September 2016 - Collective Knowledge V1.8.1 was released. Please check the new documentation ("Getting Started Guide" and "Portable Workflows") to learn how you can share your artifacts, workflows and results as reusable components with a unified JSON API, meta description and UID (see how General Motors uses CK to collaboratively benchmark Caffe, or how ARM uses CK to crowdsource optimization of realistic workloads)!
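To make the idea of reusable components with a unified JSON API, meta description and UID concrete, here is a minimal Python sketch. All field and function names below are invented for illustration and are not CK's actual schema or API (please consult the CK documentation for the real one); the point is simply that each component is a directory holding the artifact plus a small JSON meta file with a UID and tags, so any tool can discover it uniformly.

```python
import json
import pathlib
import tempfile
import uuid

# Hypothetical sketch of a CK-style component: an artifact directory with a
# JSON meta description carrying a UID and tags. Field names ("data_uid",
# "data_name", "tags") are illustrative only, not the real CK schema.

def create_component(root, name, tags):
    """Create a component directory with a meta.json describing it."""
    comp = pathlib.Path(root) / name
    comp.mkdir(parents=True)
    meta = {"data_uid": uuid.uuid4().hex[:16], "data_name": name, "tags": tags}
    (comp / "meta.json").write_text(json.dumps(meta, indent=2))
    return meta

def find_by_tag(root, tag):
    """Uniform discovery: scan all meta.json files and match on tags."""
    return [json.loads(p.read_text())
            for p in pathlib.Path(root).glob("*/meta.json")
            if tag in json.loads(p.read_text())["tags"]]

with tempfile.TemporaryDirectory() as repo:
    create_component(repo, "word2vec-bench", ["benchmark", "multicore"])
    create_component(repo, "caffe-model", ["ai", "dnn"])
    print([m["data_name"] for m in find_by_tag(repo, "benchmark")])
    # -> ['word2vec-bench']
```

Because every component carries the same meta structure, tools for search, benchmarking or autotuning can operate on any shared artifact without per-artifact glue code.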
- 9 August 2016 - PACT'16 artifact evaluation was successfully completed!
- 10 July 2016 - A new Android app for crowdsourcing experiments was released!
- 4 May 2016 - We participated in the ACM workshop on reproducibility to unify artifact evaluation across various SIGs, and updated our artifact submission/reviewing procedures! We also helped prepare the ACM Result and Artifact Review and Badging policy.
- 20 March 2016 - Our Dagstuhl report on Artifact Evaluation for Publications (Bruce R. Childers, Grigori Fursin, Shriram Krishnamurthi and Andreas Zeller) is now available online.
- March 2016 - We received preliminary approval from ACM to let authors add up to 2 pages of their AE appendix to the camera-ready paper.
- 14 March 2016 (Monday, 18:00-18:30) - We arranged a public AE discussion (results, distinguished artifact award, issues, future work).
- 14-16 March 2016 - We held several demo sessions at CGO/PPoPP'16 showing how the open-source Collective Knowledge technology can help solve various issues encountered during Artifact Evaluation. CK is a small, portable and customizable research platform for sharing artifacts as customizable and reusable components with a JSON API; quickly prototyping experimental workflows (such as multi-objective autotuning) from shared components; crowdsourcing and reproducing experiments; applying predictive analytics; and enabling interactive and reproducible articles. You can see some of the latest crowd-results here.
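The "JSON API" idea mentioned above can be sketched as follows: every action on a component takes a JSON dictionary in and returns a JSON dictionary out, with a return code signalling success or failure. This is only a conceptual illustration with invented names, not the real Collective Knowledge interface.

```python
import json

# Conceptual sketch of a JSON-in/JSON-out component API. All names here
# ("run_action", "data_uid", "actions") are invented for illustration and
# do not correspond to the actual CK implementation.

def run_action(component_meta, action, params):
    """Dispatch an action on a component; input and output are JSON dicts."""
    if action not in component_meta.get("actions", []):
        return {"return": 1, "error": "unknown action: " + action}
    # A real component would execute its workflow here; this stub echoes
    # the request to show the uniform calling convention.
    return {"return": 0, "action": action, "params": params}

# Hypothetical meta description of a shared benchmark component.
meta = json.loads('{"data_uid": "1234abcd", "actions": ["compile", "run"]}')

ok = run_action(meta, "run", {"dataset": "image-0001"})
bad = run_action(meta, "fly", {})
print(ok["return"], bad["return"])  # prints: 0 1
```

A uniform calling convention like this is what lets heterogeneous artifacts (compilers, benchmarks, models) be chained into larger experimental workflows without bespoke adapters.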
- September 2015 - We introduced submission and reviewing templates to start formalizing and simplifying the AE process (see these templates at GitHub and feel free to suggest improvements).
[ Our history to enable reproducible systems research and Artifact Evaluation (2008-cur.) ]
[ Vision paper 1 (2009) ]
[ Vision paper 2 (2014) ]
Motivation
Reproducing experimental results from computer systems papers and building upon them is becoming extremely challenging and time-consuming. Major issues include ever-changing and possibly proprietary software and hardware, a lack of common tools and interfaces, the stochastic behavior of computer systems, a lack of common experimental methodology, and a lack of universally accepted mechanisms for knowledge exchange [ 1, 2 ].
We are organizing Artifact Evaluation to help authors have their techniques and tools validated by independent reviewers - please check the "submission" and "reviewing" guidelines for further details. Papers that successfully pass the Artifact Evaluation process receive a seal of approval printed on the papers themselves (we are discussing with ACM how to unify this stamp and include it directly in the Digital Library).
Authors are also invited (but not obliged) to share their artifacts along with their publications, for example as supplementary material in Digital Libraries.
We hope that this initiative will help make artifacts as important
as papers while gradually solving numerous reproducibility issues in
our research.

We consider Artifact Evaluation a continuous learning process - our eventual goal is to collaboratively develop a common methodology for artifact sharing and reproducible experimentation in computer systems research. Your feedback is essential to make this happen! If you have any questions, comments or suggestions,
do not hesitate to get in touch,
participate in public discussions (LinkedIn,
wiki,
mailing list),
submit patches for the artifact
templates at GitHub, join us at related events,
and check out our supporting technology (OCCAM,
Collective Knowledge, CK-WA).
We would like to thank Prof. Shriram Krishnamurthi
and all our colleagues
for very fruitful discussions and feedback!