Presentation Open Access

Enabling reproducible ML and Systems research: the good, the bad, and the ugly

Grigori Fursin

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Grigori Fursin</dc:creator>
  <dc:description>Invited talk at FastPath 2020 (International Workshop on Performance Analysis of Machine Learning Systems) co-located with ISPASS 2020.

	Article:  ( code and data )
	Reproducibility initiative: systems and ML conferences ( reproduced papers and results )
	Workshop program:
	Author: Grigori Fursin


10 years ago we released our ML-based MILEPOST compiler with all related code and experimental data. Unfortunately, this research quickly stalled after we struggled to reproduce the performance results and predictive models shared by volunteers across rapidly changing systems.

In this talk, I will describe my 10-year effort to solve numerous reproducibility issues in ML&amp;systems research. I will share my experience reproducing 150+ systems and ML papers during artifact evaluation at ASPLOS, MLSys, CGO, PPoPP and Supercomputing. This tedious experience motivated me to develop the Collective Knowledge framework and the open portal to bring DevOps principles to our research. I will also present cKnowledge solutions: a new way to package and share research artifacts and results with common Python APIs, CLI actions, portable workflows and JSON meta descriptions. Such solutions can be used to automatically build, benchmark and validate ML&amp;systems experiments across continuously evolving platforms.
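As a rough illustration of the "JSON meta descriptions" mentioned above, a shared artifact in a Collective Knowledge style component might be described roughly as follows. The field names and schema below are simplified assumptions for illustration, not the framework's actual format:

```json
{
  "tags": ["mlperf", "image-classification", "reproducibility"],
  "deps": {
    "compiler": { "name": "LLVM", "version": ">=10.0" },
    "dataset":  { "name": "ImageNet-val", "files": 50000 }
  },
  "run_cmd": "python classify.py --model resnet50",
  "validation": {
    "metric": "top1-accuracy",
    "expected": 0.76,
    "tolerance": 0.01
  }
}
```

Given such a description, a common CLI or Python API can resolve the declared dependencies on the current platform, run the command, and compare the measured metric against the expected value within tolerance, which is what makes automatic rebuilding and validation across evolving systems possible.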

I will conclude with several practical use cases of our technology in collaboration with Arm, IBM, General Motors, the Raspberry Pi Foundation and MLPerf. Our long-term goal is to help researchers share their new ML techniques as production-ready packages along with published papers and participate in collaborative and reproducible benchmarking, co-design and comparison of efficient ML/software/hardware stacks.
</dc:description>

  <dc:subject>machine learning</dc:subject>
  <dc:subject>artifact evaluation</dc:subject>
  <dc:subject>FAIR principles</dc:subject>
  <dc:subject>open science</dc:subject>
  <dc:title>Enabling reproducible ML and Systems research: the good, the bad, and the ugly</dc:title>
</oai_dc:dc>
                  All versions   This version
Views                    4,890          4,853
Downloads                3,450          3,430
Data volume            14.8 GB        14.8 GB
Unique views             4,186          4,161
Unique downloads         3,166          3,153
