Presentation Open Access

Enabling reproducible ML and Systems research: the good, the bad, and the ugly

Grigori Fursin


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Grigori Fursin</dc:creator>
  <dc:date>2020-08-28</dc:date>
  <dc:description>Invited talk at FastPath 2020 (International Workshop on Performance Analysis of Machine Learning Systems) co-located with ISPASS 2020.


	Article: arxiv.org/pdf/2011.01149.pdf (code and data)
	Reproducibility initiative: systems and ML conferences (reproduced papers and results)
	Workshop program: fastpath2020.github.io/Program
	Author: Grigori Fursin


Abstract:

10 years ago we released our ML-based MILEPOST compiler with all related code and experimental data at cTuning.org. Unfortunately, this research quickly stalled after we struggled to reproduce performance results and predictive models shared by volunteers across rapidly changing systems.

In this talk, I will describe my 10-year effort to solve numerous reproducibility issues in ML&amp;systems research. I will share my experience reproducing 150+ systems and ML papers during artifact evaluation at ASPLOS, MLSys, CGO, PPoPP and Supercomputing. This tedious experience motivated me to develop the Collective Knowledge framework and the open cKnowledge.io portal to bring DevOps principles to our research. I will also present cKnowledge solutions - a new way to package and share research artifacts and results with common Python APIs, CLI actions, portable workflows and JSON meta descriptions. Such solutions can be used to automatically build, benchmark and validate ML&amp;system experiments across continuously evolving platforms.

I will conclude with several practical use cases of our technology developed in collaboration with Arm, IBM, General Motors, the Raspberry Pi Foundation and MLPerf. Our long-term goal is to help researchers share their new ML techniques as production-ready packages along with published papers and participate in collaborative and reproducible benchmarking, co-design and comparison of efficient ML/software/hardware stacks.

 </dc:description>
  <dc:identifier>https://zenodo.org/record/4005773</dc:identifier>
  <dc:identifier>10.5281/zenodo.4005773</dc:identifier>
  <dc:identifier>oai:zenodo.org:4005773</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:relation>doi:10.5281/zenodo.4005587</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/ck</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:subject>reproducibility</dc:subject>
  <dc:subject>reusability</dc:subject>
  <dc:subject>portability</dc:subject>
  <dc:subject>workflow</dc:subject>
  <dc:subject>automation</dc:subject>
  <dc:subject>machine learning</dc:subject>
  <dc:subject>systems</dc:subject>
  <dc:subject>benchmarking</dc:subject>
  <dc:subject>artifact evaluation</dc:subject>
  <dc:subject>fair principles</dc:subject>
  <dc:subject>open science</dc:subject>
  <dc:title>Enabling reproducible ML and Systems research: the good, the bad, and the ugly</dc:title>
  <dc:type>info:eu-repo/semantics/lecture</dc:type>
  <dc:type>presentation</dc:type>
</oai_dc:dc>