Presentation · Open Access

Collective Knowledge (CK): an open-source framework to automate, reproduce, and crowdsource HPC experiments

Grigori Fursin

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="" xmlns="" xsi:schemaLocation="">
  <identifier identifierType="DOI">10.5281/zenodo.2556147</identifier>
  <creators>
    <creator>
      <creatorName>Grigori Fursin</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="">0000-0001-7719-1624</nameIdentifier>
      <affiliation>cTuning foundation, dividiti</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Collective Knowledge (CK): an open-source framework to automate, reproduce, and crowdsource HPC experiments</title>
  </titles>
  <subjects>
    <subject>experiment automation</subject>
    <subject>collaborative research</subject>
    <subject>reproducible research</subject>
    <subject>open science</subject>
    <subject>Collective Knowledge</subject>
    <subject>crowdsource experiments</subject>
    <subject>research API</subject>
    <subject>adaptive workflows</subject>
    <subject>portable workflows</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2019-02-03</date>
  </dates>
  <resourceType resourceTypeGeneral="Text">Presentation</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.2556146</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf"></relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Validating experimental results from articles has finally become the norm at many HPC and systems conferences. Nowadays, more than half of accepted papers pass artifact evaluation and share related code and data. Unfortunately, the lack of a common experimental framework, a common research methodology, and common formats places an increasing burden on evaluators, who must validate a growing number of ad-hoc artifacts. Furthermore, having too many ad-hoc artifacts and Docker snapshots is almost as bad as having none (!), since they cannot be easily reused, customized, or built upon.&lt;/p&gt;

&lt;p&gt;While reviewing more than 100 papers during artifact evaluation at HPC conferences, we noticed that many of them use similar experimental setups, benchmarks, models, data sets, environments, and platforms. This motivated us to develop Collective Knowledge (CK), an open workflow framework with a unified Python API to automate common researchers&amp;rsquo; tasks: detecting software and hardware dependencies, installing missing packages, downloading data sets and models, compiling and running programs, performing autotuning and co-design, crowdsourcing time-consuming experiments across computing resources provided by volunteers (similar to SETI@home), reproducing results, automatically generating interactive articles, and so on.&lt;/p&gt;

&lt;p&gt;In this talk I will introduce CK concepts and present several real-world use cases from the Raspberry Pi Foundation, ACM, General Motors, Amazon, and Arm on collaborative benchmarking, autotuning, and co-design of efficient software/hardware stacks for emerging workloads, including deep learning. I will also present our latest initiative to create an open repository of reusable research components and workflows at HPC conferences. We plan to use it to automate the Student Cluster Competition Reproducibility Challenge at the Supercomputing conference.&lt;/p&gt;</description>
    <description descriptionType="Other">Presentation at FOSDEM'19 (HPC, Big Data and Data Science)</description>
  </descriptions>
</resource>
Statistics          All versions    This version
Views               789             790
Downloads           258             258
Data volume         926.8 MB        926.8 MB
Unique views        758             759
Unique downloads    240             240

