Presentation · Open Access

Collective Knowledge (CK): an open-source framework to automate, reproduce, and crowdsource HPC experiments

Grigori Fursin


JSON-LD (schema.org) Export

{
  "inLanguage": {
    "alternateName": "eng", 
    "@type": "Language", 
    "name": "English"
  }, 
  "description": "<p>Validating experimental results from articles has finally become a norm at many HPC and systems conferences. Nowadays, more than half of accepted papers pass artifact evaluation and share related code and data. Unfortunately, lack of a common experimental framework, common research methodology and common formats places an increasing burden on evaluators to validate a growing number of ad-hoc artifacts. Furthermore, having too many ad-hoc artifacts and Docker snapshots is almost as bad as not having any (!), since they cannot be easily reused, customized and built upon.</p>\n\n<p>While overviewing more than 100 papers during artifact evaluation at HPC conferences, we noticed that many of them use similar experimental setups, benchmarks, models, data sets, environments and platforms. This motivated us to develop Collective Knowledge (CK), an open workflow framework with a unified Python API to automate common researchers&rsquo; tasks such as detecting software and hardware dependencies, installing missing packages, downloading data sets and models, compiling and running programs, performing autotuning and co-design, crowdsourcing time-consuming experiments across computing resources provided by volunteers similar to SETI@home, reproducing results, automatically generating interactive articles, and so on: http://cKnowledge.org .</p>\n\n<p>In this talk I will introduce CK concepts and present several real world use cases from the Raspberry Pi foundation, ACM, General Motors, Amazon and Arm on collaborative benchmarking, autotuning and co-design of efficient software/hardware stacks for emerging workloads including deep learning. I will also present our latest initiative to create an open repository of reusable research components and workflows at HPC conferences. We plan to use it to automate the Student Cluster Competition Reproducibility Challenge at the Supercomputing conference.</p>", 
  "license": "http://creativecommons.org/licenses/by/4.0/legalcode", 
  "creator": [
    {
      "affiliation": "cTuning foundation, dividiti", 
      "@id": "https://orcid.org/0000-0001-7719-1624", 
      "@type": "Person", 
      "name": "Grigori Fursin"
    }
  ], 
  "url": "https://zenodo.org/record/2556147", 
  "datePublished": "2019-02-03", 
  "keywords": [
    "experiment automation", 
    "collaborative research", 
    "reproducible research", 
    "open science", 
    "Collective Knowledge", 
    "crowdsource experiments", 
    "research API", 
    "adaptive workflows", 
    "portable workflows"
  ], 
  "@context": "https://schema.org/", 
  "identifier": "https://doi.org/10.5281/zenodo.2556147", 
  "@id": "https://doi.org/10.5281/zenodo.2556147", 
  "@type": "PresentationDigitalDocument", 
  "name": "Collective Knowledge (CK): an open-source framework to automate, reproduce, and crowdsource HPC experiments"
}
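
Since the export above is plain schema.org JSON-LD, it can be consumed with any JSON parser. A minimal Python sketch (the file name record.json is hypothetical; save the export there first):

import json

# Load the JSON-LD export shown above (file name is hypothetical).
with open('record.json') as f:
    record = json.load(f)

print(record['name'])                 # presentation title
print(record['identifier'])           # DOI
print(record['creator'][0]['name'])   # author
print(', '.join(record['keywords']))  # keyword list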
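The abstract describes CK's unified Python API, where every automated task goes through a single entry point. A minimal sketch of what such calls look like, based on the public CK documentation; the benchmark name cbench-automotive-susan comes from the ctuning-programs repository, and the details should be read as illustrative rather than as the exact workflow from the talk:

import ck.kernel as ck

# Every CK action goes through the single ck.access() entry point,
# which takes and returns a plain (JSON-serializable) dictionary.
# Assumes CK is installed (pip install ck) and the benchmark repository
# has been pulled first (ck pull repo:ctuning-programs).
r = ck.access({'action': 'compile',
               'module_uoa': 'program',
               'data_uoa': 'cbench-automotive-susan',
               'speed': 'yes'})   # analogous to the documented --speed CLI flag
if r['return'] > 0:
    ck.err(r)  # print r['error'] and exit on failure

r = ck.access({'action': 'run',
               'module_uoa': 'program',
               'data_uoa': 'cbench-automotive-susan'})
if r['return'] > 0:
    ck.err(r)
print('Run completed; returned keys:', sorted(r.keys()))

The same dictionary-in, dictionary-out convention is what lets CK chain these actions into portable workflows and expose them identically from the command line and from Python.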
Statistics

                   All versions   This version
Views                       656            657
Downloads                   217            217
Data volume            779.5 MB       779.5 MB
Unique views                635            636
Unique downloads            201            201
