Published October 29, 2018 | Version v1
Presentation | Open Access

Reproducible big data science: A case study in continuous FAIRness

Description

Big biomedical data create exciting opportunities for discovery but make it difficult to capture analyses and outputs in forms that are findable, accessible, interoperable, and reusable (FAIR). In response, we describe tools that make it easy to capture, and assign identifiers to, data and code throughout the data lifecycle. We illustrate the use of these tools via a case study involving a multi-step analysis that creates an atlas of putative transcription factor binding sites from terabytes of ENCODE DNase I hypersensitive sites sequencing data. We show how the tools automate routine but complex tasks, capture analysis algorithms in understandable and reusable forms, and harness fast networks and powerful cloud computers to process data rapidly, all without sacrificing usability or reproducibility, thus ensuring that big data are not hard-to-(re)use data.

In this talk, we describe the enhancements made to the Galaxy framework to support working with datasets referenced by minids (minimal viable identifiers), analyzing BagIt-based research objects called BDBags, and executing software encapsulated in Docker containers with unique identifiers. We also describe the tools and services developed to create end-to-end reproducible analysis pipelines that adhere to FAIR principles.
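To make the BDBag workflow concrete, the following is a minimal sketch of how a directory of data files might be packaged, validated, and archived as a BDBag using the bdbag Python package. The directory name and metadata values are illustrative assumptions, not taken from the talk:

    from bdbag import bdbag_api

    # Turn an ordinary directory of data files into a BagIt bag with
    # checksummed payload manifests ("my_analysis_inputs" is a
    # hypothetical directory name).
    bag = bdbag_api.make_bag(
        "my_analysis_inputs",
        algs=["md5", "sha256"],                 # checksum algorithms for the manifests
        metadata={"Contact-Name": "Jane Doe"},  # illustrative bag-info.txt metadata
    )

    # Check that the bag is complete and its checksums match the payload.
    bdbag_api.validate_bag("my_analysis_inputs", fast=False)

    # Serialize the bag to a single ZIP archive, a form suitable for
    # assigning a persistent identifier such as a minid and for sharing.
    archive_file = bdbag_api.archive_bag("my_analysis_inputs", "zip")

The archived, checksummed bag can then be registered with an identifier service to obtain a minid that resolves back to the data.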

Notes

Accepted talk at RO2018

Files

S02E01-Kyle Chard-ContinuousFAIRness.pdf (1.6 MB)
md5:3df04deb869459976329d9773eb57bbf
