Published December 5, 2016 | Version v1
Conference paper · Open Access

I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets

  • 1. The University of Chicago and Argonne National Laboratory, Chicago IL, USA
  • 2. University of Southern California, Los Angeles, CA, USA
  • 3. Institute for Systems Biology, Seattle, WA, USA
  • 4. The University of Manchester, Manchester, UK
  • 5. The University of Michigan, Ann Arbor, MI, USA

Contributors

  • 1. The University of Chicago

Description

Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a concise and unambiguous description of the dataset's contents. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified.

We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing.

We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
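As a rough illustration of the approach (not the authors' implementation), the sketch below uses the standard bagit Python library to assemble a "holey" bag in the BDBag style: small local files are bagged with checksummed manifests, while large remote members are listed in fetch.txt rather than copied, so the bag stays compact until a consumer materializes it. The directory name, URLs, sizes, and metadata values are hypothetical placeholders; a Minid would subsequently be minted for the bag's checksum via the Minid service.

import os
import bagit

# Hypothetical paths and URLs for illustration only.
bag_dir = "example_dataset"
os.makedirs(bag_dir, exist_ok=True)

# Turn the directory into a BagIt bag with checksummed payload manifests
# and simple descriptive metadata recorded in bag-info.txt.
bag = bagit.make_bag(
    bag_dir,
    bag_info={
        "Contact-Name": "Example Researcher",
        "External-Description": "Images and sequences for an analysis pipeline",
    },
)

# Reference large remote members instead of copying them: each fetch.txt
# line is "<url> <length-in-bytes> <path-under-data/>" per the BagIt
# specification. (Full compliance also requires manifest entries for these
# remote files; the dedicated BDBag tooling automates that step.)
remote_members = [
    ("https://example.org/repo/image_0001.tiff", 10485760, "data/image_0001.tiff"),
]
with open(os.path.join(bag_dir, "fetch.txt"), "w") as fetch:
    for url, length, path in remote_members:
        fetch.write("{}\t{}\t{}\n".format(url, length, path))

# Regenerate manifests and tag manifests so fetch.txt is covered by the
# bag's checksums.
bag.save(manifests=True)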

Files

bagminid.pdf (713.2 kB)
md5:91195ab648922564b86d629e83ea88d8

Additional details

Funding

BioExcel – Centre of Excellence for Biomolecular Research (grant 675728), European Commission