Published December 20, 2022 | Version v1
Project deliverable | Open

FAIR Assessment Tools: Towards an "Apples to Apples" Comparison

  • 1. Co-Chair, EOSC Task Force on FAIR Metrics and Data Quality; Departamento de Biotecnología-Biología Vegetal, Escuela Técnica Superior de Ingeniería Agronómica, Alimentaria y de Biosistemas, Centro de Biotecnología y Genómica de Plantas. Universidad Politécnica de Madrid (UPM) - Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria-CSIC (INIA-CSIC).
  • 2. Member, EOSC Task Force on FAIR Metrics and Data Quality; Oxford e-Research Centre, Department of Engineering Science, University of Oxford
  • 3. Member, EOSC Task Force on FAIR Metrics and Data Quality; Data Archiving and Networked Services (KNAW-DANS)
  • 4. Member, EOSC Task Force on FAIR Metrics and Data Quality; CSC - IT Center for Science
  • 5. Member, EOSC Task Force on FAIR Metrics and Data Quality; Novo Nordisk Foundation Center for Stem Cell Medicine - reNEW, University of Copenhagen
  • 6. Member, EOSC Task Force on FAIR Metrics and Data Quality; German Aerospace Center, Research Data Management
  • 1. University of Manchester
  • 2. Oxford e-Research Centre
  • 3. Department of Pharmacological Sciences, Icahn School of Medicine at Mount Sinai
  • 4. GO FAIR Foundation

Description

As part of the EOSC Task Force on FAIR Metrics and Data Quality, the FAIR Metrics subgroup works
to examine the uptake and application of metrics of FAIRness and the use and utility of FAIRness
evaluations. A range of FAIR assessment tools has been designed to measure the compliance of one or more Digital Objects (DOs, including datasets and repositories) with established FAIR Metrics. Unfortunately, assessments of the same DO by different tools often yield widely different results, because each tool independently interprets the Metrics, the metadata publishing paradigms, and even the intent of FAIR itself.
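For illustration only (this example does not appear in the report), the Python sketch below shows two metadata-harvesting routes that assessment tools commonly implement differently: parsing JSON-LD embedded in an HTML landing page versus requesting machine-readable metadata through HTTP content negotiation. The landing-page URL and function names are hypothetical; a publisher that supports only one route will appear metadata-poor to a tool that implements only the other.

import json
import re

import requests

LANDING_PAGE = "https://example.org/dataset/123"  # hypothetical Digital Object


def harvest_embedded_jsonld(url: str) -> list[dict]:
    """Route A: parse JSON-LD blocks embedded in the HTML landing page."""
    html = requests.get(url, timeout=10).text
    blocks = re.findall(
        r'<script[^>]+type="application/ld\+json"[^>]*>(.*?)</script>',
        html,
        flags=re.DOTALL | re.IGNORECASE,
    )
    return [json.loads(block) for block in blocks]


def harvest_content_negotiation(url: str) -> dict | None:
    """Route B: request machine-readable metadata via HTTP content negotiation."""
    resp = requests.get(url, headers={"Accept": "application/ld+json"}, timeout=10)
    if "json" in resp.headers.get("Content-Type", ""):
        return resp.json()
    return None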
In response to this status quo, the FAIR Metrics subgroup (represented by the authors of this report)
brought together developers of several FAIR evaluation tools and enabling services (listed in the
Acknowledgements) for a series of hands-on hackathon events to identify a common approach to
metadata provision that could be implemented by all data publishers, such as databases,
repositories, and data catalogue managers. This led to identifying a process for (meta)data
publishing based on well-established Web standards already in everyday use within data publishing
communities, albeit not uniformly. A specification document describing the approach, together with a series of “Apples-to-Apples” (A2A) benchmarks to evaluate compliance with this (meta)data publishing approach, was created during these hackathon events. The authors of FAIR evaluation tools
also began writing the code to ensure their independent tools would behave identically when
encountering these A2A benchmark environments, thus helping to ensure that data publishers
following this paradigm will be evaluated in a harmonized manner by all assessment tools;
additional considerations for assessment tool harmonization are discussed later. This report
explains the rationale for these workshop and hackathon events, outlines the outcomes and work
done, describes the current status, and then discusses desirable next steps. The authors propose that the EOSC Association, the EOSC Task Force on Long-Term Data Preservation, and other groups and projects in Europe and worldwide consider this approach as a way to deliver to all stakeholders accurate means that assist in achieving the FAIRness of research data.
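The specification itself is not reproduced in this record, so the following sketch is only an assumption-laden illustration of what such a harmonized check could look like. It assumes the agreed publishing pattern advertises machine-readable metadata through typed links in the HTTP Link header (RFC 8288, e.g. rel="describedby"); the URL and function names are hypothetical.

import requests

LANDING_PAGE = "https://example.org/dataset/123"  # hypothetical Digital Object


def metadata_links(url: str) -> list[str]:
    """Collect 'describedby' targets advertised in the HTTP Link header."""
    resp = requests.head(url, timeout=10, allow_redirects=True)
    # requests exposes the parsed Link header (RFC 8288) as resp.links,
    # a dict keyed by the link's rel value.
    return [v["url"] for rel, v in resp.links.items() if "describedby" in rel]


def passes_harmonized_check(url: str) -> bool:
    """Pass if at least one advertised metadata link resolves to JSON metadata."""
    for link in metadata_links(url):
        meta = requests.get(link, headers={"Accept": "application/ld+json"}, timeout=10)
        if meta.ok and "json" in meta.headers.get("Content-Type", ""):
            return True
    return False

Because every tool would resolve the same advertised links in the same way, a publisher following the pattern receives the same verdict from each assessment tool, which is the harmonization the A2A benchmarks test for.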

Files (1.4 MB)

Report on the FAIR Evaluation events_final_sub.pdf