Recommendations for discipline-specific FAIRness evaluation derived from applying an ensemble of evaluation tools
From the perspective of research data repositories, offering research data management services in line with the FAIR principles is becoming increasingly important. However, no globally established and trusted approach to evaluating FAIRness exists to date. Here, we apply five different available FAIRness evaluation approaches to selected data archived in the World Data Center for Climate (WDCC). Two approaches are purely automatic, two are purely manual, and one applies a hybrid method (manual and automatic combined).
Our evaluation yields an overall mean FAIR score for WDCC-archived (meta)data of 0.67 out of 1.0, with a range of 0.5 to 0.88. Manual approaches yield higher scores than automated ones, and the hybrid approach yields the highest score. Computed statistics indicate that the tested approaches agree well overall at the data-collection level.
We find that while none of the five evaluation approaches is fully fit for purpose to evaluate (discipline-specific) FAIRness, each has its individual strengths. Specifically, manual approaches capture contextual aspects of FAIRness relevant for reuse, whereas automated approaches focus on the strictly standardized aspects of machine actionability. Correspondingly, the hybrid method combines the advantages, and eliminates the deficiencies, of manual and automatic evaluation approaches.
Based on our results, we recommend that future FAIRness evaluation tools be built on a mature hybrid approach. In particular, the design and adoption of discipline-specific aspects of FAIRness will require concerted community efforts.