Published April 28, 2026 | Version v1
Poster

FAIR assessment tools as supporters for interdisciplinary research?

Authors/Creators

  • University of Applied Sciences Potsdam

Description

Metadata recommendations only unfold their effect when translated into concrete research practice. Within the context of the FAIR principles (Wilkinson et al. 2016), FAIR assessment tools increasingly assume this translational role by rendering metadata into measurable indicators of FAIRness. They are frequently understood as neutral instruments for assessing data quality (Pellegrino & Tuozzo 2025). However, initial studies from infrastructure and community contexts suggest that FAIR is operationalised differently across existing assessment metrics, leading to inconsistent assessment outcomes (Gehlen et al. 2022; Devaraju & Huber 2021). These inconsistencies mainly express the various communities of practice in which FAIR becomes actionable. The poster conceptualises FAIR assessment tools as analytical translation mechanisms situated between discipline-specific data practices and technical information infrastructure (Wilkinson et al. 2016). The key concern is how these tools can be used across disciplinary boundaries to support heterogeneous data aggregation, bridging diverse (meta)data practices. Based on an understanding of FAIR as a domain-independent information model, this contribution examines how individual assessment tools interpret and operationalise its principles differently.

Methodologically, this study draws on a comparative analysis of selected FAIR assessment tools (including F-UJI, the FAIR Assessment Tool by TKFDM, and ACME-Fair) against systematically developed criteria covering the 15 FAIR principles and 41 indicators of the FAIR Data Maturity Model (RDA FAIR Data Maturity WG 2020, p. 6). To make the tools' implicit requirements explicit, a reference dataset is outlined as a representation of key features, emphasising the aspects under analysis. This heuristic guides dataset selection for the subsequent comparative tool analysis.
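How differing operationalisations can yield inconsistent outcomes for the same dataset can be illustrated with a minimal sketch. The indicator IDs below follow the naming style of the RDA FAIR Data Maturity Model, but the pass/fail results and the per-tool weights are invented for illustration; real tools such as F-UJI implement their own metric sets and scoring.

```python
# Hypothetical illustration: two assessment "tools" evaluate the same
# indicator results but weight them differently, producing divergent scores.
# Indicator IDs mimic the RDA FAIR Data Maturity Model style; results and
# weights are invented for this sketch.

# Pass/fail results for one hypothetical dataset.
indicator_results = {
    "F1-01M": True,    # metadata has a globally unique identifier
    "F2-01M": True,    # rich metadata is provided
    "A1-02M": False,   # metadata not retrievable via a standard protocol
    "I1-01M": True,    # formal knowledge representation language used
    "R1.1-01M": False, # no licence information in the metadata
}

def score(results, weights):
    """Weighted share of passed indicators, in [0, 1]; default weight 1.0."""
    total = sum(weights.get(i, 1.0) for i in results)
    passed = sum(weights.get(i, 1.0) for i, ok in results.items() if ok)
    return passed / total

# "Tool A" treats all indicators as equally important.
weights_a = {}
# "Tool B" prioritises findability and reuse over access/interoperability.
weights_b = {"F1-01M": 3.0, "F2-01M": 2.0, "R1.1-01M": 2.0}

print(round(score(indicator_results, weights_a), 2))  # → 0.6
print(round(score(indicator_results, weights_b), 2))  # → 0.67
```

The same dataset thus scores 0.6 under one weighting and roughly 0.67 under the other, without any change to the underlying metadata: the divergence lies entirely in how the metric operationalises the principles.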

The poster contributes to the discussion by showing how a transparent and reflective use of assessment metrics can help to frame metadata not merely as an unpopular requirement, but as a configurable means of supporting interdisciplinary data aggregation.

Files

FAIR Assessment Tools_Poster_HMC_2026.pdf


Additional details

References

  • Devaraju, A. & Huber, R. (2021). An automated solution for measuring the progress toward FAIR research data. In: Patterns, vol. 2/11. DOI: 10.1016/j.patter.2021.100370
  • Gehlen, K.P., Höck, H., Fast, A., Heydebreck, D., Lammert, A. & Thiemann, H. (2022). Recommendations for Discipline-Specific FAIRness Evaluation Derived from Applying an Ensemble of Evaluation Tools. In: Data Science Journal, vol. 21/7. DOI: 10.5334/dsj-2022-007
  • Pellegrino, M. A. & Tuozzo, G. (2025). How Fair is FAIR? Understanding LOD Cloud FAIRness Through Correlation Patterns. In: CIKM '25: Proceedings of the 34th ACM International Conference on Information and Knowledge Management, 2315-2325. DOI: 10.1145/3746252.3761092
  • RDA FAIR Data Maturity Model Working Group (2020). FAIR Data Maturity Model: specification and guidelines. Research Data Alliance. DOI: 10.15497/rda00045
  • Wilkinson, M.D. et al. (2016). The FAIR Guiding Principles for scientific data management and stewardship. In: Scientific Data 3 (160018). DOI: 10.1038/sdata.2016.18