Published July 1, 2024 | Version v1
Presentation | Open access

Fostering co-responsibility for open metadata quality to evaluate and monitor Open Science

  • IE University
  • Wageningen University & Research

Description

Open Science (OS) has undergone a major evolution in recent years, overlapping in time with the development of the responsible use of metrics (RM). Open Science has established itself as an essential paradigm for the advancement of knowledge, promoting transparency, collaboration, and accessibility in research, while RM fosters principles of openness, transparency, fairness, diversity, and equality. In brief, both seek greater integrity in research, grounded in transparent and rigorous data. As the Open Science movement continues to reshape scholarly communication as well as bibliometrics, the accurate measurement and monitoring of metadata quality becomes pivotal.

Following the COARA recommendations, the evaluation and monitoring of OS depends heavily on the ability to measure the impact and relevance of research across a wide range of outputs and activities. Similarly, the foundation of any responsible metrics system lies in the quality of the metadata associated with the underlying datasets and, more importantly, in comprehensive coverage that is not limited to specific journals biased by language, discipline, or novelty. In this context, the breadth of data collection and the quality of its metadata emerge as critical components for responsible metrics that drive effective evaluation and monitoring of OS.

This presentation explores how data harvesting and metadata curation contribute to building a more robust and ethical scientific environment, and dissects the relationship between metadata quality, transparency, and the efficacy of open metrics in shaping research evaluation and monitoring practices (two distinct activities) within Open Science.

As more organizations look to switch to open metadata sources for research intelligence, the aim is to explore how the different actors involved can work together to achieve the growth in data quality that these sources need. Focusing on the nuanced aspects of metadata inconsistencies (completeness and correctness of data), the presentation scrutinizes the impact of these gaps on the reliability of open metrics and their overall effectiveness in reshaping research evaluation. It highlights key challenges and proposes strategies for improving metadata quality, promoting transparency, and enhancing the overall robustness of open metadata. By sharing best practices and lessons learned, the session aims to inspire actionable steps that stakeholders can integrate into their research evaluation workflows.
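As an illustration of the kind of completeness check discussed above, the minimal sketch below (not part of the presentation itself) queries the OpenAlex API for a single work and flags commonly missing metadata fields. The example DOI and the particular fields chosen are illustrative assumptions, not a checklist endorsed by the authors.

```python
import requests

# Minimal sketch: audit the completeness of one open metadata record via the
# OpenAlex API. The DOI and the selected fields are illustrative assumptions.

def completeness_report(doi: str) -> dict:
    work = requests.get(f"https://api.openalex.org/works/doi:{doi}", timeout=30).json()
    authorships = work.get("authorships", [])
    checks = {
        "has_title": bool(work.get("title")),
        "has_abstract": work.get("abstract_inverted_index") is not None,
        "has_references": len(work.get("referenced_works", [])) > 0,
        "has_license": (work.get("primary_location") or {}).get("license") is not None,
        "all_authors_have_orcid": bool(authorships)
        and all((a.get("author") or {}).get("orcid") for a in authorships),
        "all_authors_have_affiliation": bool(authorships)
        and all(a.get("institutions") for a in authorships),
    }
    # Simple share of passed checks; a real audit would weight fields by use case.
    checks["completeness_score"] = sum(checks.values()) / len(checks)
    return checks

if __name__ == "__main__":
    # Example DOI chosen arbitrarily for illustration.
    print(completeness_report("10.1038/s41586-020-2649-2"))
```

Run over a sample of an institution's outputs, a report like this makes gaps in open metadata (missing abstracts, ORCIDs, licenses, or references) visible and actionable for the actors involved.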

The session concludes with a call to action, encouraging participants to actively contribute to the ongoing conversation around research metadata quality and transparency. Embracing a collaborative mindset, attendees are invited to join efforts in reshaping the future of open metrics and fostering a more reliable and transparent research evaluation landscape based on open metadata. By addressing these issues with a collective focus on open metadata quality, we can pave the way for a robust, trustworthy, and effective open infrastructure.

Files

Session4_Gomez-Huidiu_LIBER2024.pdf (3.8 MB)
md5:fcfb94549460ddfb9d90140ed53a1678