New Data and Code for Customised Research Evaluation
Description
Introduction
There are continuing calls to adapt or even redesign research evaluation. These calls are based on goals that are part of the transitions towards open science and fairer recognition and rewards. While most calls for changes in research evaluation critique the application of crude metrics, they still see a role for (quantitative and qualitative) evidence to inform and balance narratives used in assessment. This poster matches ongoing changes on the supply side of data (sharing platforms, registries, aggregators, service providers) with specific tenets on the demand side (open science practices, bibliodiversity, inclusivity, team science, formative assessment, academic governance and more). It includes promising examples of open source and collaborative code already available or under development and suggests next steps for the research intelligence community in facilitating this development towards supporting customised research evaluations.
1. Designing research evaluation
There are continuing calls to adapt or even redesign research evaluation. While the Declaration on Research Assessment (ASCB, 2012) and the Agreement on Reforming Research Assessment (CoARA, 2022) are the most prominent, with many organisations and individuals subscribing, there are many more calls, either from specific stakeholder groups, specific disciplines or specific countries and regions, as described in the report from the Global Young Academy together with the International Science Council and the InterAcademy Partnership (De Rijcke et al., 2023).
These calls have a variety of backgrounds. First, there is the drive towards fairer recognition and rewards, which rebalances research with teaching and societal impact, moves away from questionable crude quantitative proxies of quality and supports more collaborative approaches to research practice. Second, there is the movement towards open science, which values reproducible research practices, public engagement initiatives, and early and fully open sharing of a much wider range of research materials and outputs. Further backgrounds are the need to disincentivise questionable research practices and the challenges posed by AI.
2. Traditional indicators
While most calls for changes in research evaluation critique the application of crude metrics, they do not reject indicators altogether and still see a role for (quantitative and qualitative) evidence to inform and balance narratives used in assessment. Some call for more indicators, some for more relevant indicators, some for more transparent indicators and some for other roles for indicators, or any combination of these. What is clear is that the limited suite of traditional indicators (peer-reviewed article output, productivity and impact indexes, and grant income generated) is no longer seen as sufficient, or even relevant, to evaluate research against the goals set by open science and renewed recognition and rewards.
3. Supply of metadata and PIDs fitting open science and recognition and rewards transitions
This poster reports on an exploration of the availability of (meta)data on research output and practices that better fit evaluations that take into account the goals of open science and the reform of recognition and rewards. It builds on an earlier visualization (Bosman, 2024) developed in the context of a forthcoming report on ‘next-generation metrics’ for the League of European Research Universities (LERU).
4. Assessing availability and fitness for purpose of new metadata and PIDs
The poster visually details ongoing changes at the supply side of data (formed by sharing platforms, registries, aggregators, service providers) that have led to improved inclusion of various output types, improved availability of persistent identifiers, openness and license detection, provision of programmable interfaces (APIs) and more. On the demand side it translates the various tenets of the open science and recognition and rewards goals (open science practices, bibliodiversity, inclusivity, team science, formative assessment, academic governance) to indicator requirements. Characteristics of indicator requirements on the demand side are matched with new characteristics and improved qualities on the (meta)data supply side. This leads to insights on what is available to facilitate new types of research evaluation, and also on what is still lacking.
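As a purely illustrative sketch of what this matching looks like in practice, the snippet below queries the OpenAlex API, one of the open aggregators of the kind referred to above, for recent outputs of a single institution and tallies output types, open access status and detected licences. The ROR identifier, date range, contact address and field names are assumptions for illustration; the filter syntax and works schema should be checked against the current OpenAlex documentation.

```python
# Minimal sketch: probing supply-side metadata qualities via the OpenAlex API.
# The ROR identifier, date range and contact address are placeholders; field
# names follow the OpenAlex works schema at the time of writing and may change.
from collections import Counter

import requests

OPENALEX_WORKS = "https://api.openalex.org/works"
ROR_ID = "https://ror.org/04pp8hn57"  # placeholder ROR identifier

params = {
    "filter": f"authorships.institutions.ror:{ROR_ID},from_publication_date:2023-01-01",
    "per-page": 200,
    "mailto": "your.email@example.org",  # polite-pool contact (placeholder)
}

response = requests.get(OPENALEX_WORKS, params=params, timeout=30)
response.raise_for_status()
works = response.json()["results"]

type_counts = Counter(w.get("type", "unknown") for w in works)
oa_share = sum(1 for w in works if (w.get("open_access") or {}).get("is_oa")) / max(len(works), 1)
licence_share = sum(
    1 for w in works if (w.get("best_oa_location") or {}).get("license")
) / max(len(works), 1)

print("Output types:", dict(type_counts))
print(f"Share flagged open access: {oa_share:.0%}")
print(f"Share with a detected licence: {licence_share:.0%}")
```

Full institutional coverage would require paging through the result set (for example via OpenAlex's cursor-based pagination); the point here is only that output types, openness and licence information are now exposed in machine-readable form through open APIs.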
5. Addressing the need for customised evaluations with open source and collaborative code
It is proposed that, in order to meet these new evaluation needs, code is required to match the widening range of available data to the diversity of needs and use cases. Especially for evaluations that are intended to be formative, and for evaluations at levels other than the individual or the institution, such as the team, project, call or programme level, code will be needed to aggregate, select, combine or enrich the available (meta)data.
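What such aggregation code could look like is sketched below for a hypothetical project-level profile: already-harvested metadata records are selected for one project and combined into a few formative signals, such as the diversity of output types and the share of openly licensed outputs. The record structure, field names and indicator choices are assumptions made for illustration and are not taken from the poster.

```python
# Minimal sketch: aggregating harvested (meta)data into project-level signals.
# The record structure and field names are illustrative assumptions; in practice
# they would come from sources such as OpenAlex, Crossref, DataCite or ORCID.
from dataclasses import dataclass


@dataclass
class OutputRecord:
    pid: str              # e.g. a DOI
    output_type: str      # e.g. "article", "dataset", "software"
    is_open: bool         # openly available?
    licence: str | None   # detected licence, if any
    project: str          # project or team identifier


def project_profile(records: list[OutputRecord], project: str) -> dict:
    """Aggregate the selected records into a small, formative project-level profile."""
    selected = [r for r in records if r.project == project]
    if not selected:
        return {"project": project, "n_outputs": 0}
    n = len(selected)
    return {
        "project": project,
        "n_outputs": n,
        "output_types": sorted({r.output_type for r in selected}),
        "share_open": sum(r.is_open for r in selected) / n,
        "share_licensed": sum(r.licence is not None for r in selected) / n,
    }


# Illustrative usage with made-up records.
records = [
    OutputRecord("10.1234/a1", "article", True, "cc-by", "project-x"),
    OutputRecord("10.1234/d1", "dataset", True, "cc0", "project-x"),
    OutputRecord("10.1234/s1", "software", False, None, "project-x"),
]
print(project_profile(records, "project-x"))
```

Because the aggregation logic is kept separate from the harvesting step, the same records could be re-aggregated at call or programme level simply by changing the grouping key.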
As a result of our exploration, the poster includes a list of promising examples of such open source and collaborative code that is already available or under development, and suggests next steps for the research intelligence community in facilitating this development towards supporting customised research evaluations.
6. Customised, code-based evaluations as an opportunity for research intelligence communities
The conclusion of the exploration depicted in the poster is that, while there are many promising developments in (meta)data provision that facilitate more diverse, fair and relevant research evaluations, some of the required data qualities are still lacking or only available to a limited extent. The need for new code that can match (meta)data supply with indicator demands to support customised evaluations is presented as a challenge and an opportunity for the scientometric and research intelligence communities.
Open science practices used and addressed in the poster
The poster is published under a fully open licence (CC BY), for anyone to reuse. The poster itself aims to foster open-source and open-data-based research evaluation, in which collaboration between scientometricians and the research intelligence communities is seen as a strategic facilitator of customised research evaluations that in turn incentivise open science.
CRediT taxonomy for contributor roles
Both authors contributed equally to the conceptualisation, investigation, visualisation and writing of the poster.
Conflicts of interest
Jeroen Bosman is a member of a LERU working group on next-generation metrics. Bianca Kramer is one of the core people behind the Barcelona Declaration on Open Research Information.
References
- American Society for Cell Biology (ASCB) (2012). San Francisco Declaration on Research Assessment (DORA). https://sfdora.org/read/
- Bosman, Jeroen (2024). Changes in demand and supply of metrics for research evaluation in the context of open science and new R&R. Zenodo. https://doi.org/10.5281/zenodo.10569961
- Coalition for Advancing Research Assessment (CoARA) (2022). Agreement on reforming research assessment. https://coara.eu/agreement/the-agreement-full-text/
- De Rijcke, Sarah, et al. (2023). The Future of Research Evaluation: A Synthesis of Current Debates and Developments. https://doi.org/10.24948/2023.06
Files
- Poster Bosman & Kramer STI 2024.pdf
Additional details
Related works
- Is derived from: Presentation, https://doi.org/10.5281/zenodo.10569961