GRAIL: Developing responsible practices for AI and machine learning in research funding and evaluation with a community of learning
- 1. University of Pittsburgh
- 2. The University of Sheffield Information School
Description
Presentation for the 2024 Data for Policy conference.
Associated with discussion paper 10.5281/zenodo.11556314.
New developments in artificial intelligence (AI) and machine learning (ML) technologies are opening new avenues for research funding organisations to learn from the rich data sources and internal expertise they have curated over decades, and to develop new data-driven practices in response to rapid scientific development and changing policy environments. However, there is a lack of shared experience and best practice in using AI and ML in the work of research funding and evaluation, and it is often unclear how developments in AI Safety and Responsible AI discourses translate into practical insights for complex organisations like research funders.
The Research on Research Institute’s GRAIL project is an ongoing effort that draws on a community of learning among research funding organisations to develop specific insights, pathways, and critical questions to guide responsible use of AI and ML in the research funding ecosystem. This extended abstract highlights emerging themes and learning opportunities from the GRAIL workshop series as key directions of travel for developing best practice around the use of AI and ML in research funding and evaluation.
The GRAIL project is funded by the Research on Research Institute.
Files
- 4190_Newman-Griffis.pdf (1.5 MB, md5:7fac72c4d294dc0d5f127f89e851a2c5)
Additional details
Related works
- Is part of
- Conference paper: 10.5281/zenodo.11556314 (DOI)
Dates
- Available: 2024-07