
Published February 22, 2022 | Version v1

Exploring AI-powered search systems with a Hackathon and expert quality assessment

  • 1. The Royal Library/Århus University Library
  • 2. The Royal Library/Copenhagen University Library


This is the third status report from our project investigating the potential of AI-powered search in the context of the university library. The overall aim of the project is to investigate the extent to which different artificial intelligence (AI) powered search software supports researchers and students in academic literature search. Our findings will inform future services at the university libraries affiliated with the Royal Library in Denmark.


AI-powered search technologies can support users in a multitude of ways: by saving time, by making searches more efficient and effective, by opening new avenues of discovery, and by providing transparency and documentation. Our working hypothesis in the project is that different types of users (students, senior researchers and library staff) place different values on different types of support. In the first project deliverable we stated how AI-powered search is defined within the confines of the project, and which AI software would be tested and why (Wildgaard, Johnsen, & Kiersgaard, 2020). The chosen software systems were tested throughout the spring and summer of 2021 in Think-Aloud tests with information specialists as test participants. The aim was to investigate the AI systems from a search professional's point of view (Appendix 1).

The results of the Think-Aloud tests showed that information specialists are influenced by their background in information science, where they work in curated databases with block searches, thesauri and document representations. It was immediately clear that the test systems presented a whole new way of working with information retrieval and literature reviews. Our test subjects found it interesting to dive into these systems, but it frustrated them that they could not understand what was going on "behind the scenes" and in the "engine room". Understanding the system architecture is very important for information specialists in relation to teaching, support, and being able to provide high-quality professional search and research support. Both systems confronted the professional competence of our test subjects and challenged search methodology and research integrity, specifically transparency and documentation. These results challenge the use of AI systems in a library context, as new definitions of, and requirements for, an academic search need to be established.
In the Think-Aloud tests the focus was on the functionality of the systems. We were not able to analyze the extent to which the AI systems support the goals of an academic search based on a well-defined research question, nor could we test how these systems create value for users. Therefore, in the autumn of 2021, we invited researchers and information specialists to a Hackathon, where we had the opportunity to learn more about the efficiency and relevance of AI-powered search systems in "real-life" academic search, where transparency, reproducibility, stability, documentation and reliability are key. A Hackathon is a hosted event where participants collaborate to rapidly complete set tasks and solve problems; Hackathons are also used to provide learning and knowledge-exchange opportunities, to create new and strengthen existing social connections, and to investigate new technical opportunities. A Hackathon demands careful planning and explicit goals. Our goal was to use the Hackathon event to challenge participants to be creative in their searches, apply their analytical skills, and at the same time deliver insights into their needs for AI-enhanced literature search support at the library.

The main conclusion is that the tested AI-powered systems force the user to let go of traditional approaches to search, for example block searches, proximity operators and Boolean logic, and they broaden our perception of how an academic search can be conducted. Such a change may amount to a paradigm shift in information-seeking practice, one that demands new terminology, understandings, standards and expectations of the search, and of the competencies of the searchers. Our Hackathon indicates that increased knowledge and skills in system architecture, source criticism and research conduct are essential.



The record is publicly accessible, but files are restricted to users with access.