Enhancing the application of large language models with retrieval-augmented generation for a research community
Description
In the rapidly evolving landscape of artificial intelligence (AI) and machine learning (ML), the demand for efficient and innovative tools in research environments continues to grow. This paper explores the implementation of retrieval-augmented generation (RAG) to enhance the contextual accuracy and applicability of large language models (LLMs) for the diverse needs of researchers. By integrating RAG, we address tasks such as synthesizing extensive questionnaire data, efficiently searching document collections, and extracting detailed information from multiple sources. Our implementation leverages open-source libraries, a centralized repository of pre-trained models, and high-performance computing resources to provide researchers with robust, private, and scalable solutions.
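To illustrate the general retrieve-then-generate pattern described above, the following is a minimal sketch of a RAG pipeline. It is an illustrative assumption, not the implementation in the US-RSE24-RAG repository: the TF-IDF retriever, the toy documents, and the prompt format are placeholders, and the final generation step (a call to a locally hosted, pre-trained LLM) is stubbed out.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# NOTE: illustrative only -- retriever, documents, and prompt format are
# assumptions, not the actual code in the US-RSE24-RAG repository.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A toy "document collection" standing in for questionnaire responses or papers.
documents = [
    "Survey respondents asked for faster access to GPU nodes.",
    "Several groups requested training on containerized workflows.",
    "Researchers want a private chat interface over internal documents.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by TF-IDF cosine similarity and return the top-k."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(docs + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = scores.argsort()[::-1][:k]
    return [docs[i] for i in ranked]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user question with retrieved context before generation."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context_block}\n"
        f"Question: {query}\n"
    )

if __name__ == "__main__":
    question = "What computing resources did researchers ask for?"
    prompt = build_prompt(question, retrieve(question, documents))
    # In a real deployment the augmented prompt would be sent to a locally
    # hosted pre-trained LLM (keeping data private); here we just print it.
    print(prompt)
```

In practice, the keyword-based retriever would typically be replaced by dense embeddings and a vector index, and the prompt would be passed to an LLM served on high-performance computing resources so that sensitive research data never leaves the institution.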
Files
- USRSE24_LLMs.pdf (215.6 kB, md5:298c293dd8e3c14c3752639eec9d4199)
Additional details
Software
- Repository URL: https://github.com/jgarciamesa/US-RSE24-RAG
- Programming language: Python