Published November 16, 2023 | Version v1
Conference paper (Open Access)

Experimenting Task-specific LLMs

  • 1. Radiotelevisione Italiana (Italy)

Abstract

In this work, we present an example of how a relatively small Large Language Model (LLM), fine-tuned to perform a simple and well-defined task (assigning titles to news articles), can perform similarly to, or even better than, huge LLMs built to answer any question. This approach of specializing smaller LLMs on simpler tasks is also interesting because it goes in the direction of making this technology more sustainable and available to a larger number of entities that usually cannot use these expensive models, for both economic and data-policy reasons. We also present a couple of examples of how the performance of LLMs can be evaluated when the task is specified as in the example presented in this work.
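
The abstract outlines the general recipe (fine-tune a small LLM on title assignment, then evaluate it against references); the sketch below illustrates one way this could look in practice. It is not the authors' code: the model choice (t5-small), the dataset file and field names ("article", "title"), the hyperparameters, and the use of ROUGE as the metric are all assumptions for illustration.

# Illustrative sketch, not the paper's implementation: fine-tune a small
# seq2seq model to generate titles for news articles, then score one
# prediction with ROUGE. Dataset and hyperparameters are hypothetical.
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          DataCollatorForSeq2Seq)
from datasets import load_dataset
import evaluate

model_name = "t5-small"  # deliberately small, per the paper's premise
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical JSONL file with "article" and "title" fields per record.
dataset = load_dataset("json", data_files="news_titles.jsonl")["train"]

def preprocess(batch):
    # Encode article bodies as inputs and reference titles as labels.
    inputs = tokenizer(["summarize: " + a for a in batch["article"]],
                       max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["title"],
                       max_length=32, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="title-model",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=3e-4,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# One possible evaluation for a well-specified task like this:
# compare a generated title against the reference title with ROUGE.
rouge = evaluate.load("rouge")
ids = tokenizer("summarize: " + "Some held-out article text ...",
                return_tensors="pt").input_ids
pred = tokenizer.decode(model.generate(ids, max_new_tokens=32)[0],
                        skip_special_tokens=True)
print(rouge.compute(predictions=[pred], references=["Reference title"]))

Because the task is narrow and has references, overlap metrics such as ROUGE are one natural choice, though the paper may use other evaluation schemes.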

Files

paper10.pdf (896.5 kB)
md5:0f47a065d1a77b89797ae951f6dc8c4a

Additional details

Funding

European Commission
AI4Media – A European Excellence Centre for Media, Society and Democracy (grant agreement 951911)