COMPARING MULTILINGUAL LANGUAGE MODELS ON INDIC NEWS HEADLINE CLASSIFICATION
Authors/Creators
Description
This work addresses news headline classification, a widely studied task in Natural Language Processing (NLP) for which comparatively little work exists on multilingual text classification specific to Indic languages; existing Indic language models focus primarily on the most widely spoken Indian languages. Model performance is measured on the Indic News Headline dataset (iNLTK), which also serves as a genre classification dataset with ten headline genres/categories, covering Gujarati, Malayalam, Marathi, Tamil, and Telugu, all written in non-Latin scripts. Indic text can be challenging because the data may mix English (Latin script) into the non-Latin script. The recently released Sarvam-1 (an LLM launched by Sarvam AI) is compared against traditional BERT-like approaches: DistilBERT, XLM-RoBERTa, and IndicBERT. Sarvam-1 is fine-tuned using the PEFT LoRA approach, and its performance is compared, using weighted precision, recall, and F1 scores, against fine-tuned versions of the three BERT-like models. Sarvam-1 stands out with a weighted F1 score of 0.87 on the test set, while the fine-tuned XLM-RoBERTa, DistilBERT, and IndicBERT still perform reasonably, with weighted F1 scores of 0.84, 0.82, and 0.79, respectively.
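For concreteness, below is a minimal sketch of the LoRA fine-tuning step described above, assuming the Hugging Face transformers and peft libraries. The model id, LoRA rank, and other hyperparameters are illustrative assumptions, not values reported in the paper.

```python
# A minimal sketch (not the paper's exact setup) of LoRA fine-tuning for
# ten-way headline genre classification with Hugging Face transformers + peft.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

MODEL_NAME = "sarvamai/sarvam-1"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:           # decoder-only tokenizers often lack one
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=10,  # ten headline genres in the iNLTK dataset
)
model.config.pad_token_id = tokenizer.pad_token_id

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # keeps the classification head trainable
    r=16,               # assumed LoRA rank; the paper's value may differ
    lora_alpha=32,      # assumed scaling factor
    lora_dropout=0.05,  # assumed dropout on the adapter layers
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter subset is trained
```

Training would then proceed with a standard Trainer loop over the tokenized headlines; because LoRA freezes the base weights and trains only low-rank adapters, fine-tuning a full LLM becomes feasible on modest hardware.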
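The weighted precision/recall/F1 comparison can be reproduced with scikit-learn; the label arrays below are illustrative placeholders, not the paper's predictions.

```python
# Minimal sketch of the evaluation step: weighted precision, recall, and F1
# over the ten-class test set, using scikit-learn.
from sklearn.metrics import precision_recall_fscore_support

# Placeholder gold and predicted genre labels for the test split.
y_true = [0, 3, 3, 7, 1]
y_pred = [0, 3, 2, 7, 1]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(f"weighted P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
```

Weighted averaging scales each class's score by its support, which is the sensible choice here since the ten genres are unlikely to be perfectly balanced across languages.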
Files
| Name | Size |
|---|---|
| IJAIRD_02_02_011.pdf (md5:8be33532a4bca528e5861db419eb27b4) | 4.1 MB |