A Multimodal Learning to Rank model for Web Pages
Creators
- 1. Department of Information Technology, Rajagiri School of Engineering & Technology, Rajagiri Valley, Kakkanad, Kochi, Kerala, India.
- 2. Division of Computer Science, School of Engineering, CUSAT, Kerala, India.
Contributors
- 1. Publisher
Description
“Learning-to-rank” (LTR) applies machine learning to optimally combine many features for the task of ranking, and web search is one of its prominent applications. To improve the ranking of webpages, a multimodality-based learning-to-rank model is proposed and implemented. Multimodality here means fusing multiple unimodal representations into one compact representation. A central problem with web search is that links appearing at the top of the result list may be irrelevant, or less relevant to the user than links appearing at lower ranks. Research has shown that multimodality-based search can improve the populated rank list. The modalities considered here are the text on a webpage and the images on a webpage. The textual features of the webpages are taken from the LETOR dataset, and the image features are extracted from the images inside the webpages via transfer learning: a VGG-16 model pre-trained on ImageNet is used as the image feature extractor. A baseline model trained on textual features alone is compared against the multimodal LTR. The multimodal LTR, which integrates the visual and textual features, shows an improvement of 10-15% in web search accuracy.
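The fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: numpy stand-ins replace the real extractors, the dimensions (46 LETOR-style text features, a 128-d image embedding rather than VGG-16's 4096-d fc activations) are assumptions, and the ranker is a simple linear pointwise model fitted by gradient descent.

```python
import numpy as np

def fuse_features(text_feats: np.ndarray, image_feats: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate per-page textual and visual feature vectors."""
    return np.concatenate([text_feats, image_feats], axis=1)

def train_pointwise_ranker(X: np.ndarray, y: np.ndarray,
                           lr: float = 0.01, epochs: int = 200) -> np.ndarray:
    """Fit a linear pointwise ranker (least-squares relevance regression)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Synthetic stand-ins for the two modalities (illustrative dimensions only).
rng = np.random.default_rng(42)
n_pages = 200
text = rng.normal(size=(n_pages, 46))    # LETOR-style textual features
image = rng.normal(size=(n_pages, 128))  # image embedding (VGG-16 in the paper)

X = fuse_features(text, image)           # one compact multimodal representation
y = X @ rng.normal(size=X.shape[1])      # synthetic relevance scores

w = train_pointwise_ranker(X, y)
scores = X @ w
ranking = np.argsort(-scores)            # pages ordered by descending score
```

In a real pipeline the image vector would come from a forward pass through the pre-trained VGG-16 with its classification head removed, and the ranker would typically be a stronger pairwise or listwise LTR model.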
Files
- F1442089620.pdf (764.8 kB, md5:cc41f49d047fa5cd13931e6519b68bcc)
Additional details
Related works
- Is cited by
- Journal article: 2249-8958 (ISSN)
Subjects
- ISSN: 2249-8958
- Retrieval Number: F1442089620/2020©BEIESP