Published April 25, 2024 | Version v1
Journal article | Open Access

Visually Grounded Few-Shot Word Learning in Low-Resource Settings

  • 1. Stellenbosch University
  • 2. POLITEHNICA Bucharest

Description

We propose a visually grounded speech model that learns new words and their visual depictions from just a few word-image example pairs. Given a set of test images and a spoken query, we ask the model which image depicts the query word. Previous work has simplified this few-shot learning problem either by using an artificial setting with digit word-image pairs or by using a large number of examples per class. Moreover, all previous studies were performed on English speech-image data. We propose an approach that works on natural word-image pairs with fewer examples, i.e. fewer shots, and then illustrate how this approach can be applied to multimodal few-shot learning in a real low-resource language, Yorùbá. Our approach uses the given word-image example pairs to mine new unsupervised word-image training pairs from large collections of unlabelled speech and images. Additionally, we use a word-to-image attention mechanism to determine word-image similarity. With this new model, we achieve better performance with fewer shots than previous approaches on an existing English benchmark. Many of the model's mistakes are due to confusion between visual concepts that co-occur in similar contexts. The experiments on Yorùbá show the benefit of transferring knowledge from a multimodal model trained on a larger set of English speech-image data.
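The word-to-image attention idea described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual model: the function names, embedding shapes, and the use of a simple softmax over dot-product scores are all assumptions, standing in for whatever acoustic and visual encoders the authors use.

```python
import numpy as np

def word_to_image_similarity(word_emb, patch_embs):
    """Score how well an image depicts a spoken query word.

    word_emb:   (d,)   embedding of the spoken query word
    patch_embs: (n, d) embeddings of n image regions/patches
    (names and shapes are illustrative, not taken from the paper)
    """
    # Dot-product score between the word and each image patch
    scores = patch_embs @ word_emb                  # (n,)
    # Softmax attention over patches: concentrate on regions
    # that best match the spoken word (numerically stabilised)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Image-level similarity = attention-weighted patch score
    return float(weights @ scores)

def pick_image(word_emb, candidate_images):
    """Given a spoken query, return the index of the image
    with the highest word-to-image similarity."""
    sims = [word_to_image_similarity(word_emb, p)
            for p in candidate_images]
    return int(np.argmax(sims))
```

In this sketch the attention weights let one matching region dominate the image-level score, so an image depicting the query word among unrelated content can still outrank an image with no matching region at all.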

Files

2306.11371v3.pdf

Size: 19.1 MB
md5:425976eaeeee44afacb1367ce4787dd8

Additional details

Funding

European Commission
AI4TRUST – AI-based technologies for trustworthy solutions against disinformation (grant 101070190)