Published February 4, 2025 | Version v1
Conference paper | Open Access

Is CLIP the main roadblock for fine-grained open-world perception?

  • 1. CNR, Institute of Information Science and Technologies "Alessandro Faedo" - ISTI
  • 2. University of Pisa
  • 3. Istituto di Scienza e Tecnologie dell'Informazione Alessandro Faedo, Consiglio Nazionale delle Ricerche
  • 4. Consiglio Nazionale delle Ricerche, Area della Ricerca di Pisa
  • 5. National Research Council

Description

Modern applications increasingly demand flexible computer vision models that can adapt to novel concepts not encountered during training. This need is pivotal in emerging domains like extended reality, robotics, and autonomous driving, which require the ability to respond to open-world stimuli. A key ingredient is the ability to identify objects based on free-form textual queries defined at inference time - a task known as open-vocabulary object detection. Multimodal backbones like CLIP are the main enabling technology for current open-world perception solutions. Despite performing well on generic queries, recent studies have highlighted limitations in their fine-grained recognition capabilities in open-vocabulary settings - i.e., in distinguishing subtle object features like color, shape, and material. In this paper, we perform a detailed examination of these open-vocabulary object recognition limitations to find their root cause. We evaluate the performance of CLIP, the most commonly used vision-language backbone, on a fine-grained object-matching benchmark, revealing interesting analogies between the limitations of open-vocabulary object detectors and those of their backbones. Experiments suggest that the lack of fine-grained understanding is caused by the poor separability of object characteristics in the CLIP latent space. We therefore investigate whether fine-grained knowledge is present in CLIP embeddings but not exploited at inference time due, for example, to the unsuitability of the cosine similarity matching function, which may discard important object characteristics. Our preliminary experiments show that simple CLIP latent-space re-projections help separate fine-grained concepts, paving the way towards the development of backbones inherently able to process fine-grained details. The code for reproducing these experiments is available at https://github.com/lorebianchi98/FG-CLIP.
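To make the matching setup described above concrete, the sketch below shows the standard CLIP pipeline (cosine similarity between image and text embeddings for fine-grained queries) followed by a linear re-projection of the latent space. It is a minimal illustration, not the paper's exact implementation: it assumes the open-source `clip` package from openai/CLIP, an arbitrary example image `object.jpg`, and an untrained, hypothetical re-projection head; in the paper's setting such a head would be learned so that attribute differences become separable.

```python
# Minimal sketch of CLIP fine-grained matching and a latent-space re-projection.
# Assumes the openai/CLIP package (`pip install git+https://github.com/openai/CLIP.git`)
# and an example image `object.jpg`; the linear head below is illustrative only.
import torch
import torch.nn as nn
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Fine-grained queries that differ only in a subtle attribute (here, color).
queries = ["a photo of a red mug", "a photo of a blue mug"]
text_tokens = clip.tokenize(queries).to(device)
image = preprocess(Image.open("object.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img_emb = model.encode_image(image).float()
    txt_emb = model.encode_text(text_tokens).float()

# Standard CLIP matching: cosine similarity in the shared latent space.
img_emb = nn.functional.normalize(img_emb, dim=-1)
txt_emb = nn.functional.normalize(txt_emb, dim=-1)
cosine_scores = img_emb @ txt_emb.T  # shape: (1, num_queries)
print("cosine similarities:", cosine_scores.squeeze(0).tolist())

# Hypothetical linear re-projection of the CLIP latent space. In practice such a
# head would be trained on attribute-labelled data so that embeddings of objects
# differing only in fine-grained details (color, material, ...) are pulled apart.
reproj = nn.Linear(img_emb.shape[-1], img_emb.shape[-1], bias=False).to(device)
with torch.no_grad():
    img_r = nn.functional.normalize(reproj(img_emb), dim=-1)
    txt_r = nn.functional.normalize(reproj(txt_emb), dim=-1)
    reproj_scores = img_r @ txt_r.T
print("re-projected similarities:", reproj_scores.squeeze(0).tolist())
```

Matching after the re-projection still uses cosine similarity; only the space in which the embeddings are compared changes, which is the kind of simple intervention the experiments probe.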

Files

CBMI2024.pdf (1.5 MB)
md5:f6d50530510d54f18cb3f3f3d21a762d

Additional details

Funding

European Commission
SUN - Social and hUman ceNtered XR (101092612)