Published September 15, 2022 | Version v1
Thesis | Open Access

Multimodal Urban Scene Understanding

  • Singh, Rajsuryan (Universitat Pompeu Fabra)

Contributors

  • Universitat Pompeu Fabra

Description

Early computational approaches to sound source localization, originating in robotics, were modeled after animal perception and relied on audiovisual synchrony and on spatial information inferred from multichannel audio. More recent deep-learning-based methods focus on learning semantic audiovisual representations in a self-supervised manner and using them to localize sounding objects. Most of these approaches exclude, by design, the information carried by the temporal context of a video. This is not a hurdle on widely used benchmark datasets, which are biased towards large, single objects centered in the image, but the methods fall short in more challenging scenarios such as urban traffic videos. This thesis explores methods for introducing temporal context into state-of-the-art approaches to sound source localization in urban scenes. Optical flow is used as a means to encode motion information. An analysis of the strengths and weaknesses of our methods helps us better understand the problem of visual sound source localization and sheds new light on the characteristics of our dataset.
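
The thesis does not reproduce its implementation here, but as a rough illustration of the motion-encoding idea, the sketch below computes dense optical flow between consecutive frames. It assumes OpenCV's Farneback estimator and a hypothetical input clip, traffic_clip.mp4; the actual flow estimator used in the thesis may differ.

    # Minimal sketch (not the thesis code): dense optical flow as a motion cue.
    import cv2
    import numpy as np

    def optical_flow_frames(video_path):
        """Yield a dense flow field of shape (H, W, 2) per consecutive frame pair."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        if not ok:
            return
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Per-pixel (dx, dy) displacement between the two grayscale frames.
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, gray, None,
                pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            yield flow
            prev_gray = gray
        cap.release()

    # The flow magnitude highlights moving regions (e.g., passing vehicles),
    # a motion cue that can complement appearance features for localization.
    for flow in optical_flow_frames("traffic_clip.mp4"):  # hypothetical clip
        magnitude = np.linalg.norm(flow, axis=-1)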

Files

2022_Rajsuryan-Singh.pdf (4.4 MB)
md5:4ab1395627471c5ef6670c3c34b346af