Published November 15, 2021 | Version v1
Conference paper | Open Access

Assessment of Self-Attention on Learned Features For Sound Event Localization and Detection

Audio Research Group, Tampere University, Finland

Description

Joint sound event localization and detection (SELD) is an emerging audio signal processing task that adds spatial dimensions to acoustic scene analysis and sound event detection. A popular approach to modeling SELD jointly is the convolutional recurrent neural network (CRNN), where CNNs learn high-level features from multi-channel audio input and RNNs learn temporal relationships from these high-level features. However, RNNs have some drawbacks, such as a limited capability to model long temporal dependencies and slow training and inference times due to their sequential processing nature. Recently, a few SELD studies have used multi-head self-attention (MHSA), among other innovations, in their models. MHSA and the related transformer networks have shown state-of-the-art performance in various domains; they can model long temporal dependencies and can also be parallelized efficiently. In this paper, we study in detail the effect of MHSA on the SELD task. Specifically, we examine the effect of replacing the RNN blocks with self-attention layers, and we study the influence of stacking multiple self-attention blocks, of using multiple attention heads in each self-attention block, and of position embeddings and layer normalization. Evaluation on the DCASE 2021 SELD (Task 3) development data set shows a significant improvement in all employed metrics compared to the baseline CRNN accompanying the task.
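
To make the studied architecture family concrete, below is a minimal, hypothetical PyTorch sketch of a CNN front-end whose learned features feed stacked multi-head self-attention blocks in place of the CRNN's recurrent layers, with learned position embeddings and layer normalization. This is not the authors' implementation: the class names (CnnMhsaSeld, SelfAttentionBlock), layer sizes, pooling choices, number of blocks and heads, and the output head are all illustrative assumptions.

    # Minimal sketch (not the authors' code) of a CNN + stacked MHSA model for SELD.
    # All hyperparameters below are illustrative assumptions, not values from the paper.
    import torch
    import torch.nn as nn


    class SelfAttentionBlock(nn.Module):
        """One MHSA block: multi-head self-attention followed by layer normalization."""

        def __init__(self, dim: int, num_heads: int):
            super().__init__()
            self.mhsa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, dim); self-attention lets every frame attend to all frames.
            attn_out, _ = self.mhsa(x, x, x)
            return self.norm(x + attn_out)  # residual connection + layer norm


    class CnnMhsaSeld(nn.Module):
        """CNN feature extractor + stacked MHSA blocks replacing the CRNN's RNN part."""

        def __init__(self, in_channels: int = 7, dim: int = 128,
                     num_blocks: int = 2, num_heads: int = 8,
                     max_frames: int = 600, num_outputs: int = 36):
            super().__init__()
            # CNN front-end learning high-level features from multi-channel audio input.
            self.cnn = nn.Sequential(
                nn.Conv2d(in_channels, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
                nn.MaxPool2d((1, 4)),
                nn.Conv2d(64, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(),
                nn.AdaptiveAvgPool2d((None, 1)),  # collapse the frequency axis
            )
            # Learned position embeddings over time frames (one of the studied options).
            self.pos_emb = nn.Parameter(torch.zeros(1, max_frames, dim))
            self.blocks = nn.ModuleList(
                SelfAttentionBlock(dim, num_heads) for _ in range(num_blocks)
            )
            self.head = nn.Linear(dim, num_outputs)  # illustrative SELD output layer

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, time, freq) multi-channel spectrogram features.
            feats = self.cnn(x).squeeze(-1).transpose(1, 2)   # -> (batch, time, dim)
            feats = feats + self.pos_emb[:, : feats.size(1)]
            for block in self.blocks:
                feats = block(feats)
            return self.head(feats)


    if __name__ == "__main__":
        model = CnnMhsaSeld()
        dummy = torch.randn(2, 7, 600, 64)   # batch of 7-channel, 600-frame inputs
        print(model(dummy).shape)            # torch.Size([2, 600, 36])

In this sketch, varying num_blocks and num_heads corresponds to the stacking and multi-head configurations examined in the paper, and pos_emb and the LayerNorm inside each block correspond to the position-embedding and layer-normalization choices studied.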

Notes

The authors wish to acknowledge CSC – IT Center for Science, Finland, for computational resources. K. Drossos has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 957337, project MARVEL.

Files

DCASE2021_Sudarsanam_et_al_SELD_self_attention.pdf (219.2 kB)

Additional details

Related works

Is published in
Conference paper: 10.5281/zenodo.5770113 (DOI)
Is supplemented by
Dataset: 10.5281/zenodo.4844825 (DOI)

Funding

MARVEL – Multimodal Extreme Scale Data Analytics for Smart Cities Environments (grant agreement No 957337)
European Commission