Data Corpus for the IEEE-AASP Challenge on Acoustic Source Localization and Tracking (LOCATA)
Creators
- 1. Imperial College London
- 2. Friedrich-Alexander-Universität Erlangen-Nürnberg
- 3. Humboldt-Universität zu Berlin
Description
This repository contains the final release of the development and evaluation datasets for the LOCATA Challenge.
The challenge of sound source localization in realistic environments has attracted widespread attention in the Audio and Acoustic Signal Processing (AASP) community in recent years. Source localization approaches in the literature address the estimation of positional information about acoustic sources using a pair of microphones, microphone arrays, or networks with distributed acoustic sensors. The IEEE AASP Challenge on acoustic source LOCalization And TrAcking (LOCATA) aimed at providing researchers in source localization and tracking with a framework to objectively benchmark results against competing algorithms using a common, publicly released data corpus that encompasses a range of realistic scenarios in an enclosed acoustic environment.
Four different microphone arrays were used for the recordings, namely:
- Planar array with 15 channels (DICIT array) containing uniform linear sub-arrays
- Spherical array with 32 channels (Eigenmike)
- Pseudo-spherical array with 12 channels (robot head)
- Hearing aid dummies on a dummy head (2 channels per hearing aid).
An optical tracking system (OptiTrack) was used to record the positions and orientations of the talkers, loudspeakers, and microphone arrays. Moreover, the emitted source signals were recorded to determine the voice activity periods of each source separately in the recorded signals. The ground-truth values are compared against the estimates submitted by the participants using several criteria that evaluate the accuracy of the estimated directions of arrival (DOAs) and the track-to-source association.
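As an illustration of a DOA accuracy criterion of the kind mentioned above, the sketch below computes the great-circle angle between an estimated and a ground-truth direction of arrival. This is a generic Python sketch, not the challenge's official Matlab evaluation code; the azimuth/elevation convention is an assumption and may differ from the corpus documentation.

```python
import numpy as np

def doa_to_unit_vector(azimuth, elevation):
    """Convert azimuth/elevation (radians) to a Cartesian unit vector.

    Assumed convention: azimuth in the x-y plane, elevation from the plane.
    """
    return np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])

def angular_error(az_est, el_est, az_ref, el_ref):
    """Great-circle angle (radians) between estimated and reference DOAs."""
    u = doa_to_unit_vector(az_est, el_est)
    v = doa_to_unit_vector(az_ref, el_ref)
    # Clip guards against rounding pushing the dot product outside [-1, 1].
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
```

For example, an estimate at azimuth 90° for a source at azimuth 0° (both at zero elevation) yields an error of 90°.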
The datasets encompass the following six, increasingly challenging, scenarios:
- Task 1: Localization of a single, static loudspeaker using static microphone arrays
- Task 2: Multi-source localization of static loudspeakers using static microphone arrays
- Task 3: Localization of a single, moving talker using static microphone arrays
- Task 4: Localization of multiple, moving talkers using static microphone arrays
- Task 5: Localization of a single, moving talker using moving microphone arrays
- Task 6: Multi-source localization of moving talkers using moving microphone arrays.
The development and evaluation datasets in this repository contain the following data:
- Close-talking speech signals for human talkers, recorded using DPA microphones
- Distant-talking recordings using four microphone arrays:
- Spherical Eigenmike (32 channels)
- Pseudo-spherical prototype NAO robot (12 channels)
- Planar DICIT array (15 channels)
- Hearing aids installed in a head-torso simulator (4 channels)
- Ground-truth annotations of all source and microphone positions, obtained using an OptiTrack system of infrared cameras. The ground-truth positions are provided at the frame rate of the optical tracking system.
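Because the ground truth is sampled at the optical tracker's frame rate rather than at the audio frame rate, users typically need to resample it onto their own analysis timestamps. A minimal sketch, assuming per-axis linear interpolation is adequate for the tracker's frame rate (the function and variable names are illustrative, not part of the corpus tooling):

```python
import numpy as np

def resample_ground_truth(gt_times, gt_positions, query_times):
    """Linearly interpolate tracker positions onto arbitrary timestamps.

    gt_times:     (N,) timestamps of the optical tracking frames, in seconds
    gt_positions: (N, 3) Cartesian positions, in metres
    query_times:  (M,) timestamps at which positions are needed
    Returns an (M, 3) array of interpolated positions.
    """
    gt_times = np.asarray(gt_times, dtype=float)
    gt_positions = np.asarray(gt_positions, dtype=float)
    # Interpolate each Cartesian axis independently.
    return np.stack(
        [np.interp(query_times, gt_times, gt_positions[:, k]) for k in range(3)],
        axis=1,
    )
```

For orientations, spherical (e.g. quaternion slerp) rather than linear interpolation would be the appropriate choice.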
The following software is provided with the data:
- Matlab code to read the datasets: github.com/cevers/sap_locata_io
- Matlab code for performance evaluation of localization and tracking algorithms: github.com/cevers/sap_locata_eval
For further information, see:
- C. Evers, H. W. Löllmann, H. Mellmann, A. Schmidt, H. Barfuss, P. A. Naylor, and W. Kellermann, "The LOCATA Challenge: Acoustic Source Localization and Tracking," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 1620-1643, 2020, doi: 10.1109/TASLP.2020.2990485
- Documentation: https://www.locata.lms.tf.fau.de/files/2020/01/Documentation_LOCATA_final_release_V1.pdf