Dataset Open Access

Data Corpus for the IEEE-AASP Challenge on Acoustic Source Localization and Tracking (LOCATA)

Evers, Christine; Loellmann, Heinrich; Mellmann, Heinrich; Schmidt, Alexander; Barfuss, Hendrik; Naylor, Patrick A.; Kellermann, Walter


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Evers, Christine</dc:creator>
  <dc:creator>Loellmann, Heinrich</dc:creator>
  <dc:creator>Mellmann, Heinrich</dc:creator>
  <dc:creator>Schmidt, Alexander</dc:creator>
  <dc:creator>Barfuss, Hendrik</dc:creator>
  <dc:creator>Naylor, Patrick A.</dc:creator>
  <dc:creator>Kellermann, Walter</dc:creator>
  <dc:date>2020-01-31</dc:date>
  <dc:description>This repository contains the final release of the development and evaluation datasets for the LOCATA Challenge.

The challenge of sound source localization in realistic environments has attracted widespread attention in the Audio and Acoustic Signal Processing (AASP) community in recent years. Source localization approaches in the literature address the estimation of positional information about acoustic sources using a pair of microphones, microphone arrays, or networks with distributed acoustic sensors. The IEEE AASP Challenge on acoustic source LOCalization And TrAcking (LOCATA) aimed at providing researchers in source localization and tracking with a framework to objectively benchmark results against competing algorithms using a common, publicly released data corpus that encompasses a range of realistic scenarios in an enclosed acoustic environment.

Four different microphone arrays were used for the recordings, namely:


	Planar array with 15 channels (DICIT array) containing uniform linear sub-arrays
	Spherical array with 32 channels (Eigenmike)
	Pseudo-spherical array with 12 channels (robot head)
	Hearing aid dummies on a dummy head (2 channels per hearing aid)


An optical tracking system (OptiTrack) was used to record the positions and orientations of talkers, loudspeakers, and microphone arrays. Moreover, the emitted source signals were recorded to determine voice activity periods in the recorded signals for each source separately. The ground truth values are compared to the estimated values submitted by the participants using several criteria to evaluate the accuracy of the estimated directions of arrival and track-to-source association.

The datasets encompass the following six, increasingly challenging, scenarios:


	Task 1: Localization of a single, static loudspeaker using static microphone arrays
	Task 2: Multi-source localization of static loudspeakers using static microphone arrays
	Task 3: Localization of a single, moving talker using static microphone arrays
	Task 4: Localization of multiple, moving talkers using static microphone arrays
	Task 5: Localization of a single, moving talker using moving microphone arrays
	Task 6: Multi-source localization of moving talkers using moving microphone arrays.


The development and evaluation datasets in this repository contain the following data:


	Close-talking speech signals for human talkers, recorded using DPA microphones
	Distant-talking recordings using four microphone arrays:
		Spherical Eigenmike (32 channels)
		Pseudo-spherical prototype NAO robot (12 channels)
		Planar DICIT array (15 channels)
		Hearing aids installed in a head-torso simulator (4 channels)
	Ground-truth annotations of all source and microphone positions, obtained using an OptiTrack system of infrared cameras. The ground-truth positions are provided at the frame rate of the optical tracking system.


The following software is provided with the data:


	Matlab code to read the datasets: github.com/cevers/sap_locata_io
	Matlab code for performance evaluation of localization and tracking algorithms: github.com/cevers/sap_locata_eval


For further information, see:


	C. Evers, H. W. Löllmann, H. Mellmann, A. Schmidt, H. Barfuss, P. A. Naylor, W. Kellermann
	"The LOCATA Challenge: Acoustic Source Localization and Tracking," in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 1620-1643, 2020, doi: 10.1109/TASLP.2020.2990485
	Documentation: https://www.locata.lms.tf.fau.de/files/2020/01/Documentation_LOCATA_final_release_V1.pdf
</dc:description>
  <dc:identifier>https://zenodo.org/record/3630471</dc:identifier>
  <dc:identifier>10.5281/zenodo.3630471</dc:identifier>
  <dc:identifier>oai:zenodo.org:3630471</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:relation>doi:10.5281/zenodo.3630470</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://opendatacommons.org/licenses/by/1.0/</dc:rights>
  <dc:title>Data Corpus for the IEEE-AASP Challenge on Acoustic Source Localization and Tracking (LOCATA)</dc:title>
  <dc:type>info:eu-repo/semantics/other</dc:type>
  <dc:type>dataset</dc:type>
</oai_dc:dc>