Published April 18, 2024 | Version 1.3.1
Dataset · Open Access

Soundscape Attributes Translation Project (SATP) Dataset

  • 1. University College London
  • 2. Shimane University
  • 3. Chongqing University
  • 4. National Laboratory for Civil Engineering, Portugal
  • 5. University of Mohamed Kheider-Biskra
  • 6. Nanyang Technological University
  • 7. Universiti Putra Malaysia
  • 8. Stockholm University
  • 9. Politecnico di Torino
  • 10. Institut Teknologi Bandung
  • 11. Fukushima University
  • 12. Hanyang University
  • 13. University of Chile
  • 14. Université Gustave Eiffel
  • 15. University of Granada
  • 16. Technical University of Crete
  • 17. University of Groningen
  • 18. Müller-BBM GmbH
  • 19. University Austral of Chile
  • 20. Chungnam National University
  • 21. Federal University of Goiás
  • 22. University of Zagreb
  • 23. Blida University
  • 24. Cankaya University
  • 25. Shenyang Jianzhu University
  • 26. Universidad de Granada
  • 27. Technical University of Berlin
  • 28. University of São Paulo
  • 29. Biskra University
  • 30. CY Cergy Paris University

Description

The data and audio included here were collected for the Soundscape Attributes Translation Project (SATP). First introduced in Aletta et al. (2020), the SATP aims to provide validated translations of soundscape attributes into languages other than English. The recordings were used in headphone-based listening experiments.

The data are provided to accompany publications resulting from this project and to offer a unique dataset of thousands of perceptual responses to a standardised set of urban soundscape recordings. This dataset is the result of efforts from hundreds of researchers, students, assistants, PIs, and participants from institutions around the world. We have attempted to list every contributor to this Zenodo record; if you believe you should be included, please get in touch.

Citation: If you use the SATP dataset or part of it, please cite our paper describing the data collection and this dataset itself.

Overview: The SATP dataset consists of twenty-seven 30-second binaural audio recordings made in urban public spaces in London, plus one 60-second stereo calibration signal.

The recordings were made at the locations reported in Table 1 of the README.md (Recording locations), at various times of day, by an operator wearing a binaural kit consisting of BHS II microphones and a SQobold device (HEAD acoustics). Recordings were then exported to WAV via the ArtemiS SUITE software, preserving the original dynamic range of the HDF files. The listening experiment and the calibration procedure were designed for headphone playback (Sennheiser HD650 or similar open-back headphones recommended).

The recordings were selected from an initial set of 80 recordings through a pilot study, to ensure the test set evenly covered the soundscape circumplex space. These recordings were sent to the partner institutions (see Table 2 of the README.md) and assessed by approximately 30 participants per institution in that institution's target language. The questionnaire used in each assessment is a translation of the Method A questionnaire from ISO/TS 12913-2:2018. Each institution carried out its own lab experiment to collect data, then submitted its data to the team at UCL for compilation into a single dataset. Some institutions included additional questions or translation options; the combined dataset (`SATP Dataset v1.x.xlsx`) includes only the base set of questions, while the extended question sets from each institution are included in the `Institution Datasets` folder.

In all, SATP Dataset v1.4 contains 19,089 samples from 707 participants, covering 27 recordings in 18 languages, with contributions from 29 institutions.

Descriptions of the recordings, including GPS coordinates and sound sources, can be found in the README.md file.

Format: The audio recordings are provided as 24-bit, 48 kHz, stereo WAV files. The combined dataset and the institutional datasets are provided as long (tidy) data tables in .xlsx files.
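To illustrate what a long (tidy) table of perceptual responses looks like and how it can be reshaped for analysis, here is a minimal sketch using pandas. The column names (`participant`, `recording`, `attribute`, `rating`) and the ratings shown are hypothetical placeholders, not the actual schema of `SATP Dataset v1.x.xlsx`; consult the README.md for the real column definitions.

```python
# Hypothetical sketch of a long/tidy response table; the real column
# names and scale in the SATP .xlsx files may differ.
import pandas as pd

# One row per (participant, recording, attribute) response.
long_df = pd.DataFrame(
    {
        "participant": ["P01", "P01", "P02", "P02"],
        "recording": ["R01", "R01", "R01", "R01"],
        "attribute": ["pleasant", "eventful", "pleasant", "eventful"],
        "rating": [4, 2, 3, 5],  # placeholder values on a 5-point scale
    }
)

# Pivot to wide form: one row per participant x recording,
# one column per perceptual attribute.
wide = long_df.pivot_table(
    index=["participant", "recording"],
    columns="attribute",
    values="rating",
)
```

The same `pivot_table` call applies to the real combined dataset once it is loaded (e.g. with `pd.read_excel`), provided the actual column names are substituted.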

Calibration: The recommended calibration approach is based on the open-circuit voltage (OCV) procedure, which was considered the most accessible, though other calibration procedures are also possible (Lam et al., 2022). The provided calibration file is a computer-generated 1 kHz sine wave, matched to a sine wave recorded with the exact same setup at an SPL of 94 dB. If the playback level of the calibration signal is set to reproduce 94 dB SPL at the eardrum, all 27 samples will be reproduced at realistic loudness. More details on the OCV calibration procedure and other options can be found in Lam et al. (2022) and the attached documentation. PLEASE DO NOT EXPOSE YOURSELF OR THE PARTICIPANTS TO THE CALIBRATION SIGNAL SET AT THE REALISTIC LEVEL, AS IT CAN CAUSE HARM.

License and reuse: All SATP recordings are provided under the Creative Commons Attribution 4.0 International (CC BY 4.0) License and are free to use. We encourage other researchers to replicate the SATP protocol and contribute new languages to the dataset. We also encourage the use of these recordings and the perceptual data for further soundscape research purposes. Please provide the proper attribution and get in touch with the authors if you would like to contribute a new translation or for any other collaborations.

Notes

Additional funding provided by the UCL Cities Partnership Programme

Files

README.md

Files (203.6 MB)


Additional details

Funding

European Commission
SSID – Soundscape Indices (grant no. 740696)

References

  • Aletta et al. (2020). Soundscape assessment: towards a validated translation of perceptual attributes in different languages. Proceedings of Internoise 2020, Seoul.