Synth-Salience Choral Set
Description
The Synth-Salience Choral Set (SSCS) is a publicly available dataset created to support research on voice assignment based on pitch salience.
By definition, an "ideal" pitch salience representation of a music recording is zero wherever there is no perceptible pitch, and has a positive value reflecting the perceived energy of each pitch at the frequency bins of the corresponding F0 values. In practice, for a normalized synthetic pitch salience function, we assume the maximum salience value, i.e., 1, in the time-frequency bins that correspond to the notes present in a song, and 0 elsewhere. We obtain such a synthetic pitch salience representation directly by processing the digital score (MusicXML, MIDI) of a music piece, using the desired time and frequency quantization, i.e., a time-frequency grid.
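As an illustration, the sketch below builds such a binary salience matrix from a list of note events. The grid parameters (hop size, 20-cent bins, frequency range) are assumptions chosen for the example, not necessarily the settings used to generate SSCS:

```python
import numpy as np

# Assumed grid parameters (illustrative, not necessarily the SSCS settings):
FS = 22050              # reference sample rate in Hz
HOP = 256               # hop size in samples -> ~11.6 ms time resolution
F_MIN = 32.7            # lowest bin frequency in Hz (roughly C1)
BINS_PER_OCTAVE = 60    # 20-cent frequency resolution
N_BINS = 360            # 6 octaves

def midi_to_hz(pitch):
    """Convert a MIDI pitch number to frequency in Hz (A4 = 440 Hz)."""
    return 440.0 * 2.0 ** ((pitch - 69) / 12.0)

def freq_to_bin(f_hz):
    """Map a frequency in Hz to the nearest log-spaced frequency bin."""
    return int(round(BINS_PER_OCTAVE * np.log2(f_hz / F_MIN)))

def synth_salience(notes, duration_s):
    """Build a binary salience matrix from (onset_s, offset_s, midi_pitch) notes.

    Time-frequency bins covered by a note get salience 1; all others stay 0.
    """
    n_frames = int(np.ceil(duration_s * FS / HOP))
    salience = np.zeros((N_BINS, n_frames), dtype=np.float32)
    for onset, offset, pitch in notes:
        b = freq_to_bin(midi_to_hz(pitch))
        if 0 <= b < N_BINS:
            t0 = int(onset * FS / HOP)
            t1 = int(offset * FS / HOP)
            salience[b, t0:t1] = 1.0
    return salience

# Example: a single A4 (MIDI 69) held from t = 0.5 s to t = 2.5 s.
sal = synth_salience([(0.5, 2.5, 69)], duration_s=3.0)
```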
To build the SSCS, we collect scores of four-part (SATB) a cappella choral music from the Choral Public Domain Library (CPDL) using their API. We assemble a collection of 5381 scores in MusicXML format, which we subsequently convert into MIDI files for easier parsing.
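This conversion step can be reproduced with standard tooling; below is a minimal sketch using the music21 library. The directory names are hypothetical, and the authors' exact conversion pipeline may differ:

```python
from pathlib import Path
from music21 import converter

def xml_to_midi(xml_path, midi_path):
    """Parse a MusicXML score and write it back out as a MIDI file."""
    score = converter.parse(xml_path)
    score.write('midi', fp=midi_path)

# Hypothetical input/output directories:
Path('scores_midi').mkdir(exist_ok=True)
for xml_file in Path('scores_xml').glob('*.xml'):
    xml_to_midi(str(xml_file), str(Path('scores_midi') / (xml_file.stem + '.mid')))
```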
Each song in the dataset comprises five CSV files: one with the polyphonic pitch salience representation of all four voices (*_mix.csv) and four additional files with the monophonic pitch salience representation of each individual voice (*_S/A/T/B.csv). In each case, the asterisk stands for the name of the song, which is shared by all representations of the same song.
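A minimal loading sketch is shown below. It assumes each CSV stores a plain numeric time-frequency matrix as comma-separated values; inspect the released files to confirm the exact layout (headers, index columns):

```python
import numpy as np

def load_song(song_name, data_dir='SSCS'):
    """Load the mixture and per-voice salience CSVs for one song.

    Assumes each CSV holds a plain numeric matrix; adjust delimiter or
    skiprows if the released files include headers or index columns.
    """
    parts = {}
    for tag in ('mix', 'S', 'A', 'T', 'B'):
        path = f'{data_dir}/{song_name}_{tag}.csv'
        parts[tag] = np.loadtxt(path, delimiter=',')
    return parts

# Hypothetical usage with a song called "example_song":
# data = load_song('example_song')
# print(data['mix'].shape, data['S'].shape)
```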
Besides the pitch salience files, we provide a metadata CSV file (sscs_metadata.csv) that lists the associated CPDL URL for each song in the dataset. Note that this dataset contains the input/output features used in the cited study, i.e., salience functions, and neither audio files nor scores. However, the accompanying metadata file allows researchers to access the associated open-access scores for each example in the dataset.
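For example, the metadata file can be queried as sketched below; the column names ('song', 'url') are assumptions and should be checked against the actual file:

```python
import pandas as pd

meta = pd.read_csv('sscs_metadata.csv')
print(meta.head())  # inspect the real column names first

# Hypothetical lookup of the CPDL URL for one song,
# assuming columns named 'song' and 'url':
# url = meta.loc[meta['song'] == 'example_song', 'url'].iloc[0]
```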
When using this dataset for your research, please cite:
Helena Cuesta and Emilia Gómez (2022). Voice Assignment in Vocal Quartets using Deep Learning Models based on Pitch Salience. Transactions of the International Society for Music Information Retrieval (TISMIR). To appear.
Helena Cuesta (2022). Data-driven Pitch Content Description of Choral Singing Recordings. PhD thesis. Universitat Pompeu Fabra, Barcelona.
Files
Name | Size | MD5
---|---|---
SynthSalienceChoralSet_v1.zip | 2.3 GB | md5:6b83cef701b3c4703af741b55618569a
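After downloading, the archive's integrity can be verified against the published MD5 checksum, e.g.:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

assert md5sum('SynthSalienceChoralSet_v1.zip') == '6b83cef701b3c4703af741b55618569a'
```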