Thesis Open Access

SATB Voice Segregation For Monoaural Recordings

Pétermann, Darius A.


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Pétermann, Darius A.</dc:creator>
  <dc:date>2020-09-15</dc:date>
  <dc:description>Choral singing is a widely practiced form of ensemble singing in which a group of people sing simultaneously in polyphonic harmony. The most common setting for choir ensembles consists of four parts: Soprano, Alto, Tenor and Bass (SATB), each with its own range of fundamental frequencies (F0s). The task of source separation for this choral setting entails separating the SATB mixture into its constituent parts. Source separation for musical mixtures is well studied, and many Deep Learning-based methodologies have been proposed for it. However, most of the research has focused on the typical case of separating vocal, percussion and bass sources from a mixture, each of which has a distinct spectral structure. In contrast, the simultaneous and harmonic nature of ensemble singing leads to high structural similarity and overlap between the spectral components of the sources in a choral mixture, making source separation for choirs a harder task than the typical case. This, along with the lack of an appropriate consolidated dataset, has led to a dearth of research in the field so far. In this work we first assess how well some recently developed methodologies for musical source separation perform for the case of SATB choirs. We then propose a novel domain-specific adaptation for conditioning the recently proposed U-Net architecture for musical source separation on the fundamental frequency contour of each of the singing groups, and demonstrate that our proposed approach surpasses results from domain-agnostic architectures. Lastly, we assess our approach using different evaluation methodologies, ranging from objective to subjective ones, and provide a comparative analysis of the results.</dc:description>
  <dc:identifier>https://zenodo.org/record/4091247</dc:identifier>
  <dc:identifier>10.5281/zenodo.4091247</dc:identifier>
  <dc:identifier>oai:zenodo.org:4091247</dc:identifier>
  <dc:relation>doi:10.5281/zenodo.4091246</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/mtgupf</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/smc-master</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/3.0/legalcode</dc:rights>
  <dc:subject>source separation, singing voice, SATB recording, convolutional neural networks, choir music</dc:subject>
  <dc:title>SATB Voice Segregation For Monoaural Recordings</dc:title>
  <dc:type>info:eu-repo/semantics/doctoralThesis</dc:type>
  <dc:type>publication-thesis</dc:type>
</oai_dc:dc>
