Thesis Open Access
Francesc Lluís Salvadó
Currently, most successful source separation techniques use magnitude spectrograms as input and therefore discard part of the signal by default: the phase. To avoid discarding potentially useful information, we propose an end-to-end learning model for music source separation based on Wavenet. The proposed model operates directly on the waveform, and can thus exploit any information available in the raw audio signal. Since the original Wavenet model operates sequentially (i.e., it is not parallelizable and hence slow), we use a discriminative, non-causal adaptation of Wavenet capable of predicting more than one sample at a time, thereby overcoming the undesirable time complexity of the original model. Further, we investigate several data augmentation techniques and architectural changes to provide insight into which hyper-parameters are most sensitive for this family of Wavenet-like models. Our experimental results show that it is possible to approach the problem of music source separation in an end-to-end learning fashion: our model performs on par with DeepConvSep, a state-of-the-art method based on processing magnitude spectrograms.
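The key architectural idea in the abstract is a non-causal dilated convolution: unlike the causal Wavenet, each output sample may depend on both past and future input samples, and a whole frame of samples is produced in one forward pass rather than one sample at a time. The following is a minimal illustrative sketch of that idea in plain NumPy; the function names, kernel values, and layer count are our own assumptions for illustration, not the thesis's actual architecture.

```python
import numpy as np

def noncausal_dilated_conv1d(x, kernel, dilation):
    """Dilated 1D convolution with symmetric zero padding, so each
    output sample sees both past and future context (non-causal)."""
    k = len(kernel)
    pad = dilation * (k - 1) // 2  # symmetric padding -> non-causal
    xp = np.pad(x, pad)
    return np.array([
        sum(kernel[j] * xp[i + j * dilation] for j in range(k))
        for i in range(len(x))
    ])

# Hypothetical stack of layers with exponentially growing dilations,
# as in Wavenet-like models; the entire output frame is computed in
# one pass, with no sample-by-sample autoregressive loop.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
h = x
for d in (1, 2, 4):
    h = np.tanh(noncausal_dilated_conv1d(h, np.array([0.5, 1.0, 0.5]), d))
print(h.shape)  # all 16 output samples produced at once
```

The single forward pass over the whole frame is what removes the sequential, sample-by-sample generation bottleneck of the original autoregressive Wavenet.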