Nikolaos Stefanakis; Athanasios Mouchtaris
By exploiting correlations in the audio, several past works have demonstrated the ability to automatically match and synchronize User Generated Recordings (UGRs) of the same event. The synchronization process is of fundamental importance, as it provides the basis for combining the different sources of content in order to improve the audiovisual experience of the captured event. In this paper, we show that, depending on the complexity of the sound scene, the time offsets required to synchronize the audio recordings are not unique; they depend on the locations and the activity of the sound sources. Using simulation results, we illustrate that this problem is very likely to occur in athletic events, and we demonstrate how it may impair the listening experience.
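As an illustration of the kind of pairwise alignment the abstract refers to, the sketch below estimates a single time offset between two recordings from the peak of their cross-correlation, a common baseline for audio-based synchronization. It is a minimal sketch, not the authors' method: the paper's point is precisely that for spatially distributed sources no single such offset may be correct. The function name, sampling rate, and synthetic signals are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve

def estimate_offset(x, y, fs):
    # Estimate how much recording y is delayed relative to recording x,
    # in seconds (positive means y starts later), from the lag that
    # maximizes their cross-correlation.
    corr = fftconvolve(y, x[::-1], mode="full")   # cross-correlation via FFT
    lags = np.arange(-(len(x) - 1), len(y))       # lag axis in samples
    return lags[np.argmax(corr)] / fs

# Toy example: y is the same noise-like source signal as x, delayed by 0.5 s.
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(2 * fs)                   # 2 s reference recording
y = np.concatenate([np.zeros(fs // 2), x])        # same content, 0.5 s later
y = y + 0.1 * rng.standard_normal(len(y))         # additive sensor noise
print(estimate_offset(x, y, fs))                  # ~0.5

Aligning the two recordings then amounts to trimming the estimated number of samples from the start of the later one. When several sound sources at different locations dominate different time intervals, this correlation peak shifts with the active source, which is the ambiguity the paper examines.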