Published January 28, 2019 | Version 1.0
Thesis · Open Access

Real-Time Multi-Track Mixing For Live Performance

  • Universitat Pompeu Fabra

Description

Finding the balance of sounds in a multitrack recording is a time-consuming process usually performed by experienced professionals. A poor balance produces a mix in which it is difficult for sounds to stand out or assert their presence, and in which clarity is lost. Auditory masking between tracks is a common problem affecting the presence of instruments in a mix, leaving some elements indistinguishable or inaudible.

This thesis analyses previous research in the field of automatic mixing to find methods for avoiding auditory masking in multitrack performance. A set of tools is developed during this process to help reduce auditory masking, implementing several state-of-the-art techniques as well as a standard measurement of auditory masking.
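The thesis itself defines the exact measurement it uses; purely as an illustration of what a cross-track masking measure can look like, the sketch below counts the fraction of frequency bands in which a competing track carries more energy than the track of interest. The function names, band edges, and thresholds here are my own assumptions, not taken from the thesis.

```python
import numpy as np

def band_energies(signal, sr, bands):
    """Sum of FFT power within each (low_hz, high_hz) band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])

def masking_ratio(maskee, masker, sr, bands):
    """Fraction of the maskee's active bands dominated by the masker.

    A crude stand-in for perceptual masking metrics: a band is counted
    as 'masked' when the competing track carries more energy there.
    """
    e_maskee = band_energies(maskee, sr, bands)
    e_masker = band_energies(masker, sr, bands)
    # Ignore bands where the maskee has essentially no energy.
    active = e_maskee > 1e-12 * e_maskee.max()
    masked = active & (e_masker > e_maskee)
    return masked.sum() / max(active.sum(), 1)
```

For example, a 440 Hz tone measured against a much louder copy of itself yields a ratio of 1.0, and against a much quieter copy a ratio of 0.0. Real masking models (such as those cited below) weight bands perceptually rather than comparing raw energies.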

Previous research in the field of automatic mixing has mainly aimed to improve mixing for studio recordings. This thesis applies that knowledge to real-time performance, where different considerations apply. Unlike similar approaches in the state of the art, the unmasking tool implemented during this thesis is designed to work in real time.

Three versions of a real-time unmasking tool operating between two channels are developed as a result of this research. Each version implements different techniques with similar objectives but different advantages. The results of using these tools are evaluated both quantitatively and qualitatively to find which one performs best.
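The tools' actual algorithms are described in the thesis PDF; as a rough illustration of the kind of two-channel frequency unmasking they perform, the sketch below attenuates a masker frame only in the bins where it overpowers an active bin of the maskee, in the spirit of the masking-reduction equalisation literature cited in the references. The function name, activity threshold, and reduction amount are my own choices for the example.

```python
import numpy as np

def duck_masker(masker_frame, maskee_frame, reduction_db=6.0):
    """Attenuate masker FFT bins that overpower active maskee bins.

    Illustrative only: hard per-bin gating like this causes audible
    artifacts; practical real-time systems smooth the gains over
    time and over frequency bands instead.
    """
    X = np.fft.rfft(masker_frame)
    Y = np.fft.rfft(maskee_frame)
    # A maskee bin counts as active if it is well above the noise floor.
    active = np.abs(Y) > 1e-6 * np.abs(Y).max()
    clash = active & (np.abs(X) > np.abs(Y))
    gains = np.where(clash, 10.0 ** (-reduction_db / 20.0), 1.0)
    return np.fft.irfft(X * gains, n=len(masker_frame))
```

Applied frame by frame (with windowing and overlap-add in a real system), this carves room for the maskee in exactly the regions where the two spectra collide, rather than turning the masker down globally.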

Tools like the ones developed in this thesis have the potential to help both experienced and amateur music producers and performers.

Files

Master Thesis_ Real-time multi-track mixing for live performance - Joaquin Jimenez-Sauma 2019.pdf

Additional details

References

  • De Man, B., Reiss, J. (2017). Ten Years of Automatic Mixing. Queen Mary University of London.
  • Kokkinis, E. (2017). Artificial Intelligence in Music Making: Is a Jobless Future Ahead of Us? Retrieved from: http://www.avidblogs.com/artificial-intelligence-music-making-jobless-future-ahead-us/
  • Moore, B. (2013). An Introduction to the Psychology of Hearing, 6th Edition. Academic Press.
  • Hermes, K. (2017). Towards Measuring Music Mix Quality: The Factors Contributing to the Spectral Clarity of Single Sounds. University of Surrey.
  • Hafezi, S., Reiss, J. (2015). Autonomous Multitrack Equalisation Based on Masking Reduction. Centre for Digital Music, Queen Mary University of London, London, UK.
  • Martinez Ramirez, M., Reiss, J. (2017). Stem Audio Mixing as a Content-Based Transformation of Audio Features. Centre for Digital Music, Queen Mary University of London, London, UK.
  • Aichinger, P., Sontacchi, A., Schneider-Stickler, B. (2011). Describing the Transparency of Mixdowns: The Masked-to-Unmasked Ratio. 130th Audio Engineering Society Convention.
  • Vega, S., Janer, J. (2010). Quantifying Masking in Multi-Track Recordings. Universitat Pompeu Fabra.
  • Puckette, M., Apel, T., Zicarelli, D. (1998). Real-Time Audio Analysis Tools for Pd and MSP. UCSD, Cycling '74.
  • Koria, R. (2016). Real-Time Adaptive Audio Mixing System Using Inter-Spectral Dependencies. Linköping University.
  • Dugan, D. (1975). Automatic Microphone Mixing. Journal of the Audio Engineering Society, vol. 23.
  • Ronan, D., Ma, Z., Mc Namara, P., Gunes, H., Reiss, J. (2018). Automatic Minimisation of Masking in Multitrack Audio Using Subgroups.
  • American National Standards Institute (2004). ANSI S1.11: Specification for Octave, Half-Octave, and Third Octave Band Filter Sets.
  • Wakefield, J., Dewey, C. (2015). An Investigation into the Efficacy of Methods Commonly Employed by Mix Engineers to Reduce Frequency Masking in the Mixing of Multitrack Musical Recordings. University of Huddersfield.
  • Perez-Gonzalez, E., Reiss, J. (2008). Improved Control of Selective Minimisation of Masking Using Inter-Channel Dependency Effects. Centre for Digital Music, Queen Mary University of London.
  • Perez-Gonzalez, E., Reiss, J. (2008). Automatic Equalisation of Multi-Channel Audio Using Cross-Adaptive Methods. Centre for Digital Music, Queen Mary University of London.
  • Gibson, D. (2005). The Art of Mixing: A Visual Guide to Recording, Engineering, and Production. Mix Books.
  • Owsinski, B. (2017). The Mixing Engineer's Handbook, 4th Edition. Bobby Owsinski Media Group.
  • White, P. (2006). Mixing Essentials. Sound On Sound Magazine. Retrieved from: www.soundonsound.com/techniques/mixing-essentials
  • Ma, Z. (2016). Intelligent Tools for Multitrack Frequency and Dynamics Processing. Queen Mary University of London.
  • Verfaille, V., Arfib, D. (2001). A-DAFX: Adaptive Digital Audio Effects. CNRS-LMA.
  • Baytas, M., Göksun, T., Özcan, O. (2016). The Perception of Live-Sequenced Electronic Music via Hearing and Sight. NIME'16.
  • Terrell, M., Simpson, A., Sandler, M. (2014). The Mathematics of Mixing. Queen Mary University of London.
  • Principles of Equalization (2013). Retrieved from: www.izotope.com/en/blog/mixing/principles-of-equalization.html
  • Välimäki, V., Reiss, J. (2016). All About Audio Equalisation: Solutions and Frontiers. Department of Signal Processing and Acoustics, Aalto University.
  • Charles, J. (2008). A Tutorial on Spectral Sound Processing Using Max/MSP and Jitter. Computer Music Journal. MIT Press.
  • Muller, H. (2011). Designing Multithreaded and Multicore Audio Systems. XMOS Ltd., Bristol, UK.
  • Real-Time Musical Applications on an Experimental Operating System for Multi-Core Processors (2011). University of California, Berkeley.
  • Elsea, P. (2013). The Art and Technique of Electroacoustic Music. A-R Editions.
  • Dolson, M. (1986). The Phase Vocoder: A Tutorial. Computer Music Journal, Vol. 10, No. 4.
  • Schuett, N. (2002). The Effects of Latency on Ensemble Performance. Stanford University.
  • Bartlette, C., Headlam, D., Bocko, M., Velikic, G. (2006). Effect of Network Latency on Interactive Musical Performance. Music Perception, Volume 24, Issue 1.
  • Elowsson, A., Friberg, A. (2017). Long-Term Average Spectrum in Popular Music and Its Relation to the Level of the Percussion. KTH Royal Institute of Technology, School of Computer Science and Communication, Speech, Music and Hearing.
  • Duarte Pestana, P., Ma, Z., Reiss, J., Barbosa, A., Black, D. (2013). Spectral Characteristics of Popular Commercial Recordings 1950-2010. Catholic University of Oporto / Queen Mary University of London.