
Published April 29, 2025 | Version 1
Video/Audio | Open

Video Documentation for the Spark project 'Emerging Practices in Modular Synthesis: Towards a Virtual Ethnography for Mixed Realities'

Description

This repository holds a selection of video recordings created during the SNSF Spark project 'Emerging Practices in Modular Synthesis: Towards a Virtual Ethnography for Mixed Realities' between March 2024 and February 2025. The software used in these experiments is OpenSoundLab, which is available as open source on GitHub and Zenodo. A publication titled 'Modular Observers. OpenSoundLab and PatchWorld as Case Studies for Emerging Practices of Modular Synthesis in Extended Realities' is currently under review at the journal 'Organised Sound'.

Description of the files:

  • Anselm Bauer Twitch Stream *.mkv
    These files document how Anselm Bauer integrated OpenSoundLab into his practice as a live-streaming musician.
  • Anselm Bauer.mp4
    This file shows Anselm Bauer's experiments with OpenSoundLab in the context of the research project. These experiments led to a music paper contribution to NIME 2025, co-authored with Ludwig Zeller (see References).
  • CoCreate.mp4
    This file shows how Ludwig Zeller used OpenSoundLab for the 'CoCreate' workshop week at the Basel Academy of Art and Design in September 2024. The video begins with excerpts from Ludwig Zeller's introductory inputs on modular sound design (voice not included) and then shows various experiments and presentations of the students' end results. "Depth Occlusion" was not yet implemented in that version of OpenSoundLab.
  • Dario Klein.mp4
    This file shows Dario Klein's experiments with OpenSoundLab in the context of the research project. Several jam sessions were held at his studio space together with other collaborators.
  • Thomas Meckel.mp4
    This file shows Thomas Meckel's experiments with OpenSoundLab in the context of the research project. Thomas Meckel developed and performed a percussive synthesis patch.

Description of the Spark project:

The study aims to identify the conditions necessary to translate the unique feeling and aura associated with physical modular synthesizers into virtual and mixed reality experiences. To achieve this objective, the research will employ a ‘netnographic’ study approach, analyzing social media content available on YouTube, Discord, Twitch, and Instagram. The focus of the study will be on emerging virtual and mixed reality (VR/MR) modular sound applications such as Mux, Patchworld, SynthVR, SynthSpace VR, and Virtuoso. The research will also enhance existing methodologies of ‘virtual ethnography’ by adapting OpenSoundLab, an open-source mixed reality sound app developed in earlier research. The upgraded version will include multi-user and session recording functions to systematically test specific aspects, such as the degree of skeuomorphic versus abstract presentation. Commercial standalone headsets will be used to record participants’ activities, facial expressions, and eye movements. The distribution of mixed reality headsets and the software tool is considered a new type of cultural probe that allows comprehensive and detailed data about participants’ experiences to be gathered in an unobtrusive manner.

The study aims to provide insights into subcultural perspectives, practices, and values surrounding emerging VR and MR modular sound applications. The results of this research project may help to overcome negative perceptions of digital music and sound production by exploring the combination of spatial affordance, simulated analog imperfections, and visual skeuomorphism. Additionally, the study aims to fill a gap in the existing research on virtual ethnography methods by exploring mixed reality settings. The project is situated at the intersection of human-computer interaction (HCI), interaction and game design, ethnographic methodology, and sonic media research.

Files (14.9 GB)

MD5 checksum                         Size
md5:9e912df2251ff559eb5ff2391f4cf543 670.4 MB
md5:576e6056b318ac74571bc99fd9beb82c 1.4 GB
md5:71f024038db6e351262c9947d149daed 881.5 MB
md5:6301e2bb7eb77851405311ea4fd9d0e5 449.3 MB
md5:37ed49f1d1b01a67e787a8d3c8cb5298 3.0 GB
md5:3c28b5e428d2a4602e9acb89d64d737f 6.5 GB
md5:34a893a4dd7a7db3628d96294582e9dc 1.9 GB
md5:d5407cd5aae540c34b87b5ba5d54d9a2 163.0 MB
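The MD5 checksums above can be used to verify the integrity of downloaded files. Below is a minimal verification sketch in Python; the file name 'Anselm Bauer.mp4' and its pairing with the first checksum are assumptions for illustration only, since the list above does not include file names.

    import hashlib

    def md5sum(path, chunk_size=1 << 20):
        """Compute the MD5 hex digest of a file, reading it in chunks."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Illustrative pairing only: substitute the checksum listed for the file you downloaded.
    expected = "9e912df2251ff559eb5ff2391f4cf543"
    print(md5sum("Anselm Bauer.mp4") == expected)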

Additional details

Related works

Is supplemented by
Software: 10.5281/zenodo.15296640 (DOI)

Funding

Swiss National Science Foundation
Emerging Practices in Modular Synthesis: Towards a Virtual Ethnography for Mixed Realities (grant no. 221307)

Dates

Created
2024-02-28

Software

Repository URL
http://opensoundlab.org

References

  • Ludwig Zeller and Hannes Barfuss (2022). "OpenSoundLab – A virtual sound laboratory for the arts". In: Proceedings of the 17th International Audio Mostly Conference (AM '22). New York: Association for Computing Machinery. 159–162.
  • Ludwig Zeller and Anselm Bauer (2025, accepted). "The Walkable Instrument: Modular Patches as Entangled Environments in OpenSoundLab". In: Proceedings of NIME 2025.