Published June 21, 2019 | Version pre-print
Presentation | Open Access

Multi-sensor capture and network processing for virtual reality conferencing

Description

Recent developments in key technologies such as 5G, Augmented and Virtual Reality (AR/VR), and the Tactile Internet result in new possibilities for communication. In particular, these key digital technologies can enable remote communication, collaboration, and participation in remote experiences. In this demo, we work towards 6-DoF photorealistic shared experiences by introducing a multi-view, multi-sensor end-to-end capture system. Our proposed system acts as a baseline end-to-end system for the capture, transmission, and rendering of volumetric video of user representations. To handle multi-view video processing in a scalable way, we introduce a Multi-point Control Unit (MCU) to shift processing from end devices into the cloud. MCUs are commonly used to bridge videoconferencing connections, and we design and deploy a VR-ready MCU to reduce both upload bandwidth and end-device processing requirements. In our demo, we focus on a remote meeting use case in which multiple people can sit around a table to communicate in a shared VR environment.
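The upload-bandwidth benefit of an MCU topology can be illustrated with a back-of-the-envelope comparison (a hypothetical sketch: the per-stream bit rate and function names are illustrative assumptions, not figures from the paper). In a full-mesh call each client must upload its volumetric stream to every other participant, whereas with an MCU each client uploads a single stream and the cloud handles fan-out and processing.

```python
# Hypothetical sketch comparing per-client upload bandwidth in a full-mesh
# topology versus an MCU topology, assuming each participant produces one
# volumetric stream of `stream_mbps` megabits per second.

def mesh_upload_mbps(n_participants: int, stream_mbps: float) -> float:
    # Full mesh: each client uploads a copy of its stream to every
    # other participant, so upload cost grows linearly with group size.
    return (n_participants - 1) * stream_mbps

def mcu_upload_mbps(stream_mbps: float) -> float:
    # MCU: each client uploads exactly one stream; the cloud MCU
    # performs fan-out and any multi-view processing.
    return stream_mbps

if __name__ == "__main__":
    rate = 50.0  # illustrative volumetric stream rate in Mbps
    for n in (2, 4, 8):
        print(f"{n} participants: mesh upload = "
              f"{mesh_upload_mbps(n, rate):.0f} Mbps, "
              f"MCU upload = {mcu_upload_mbps(rate):.0f} Mbps")
```

The same reasoning applies to end-device processing: in a mesh each client must decode and render every remote stream, while the MCU can pre-process or combine views before delivery.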

Files

17_TNO_Multi-Sensor Capture and Network Processing for Virtual Reality Conferencing.pdf

Additional details

Funding

VRTogether – An end-to-end system for the production and delivery of photorealistic social immersive virtual reality experiences 762111
European Commission