Multi-sensor capture and network processing for virtual reality conferencing
Description
Recent developments in key technologies such as 5G, Augmented Reality (AR), Virtual Reality (VR) and the Tactile Internet open up new possibilities for communication. In particular, these technologies enable remote communication, collaboration and participation in remote experiences. In this demo, we work towards 6-DoF photorealistic shared experiences by introducing a multi-view, multi-sensor end-to-end capture system. The proposed system acts as a baseline end-to-end system for the capture, transmission and rendering of volumetric video of user representations. To handle multi-view video processing in a scalable way, we introduce a Multipoint Control Unit (MCU) that shifts processing from end devices into the cloud. MCUs are commonly used to bridge videoconferencing connections; here we design and deploy a VR-ready MCU to reduce both upload bandwidth and end-device processing requirements. In our demo, we focus on a remote meeting use case in which multiple people can sit around a table and communicate in a shared VR environment. The cloud-side fusion idea behind the MCU is sketched below.
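The following Python snippet is a minimal, illustrative sketch of that MCU-style cloud processing, not the actual interfaces of the demo system: all names (ViewFrame, SimpleMCU, ingest) and the toy fusion logic are assumptions for illustration. A toy MCU buffers the per-camera frames each participant uploads, fuses them server-side, and forwards a single fused representation to the other participants, so end devices upload each view only once and skip the fusion work.

```python
# Minimal, illustrative sketch of MCU-style cloud-side fusion.
# All names and the fusion step are assumptions for illustration,
# not the pipeline described in the demo paper.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class ViewFrame:
    user_id: str      # which participant this view belongs to
    camera_id: int    # which capture sensor produced it
    payload: bytes    # encoded view data (e.g., colour + depth), placeholder here


class SimpleMCU:
    """Toy Multipoint Control Unit: buffers the per-camera frames of each
    participant and emits one fused representation per participant, so that
    fusion happens in the cloud instead of on every end device."""

    def __init__(self, cameras_per_user: int) -> None:
        self.cameras_per_user = cameras_per_user
        self.pending: Dict[str, Dict[int, ViewFrame]] = {}

    def ingest(self, frame: ViewFrame) -> Optional[bytes]:
        views = self.pending.setdefault(frame.user_id, {})
        views[frame.camera_id] = frame
        if len(views) < self.cameras_per_user:
            return None  # wait until all views of this user have arrived
        # Naive "fusion": concatenate views in camera order; a real system
        # would reconstruct and encode a volumetric user representation here.
        fused = b"".join(views[c].payload for c in sorted(views))
        del self.pending[frame.user_id]
        return fused  # forwarded once to each receiving participant


# Usage: with two cameras per user, the MCU emits a fused frame only after
# both views of a participant have been uploaded.
mcu = SimpleMCU(cameras_per_user=2)
print(mcu.ingest(ViewFrame("alice", 0, b"view-0;")))  # None (still waiting)
print(mcu.ingest(ViewFrame("alice", 1, b"view-1;")))  # b'view-0;view-1;'
```

Upload bandwidth drops because each end device sends its raw views only to the MCU rather than to every other participant, and end-device processing drops because the fusion into a single user representation happens once, in the cloud.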
Files
Name | Size
---|---
17_TNO_Multi-Sensor Capture and Network Processing for Virtual Reality Conferencing.pdf (md5:e75a3d685c617b22d37b8fa4590f83f8) | 1.0 MB