Published July 11, 2024 | Version v1
Conference paper | Open Access

Semi-Automated Digital Human Production for Enhanced Media Broadcasting

Description

The final version of the paper published by IEEE is available online at https://doi.org/10.1109/GEM61861.2024.10585601

Abstract

In recent years, the application of synthetic humans in various fields has attracted considerable attention, leading to extensive exploration of their integration into the Metaverse and virtual production environments. This work presents a semi-automated approach that aims to strike a fair trade-off between high-quality outputs and efficient production times. The project draws on the Rai photo and video archives to find images of target characters for texturing and 3D reconstruction, with the goal of reviving Rai's 2D footage and enhancing the media experience. A key aspect of this study is minimizing human intervention, ensuring an efficient, flexible, and scalable creation process. The improvements are distributed across different stages of the digital human creation pipeline: first, 3D head meshes are generated from 2D images of the reference character; then, a diffusion model generates images suitable for texture development. These assets are integrated into Unreal Engine, where a custom widget facilitates posing, rendering, and texturing of the synthetic human models. Finally, an in-depth quantitative comparison and subjective tests were carried out between the original character images and the rendered synthetic humans, confirming the validity of the approach.

Files

IEEE_GEM___Digital_human_production_zenodo.pdf (3.6 MB)
md5:ed018f741b7f93f8364b2221874e50c2

Additional details

Funding

European Commission
XRECO - XR mEdia eCOsystem (grant agreement no. 101070250)