Published February 2, 2021 | Version 1.1
Dataset | Open Access

Transparent Magritte Test Sequence

  • Université Libre de Bruxelles

Description

 Transparent-Magritte sequence by LISA ULB

The test sequence "Transparent Magritte" is provided by Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit, members of the LISA department, EPB (École Polytechnique de Bruxelles), ULB (Université Libre de Bruxelles), Belgium.

 

 License:

CC BY-NC-SA

 

 Terms of Use:

Any kind of publication or report using this sequence should cite the following references.


[1] Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, Gauthier Lafruit, "Transparent Magritte Test Sequence", 2021.

@misc{fachada_transparent_2021,
    title = {Transparent {Magritte} {Test} {Sequence}},
    author = {Fachada, Sarah and Bonatto, Daniele and Teratani, Mehrdad and Lafruit, Gauthier},
    month = feb,
    year = {2021},
    doi = {10.5281/zenodo.5048275}
}

[2] Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit, "Light Field Rendering for non-Lambertian Objects," presented at Electronic Imaging, 2021.

@inproceedings{fachada_light_2021,
    title = {Light {Field} {Rendering} for non-{Lambertian} {Objects}},
    booktitle = {Electronic {Imaging}},
    author = {Fachada, Sarah and Bonatto, Daniele and Teratani, Mehrdad and Lafruit, Gauthier},
    year = {2021}
}

 Production

Laboratory of Image Synthesis and Analysis, LISA department, EPB, Université Libre de Bruxelles, Belgium.

 

 Content:

This dataset contains a test scene created and rendered with Blender [1] and the addon script [2], extended for Blender 2.8. We provide the Blender file and the rendered scene.

The scene contains a transparent refractive torus rendered with a regular 21x21 camera array.
In addition to the 3D model, two folders are available:
 -  `centered_cameras`: resolution of 1000x1000; the cameras are centered on the refractive torus.
 -  `parallel_cameras`: resolution of 2000x2000; the cameras are parallel, with the principal point at the center of the image.
Each of these folders contains:
 - a `camera.json` file in the OMAF coordinate system (camera position: X: forward, Y: left, Z: up; rotation: yaw, pitch, roll) [3],
 - a `parameters.cfg` generated with [2],
 - a `texture` folder containing the rendered views in png format,
 - a `depth` folder containing the associated depth maps in exr format.
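As an illustration of the OMAF camera convention used in `camera.json` (X forward, Y left, Z up; rotation given as yaw, pitch, roll), the sketch below builds a rotation matrix from those angles. This is a hedged sketch, not part of the dataset: the composition order R = Rz(yaw) · Ry(pitch) · Rx(roll) and the use of degrees are assumptions that should be checked against the RVS manual [3], and all function names are ours.

```python
import math

# Assumption: yaw rotates about Z (up), pitch about Y (left), roll about X
# (forward), composed as R = Rz(yaw) @ Ry(pitch) @ Rx(roll). Verify against [3].

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def omaf_rotation(yaw_deg, pitch_deg, roll_deg):
    """Rotation matrix for OMAF-style yaw/pitch/roll angles in degrees."""
    y, p, r = (math.radians(a) for a in (yaw_deg, pitch_deg, roll_deg))
    return matmul(rot_z(y), matmul(rot_y(p), rot_x(r)))

# A 90-degree yaw turns the forward axis (+X) toward the left axis (+Y).
forward = [1.0, 0.0, 0.0]
left = matvec(omaf_rotation(90.0, 0.0, 0.0), forward)
```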
 
 References and links:

[1] Blender Online Community, "Blender - a 3D modelling and rendering package." Blender Institute, Amsterdam: Blender Foundation, 2020.

[2] K. Honauer, O. Johannsen, D. Kondermann, and B. Goldluecke, "A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields," in Asian Conference on Computer Vision, 2016.

https://github.com/lightfield-analysis/blender-addon

https://github.com/dbonattoj/blender-addon

[3] B. Kroon, "Reference View Synthesizer (RVS) manual [N18068]," ISO/IEC JTC1/SC29/WG11, Macau SAR, China, p. 19, Oct. 2018.

https://mpeg.chiariglione.org/standards/mpeg-i/omnidirectional-media-format

Notes

Acknowledgments: This work was supported by the Fonds de la Recherche Scientifique - FNRS, Belgium, under Grant n°3679514 (ColibriH), and by the European Commission project n°951989 on Interactive Technologies, H2020-ICT-2019-3 (HoviTron). Sarah Fachada is a Research Fellow of the Fonds de la Recherche Scientifique - FNRS, Belgium.

Files

blender.zip

Files (5.0 GB)

 - md5:1208c96290aaf75c9c62375951684133 (5.6 MB)
 - md5:5733d86cfe54ebbfc291d42f5adc8f1d (1.0 GB)
 - md5:2ed5f20668bff3469f2456658fa6b4f8 (3.9 GB)
 - md5:ae96ccb44dedd4ab4c8306f54e77fd61 (2.9 kB)

Additional details

Funding

HoviTron – Holographic Vision for Immersive Tele-Robotic OperatioN 951989
European Commission