Published June 30, 2021 | Version 1.0
Dataset Open

Mirror Magritte Torus Test Sequence

Description

 # Mirror-Magritte-Torus sequence by LISA ULB


The test sequence "Mirror Magritte Torus" is provided by Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit, members of the LISA department, EPB (École Polytechnique de Bruxelles), ULB (Université Libre de Bruxelles), Belgium.

 # License:


CC BY-NC-SA

 # Terms of Use:


Any kind of publication or report using this sequence should cite the following references.

[1] Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, Gauthier Lafruit, "Mirror Magritte Torus Test Sequence", 2021.

@misc{fachada_mirror_2021,
    title = {Mirror {Magritte} {Torus} {Test} {Sequence}},
    author = {Fachada, Sarah and Bonatto, Daniele and Teratani, Mehrdad and Lafruit, Gauthier},
    month = feb,
    year = {2021},
    doi = {10.5281/zenodo.5048262}
}

[2] Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit, "Light Field Rendering for non-Lambertian Objects," presented at Electronic Imaging 2021.

@inproceedings{fachada_light_2021,
    title = {Light {Field} {Rendering} for non-{Lambertian} {Objects}},
    booktitle = {Electronic {Imaging}},
    author = {Fachada, Sarah and Bonatto, Daniele and Teratani, Mehrdad and Lafruit, Gauthier},
    year = {2021}
}

 # Production:


Laboratory of Image Synthesis and Analysis, LISA department, EPB, Université Libre de Bruxelles, Belgium.

 # Content:


This dataset contains a test scene created and rendered with Blender [1] and the add-on script [2], extended for Blender 2.8. We provide both the Blender file and the rendered scene.

The scene contains a mirror-reflective torus rendered from a regular array of 21×21 cameras.

In addition to the 3D model, we provide the rendered images in the `parallel_cameras` folder: the resolution is 2000×2000 pixels, the cameras are parallel, and the principal point is at the center of each image.
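For concreteness, here is a short Python sketch of the pinhole intrinsics this setup implies: a 2000×2000 image with the principal point at the center gives cx = cy = 1000. The focal length below is a placeholder only and must be taken from the provided metadata (`camera.json` / `parameters.cfg`):

```python
import numpy as np

W = H = 2000              # image resolution
cx, cy = W / 2, H / 2     # principal point at the image center
f = 1066.7                # placeholder only; read the real focal length from camera.json

# 3x3 pinhole intrinsics matrix shared by all cameras in the parallel setup
K = np.array([[f,   0.0, cx],
              [0.0, f,   cy],
              [0.0, 0.0, 1.0]])
```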

The dataset contains the following files; a minimal loading sketch in Python follows this list:
 - a `camera.json` file in the OMAF coordinate system (camera position: X forward, Y left, Z up; rotation: yaw, pitch, roll) [3],
 - a `parameters.cfg` file generated with [2],
 - a `texture` folder containing the rendered views in PNG format,
 - a `depth` folder containing the associated depth maps in EXR format.
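The sketch below shows one way to read the data. The per-view file names are hypothetical (check the actual folder contents), EXR reading goes through OpenCV's optional OpenEXR support, and the yaw/pitch/roll composition is one common convention that should be verified against the RVS manual [3]:

```python
import json
import os

import numpy as np

os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2
import cv2

# Camera metadata in the OMAF coordinate system:
# position X forward, Y left, Z up; rotation given as yaw, pitch, roll.
with open("camera.json") as fp:
    cameras = json.load(fp)

# Hypothetical file names; check the texture/ and depth/ folders for the real ones.
texture = cv2.imread("texture/view_000.png", cv2.IMREAD_COLOR)  # H x W x 3, uint8
depth = cv2.imread("depth/view_000.exr", cv2.IMREAD_UNCHANGED)  # float depth map

def omaf_rotation(yaw, pitch, roll):
    """Rotation matrix from yaw/pitch/roll in degrees, composed as
    R = Rz(yaw) @ Ry(pitch) @ Rx(roll); verify the sign conventions against [3]."""
    y, p, r = np.deg2rad([yaw, pitch, roll])
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                   [np.sin(y),  np.cos(y), 0.0],
                   [0.0,        0.0,       1.0]])
    Ry = np.array([[ np.cos(p), 0.0, np.sin(p)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(p), 0.0, np.cos(p)]])
    Rx = np.array([[1.0, 0.0,        0.0      ],
                   [0.0, np.cos(r), -np.sin(r)],
                   [0.0, np.sin(r),  np.cos(r)]])
    return Rz @ Ry @ Rx
```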
 
 
 # References and links:
 
[1] Blender Online Community, "Blender - a 3D modelling and rendering package." Blender Institute, Amsterdam: Blender Foundation, 2020.

[2] K. Honauer, O. Johannsen, D. Kondermann, and B. Goldluecke, "A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields," in Asian Conference on Computer Vision, 2016.
https://github.com/lightfield-analysis/blender-addon
https://github.com/dbonattoj/blender-addon

[3] B. Kroon, "Reference View Synthesizer (RVS) manual [N18068]," ISO/IEC JTC1/SC29/WG11, Macau SAR, China, p. 19, Oct. 2018.
https://mpeg.chiariglione.org/standards/mpeg-i/omnidirectional-media-format

Notes

Acknowledgments: This work was supported by Les Fonds de la Recherche Scientifique - FNRS, Belgium, under Grant n°3679514 (ColibriH), and by the European Commission project n°951989 on Interactive Technologies, H2020-ICT-2019-3 (HoviTron). Sarah Fachada is a Research Fellow of the Fonds de la Recherche Scientifique - FNRS, Belgium.

Files (3.9 GB)

 - blender.zip
 - md5:711e8ba8d862b8dbc1248eabee653549 (5.6 MB)
 - md5:239ad360302cb7e0632fdbb79377ca81 (3.9 GB)
 - md5:ac25f78f3f36f98797d86f742b310650 (2.8 kB)

Additional details

Funding

HoviTron – Holographic Vision for Immersive Tele-Robotic OperatioN 951989
European Commission