ULB ChocoFountainBxl
- 1. Université Libre de Bruxelles, Vrije Universiteit Brussel
- 2. Université Libre de Bruxelles
Description
ULB ChocoFountainBxl sequence by LISA ULB
The test sequence "ULB ChocoFountainBxl" is provided by Daniele Bonatto, Sarah Fachada, Mehrdad Teratani and Gauthier Lafruit, members of the LISA department, EPB (Ecole Polytechnique de Bruxelles), ULB (Université Libre de Bruxelles), Belgium.
License
Creative Commons Attribution 4.0 International (CC BY 4.0)
Terms of Use
Any kind of publication or report using this sequence should refer to the following references.
[1] Daniele Bonatto, Sarah Fachada, Mehrdad Teratani, Gauthier Lafruit, "ULB ChocoFountainBxl", Zenodo, 2022, doi: 10.5281/zenodo.5960227.
@misc{bonatto_chocofountainbxl_2022,
title = {{ULB} {ChocoFountainBxl}},
author = {Bonatto, Daniele and Fachada, Sarah and Teratani, Mehrdad and Lafruit, Gauthier},
publisher = {Zenodo},
month = feb,
year = {2022},
doi = {10.5281/zenodo.5960227}
}
[2] A. Schenkel, D. Bonatto, S. Fachada, H.-L. Guillaume, and G. Lafruit, "Natural Scenes Datasets for Exploration in 6DOF Navigation", in 2018 International Conference on 3D Immersion (IC3D), Brussels, Belgium, Dec. 2018, pp. 1-8, doi: 10.1109/IC3D.2018.8657865.
@inproceedings{schenkel_natural_b_2018,
address = {Brussels, Belgium},
title = {Natural {Scenes} {Datasets} for {Exploration} in {6DOF} {Navigation}},
isbn = {978-1-5386-7590-8},
url = {https://doi.org/10.1109/IC3D.2018.8657865},
doi = {10.1109/IC3D.2018.8657865},
language = {en},
urldate = {2019-04-11},
booktitle = {2018 {International} {Conference} on {3D} {Immersion} ({IC3D})},
publisher = {IEEE},
author = {Schenkel, Arnaud and Bonatto, Daniele and Fachada, Sarah and Guillaume, Henry-Louis and Lafruit, Gauthier},
month = dec,
year = {2018},
pages = {1--8}
}
Production
Laboratory of Image Synthesis and Analysis, LISA department, Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Belgium.
Content
This dataset contains a dynamic test scene created using the acquisition system described in [2]: a 3x5 camera array with baselines of 10 cm (vertical) and 15 cm (horizontal).
We provide 97 frames of color-corrected [4] RGB textures (YUV420p10le format) captured using 15 Blackmagic Micro Studio 4K cameras (3840x2160 pixels at 30 fps, cropped to 3712x2064).
We also provide corresponding depth maps (YUV420p16le format) estimated using MPEG's Immersive Video Depth Estimation (IVDE) [5] and refined using PDR [6].
The scene shows two actors interacting with objects that are difficult to render in view synthesis; in particular, it contains transparent, specular, and smooth low-texture areas.
The videos were captured in a controlled lighting environment.
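For readers working with the raw planar YUV files, here is a minimal Python sketch for reading one texture frame. YUV420p10le stores 10-bit samples in 16-bit little-endian words: a full-resolution luma plane followed by two quarter-resolution chroma planes. The `.yuv` file name inside the archive is an assumption based on the zip naming below.

```python
import numpy as np

W, H = 3712, 2064  # cropped resolution stated above

def read_yuv420p10le_frame(f, width=W, height=H):
    """Read one YUV420p10le frame: 10-bit samples stored in 16-bit LE words."""
    y = np.fromfile(f, dtype="<u2", count=width * height)
    u = np.fromfile(f, dtype="<u2", count=(width // 2) * (height // 2))
    v = np.fromfile(f, dtype="<u2", count=(width // 2) * (height // 2))
    return (y.reshape(height, width),
            u.reshape(height // 2, width // 2),
            v.reshape(height // 2, width // 2))  # sample values in [0, 1023]

# Example: first frame of view v12 (file name assumed from the zip naming).
with open("v12_texture_3712x2064_yuv420p10le.yuv", "rb") as f:
    y, u, v = read_yuv420p10le_frame(f)
```

The yuv420p16le depth maps have the same planar layout (sample values in [0, 65535]); as is conventional for such files, the depth signal is carried in the luma plane.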
The views are arranged as follows:
v00 | v01 | v02 | v03 | v04
v10 | v11 | v12 | v13 | v14
v20 | v21 | v22 | v23 | v24
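The per-view RVS configurations listed below synthesize each view from its closest four neighbors in a "plus" shape on this grid. A minimal sketch of that neighborhood (the helper name is hypothetical):

```python
def plus_neighbors(name: str) -> list[str]:
    """Up/down/left/right neighbors of view "vRC" on the 3x5 grid."""
    row, col = int(name[1]), int(name[2])
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return [f"v{r}{c}" for r, c in candidates if 0 <= r < 3 and 0 <= c < 5]

print(plus_neighbors("v12"))  # ['v02', 'v22', 'v11', 'v13']
print(plus_neighbors("v00"))  # ['v10', 'v01'] -- corner views have only two
```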
In addition to the images and their depth maps, an accurate camera calibration file is provided following the format of [8].
The dataset contains:
- a `cameras.json` file in the OMAF coordinate system (camera position: X forwards, Y left, Z up; rotation: yaw, pitch, roll) [9], as illustrated in the sketch after this list,
- a `view_synthesis_config.zip` archive containing configuration files for RVS [7,8] to synthesize every view from its closest four neighbors in a "plus" configuration,
- a `view_synthesis_results.zip` archive containing the videos (scaled to 710x516) produced by RVS [7,8] from the configuration files in `view_synthesis_config`, plus a multiview video displaying all the results merged together,
- `vXY_depth_3712x2064_yuv420p16le.zip` archives containing the depth maps of each view vXY in yuv420p16le format,
- `vXY_texture_3712x2064_yuv420p10le.zip` archives containing the RGB textures of each view vXY in yuv420p10le format.
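As an illustration of how the calibration and depth data fit together, here is a minimal Python sketch. All JSON field names (`cameras`, `Name`, `Rotation`, `Depth_range`) are assumptions modeled on the camera-parameter layout used by RVS-style tools and should be checked against the actual `cameras.json`; the depth decoding assumes the common MPEG immersive-video convention of normalized disparity between the near and far depth bounds.

```python
import json
import numpy as np

# Load the calibration file (field names are assumptions; verify against the real file).
with open("cameras.json") as f:
    calib = json.load(f)
cam = next(c for c in calib["cameras"] if c["Name"] == "v12")

# OMAF axes: X forward, Y left, Z up. Rotation given as yaw, pitch, roll in degrees:
# yaw about Z, pitch about Y, roll about X (assumed order, applied as R = Rz @ Ry @ Rx).
yaw, pitch, roll = np.radians(cam["Rotation"])
Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
               [np.sin(yaw),  np.cos(yaw), 0.0],
               [0.0, 0.0, 1.0]])
Ry = np.array([[ np.cos(pitch), 0.0, np.sin(pitch)],
               [0.0, 1.0, 0.0],
               [-np.sin(pitch), 0.0, np.cos(pitch)]])
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(roll), -np.sin(roll)],
               [0.0, np.sin(roll),  np.cos(roll)]])
R = Rz @ Ry @ Rx  # camera-to-world rotation

# Decode a raw 16-bit depth sample to metres, assuming normalized-disparity coding
# between the near (rmin) and far (rmax) depth bounds.
def depth_in_meters(sample, rmin, rmax, maxval=65535):
    disparity = sample / maxval * (1.0 / rmin - 1.0 / rmax) + 1.0 / rmax
    return 1.0 / disparity

rmin, rmax = cam["Depth_range"]  # field name is an assumption
print(depth_in_meters(40000, rmin, rmax))
```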
References and links
[4] A. Dziembowski, D. Mieloch, S. Różek and M. Domański, "Color Correction for Immersive Video Applications," in IEEE Access, vol. 9, pp. 75626-75640, 2021, doi: 10.1109/ACCESS.2021.3081870.
[5] D. Mieloch, O. Stankiewicz and M. Domański, "Depth Map Estimation for Free-Viewpoint Television and Virtual Navigation", IEEE Access, vol. 8, pp. 5760-5776, 2020, doi: 10.1109/ACCESS.2019.2963487.
[6] D. Mieloch, A. Dziembowski and M. Domański, "Depth Map Refinement for Immersive Video," in IEEE Access, vol. 9, pp. 10778-10788, 2021, doi: 10.1109/ACCESS.2021.3050554.
[7] D. Bonatto, S. Fachada, S. Rogge, A. Munteanu and G. Lafruit, "Real-Time Depth Video-Based Rendering for 6-DoF HMD Navigation and Light Field Displays," in IEEE Access, vol. 9, pp. 146868-146887, 2021, doi: 10.1109/ACCESS.2021.3123529.
[8] S. Fachada, B. Kroon, D. Bonatto, B. Sonneveldt, and G. Lafruit, "Reference View Synthesizer (RVS) 2.0 manual", MPEG N17759, July 2018.
[9] S. Fachada, D. Bonatto, M. Teratani, and G. Lafruit, "View Synthesis Tool for VR Immersive Video", IntechOpen, 2022.
Acknowledgments
[G1] EU project HoviTron, Grant Agreement No. 951989 on Interactive Technologies, Horizon 2020.
[G2] Innoviris, the Brussels Institute for Research and Innovation, Belgium, under contract No.: 2015-DS-39a/b & 2015-R-39c/d, 3DLicorneA.
[G3] Sarah Fachada is a Research Fellow of the Fonds de la Recherche Scientifique - FNRS, Belgium.
Files (11.8 GB)

| MD5 | Size |
|---|---|
| md5:efe18df8ed86f3fb867bb5788648153c | 16.0 kB |
| md5:805cea82979a5e646148d0642769c247 | 5.3 kB |
| md5:11043b18dd92234eb0014fdb322493bb | 9.1 MB |
| md5:1ce416e9088e2f84b94444e3966f03e7 | 133.4 MB |
| md5:e3d6f05b2900e95b55d6c9db98078eb2 | 665.3 MB |
| md5:35628d07d343408e2125f372b4cdba77 | 82.1 MB |
| md5:05a67c885954d45251427ed85ac2fbbd | 635.7 MB |
| md5:23e8964194aea537e2d4ec7986ee974c | 73.7 MB |
| md5:b2eba279417cbb829c8e88c786bc309c | 657.6 MB |
| md5:4816e64b067aff789d27a7785a2a1032 | 76.9 MB |
| md5:93cf9c33a62531dfbc49baaaaf39affe | 663.8 MB |
| md5:9df9382b6fb82c26840afe541b9ded04 | 82.4 MB |
| md5:fc3347904a675b363e470d90c5468574 | 685.5 MB |
| md5:abdb62f9b432f1ae366526a2523e06a7 | 101.9 MB |
| md5:052b6aef9c45325b0b5b2c4e9ab13eb6 | 629.5 MB |
| md5:ee96359c320aa8b1760d2565c392ca4e | 74.5 MB |
| md5:5d5dde9c7f313c72fbf9dadfd194ef50 | 655.7 MB |
| md5:ff9603c696d34fd9430612a0f84d8d98 | 28.2 MB |
| md5:c932c808ae1facc87c49920dbee9ff8e | 660.7 MB |
| md5:419f309cac1c9e864c0d902f2c9fad0b | 83.4 MB |
| md5:e9564f8b8c4776bf2bc1f093b63a2551 | 656.8 MB |
| md5:0acd380d7e9d0450a32c894f22d4d482 | 109.0 MB |
| md5:5112710b12b29ef345880e14aab6fd2b | 659.3 MB |
| md5:a6f885bf0d757d3a3f2bc58cc72b92fc | 149.7 MB |
| md5:70602180eb217a3207c892e8e7c287ef | 612.4 MB |
| md5:eecdab86e22ae9f8c44bc18a2d79e226 | 115.0 MB |
| md5:c7bb019d8017fd91027c83d6527c622c | 624.5 MB |
| md5:1ce25ebb656887ba833f7bae2b9a7341 | 68.5 MB |
| md5:b5c0a8b99c0646a8b85007e8d7fe253c | 663.4 MB |
| md5:9a13807bcc60cd08390f90af0900af86 | 183.8 MB |
| md5:0c33a01a6d59cc638c79cb2420309399 | 631.3 MB |
| md5:e3dfd23626ffcad8431127d7413f65ff | 110.3 MB |
| md5:f93e722ac7fc04c3ba5eb37e2685801f | 658.9 MB |
| md5:5c9e675a2f1872d054f84fd7944a2275 | 11.2 kB |
| md5:a81f72159bf6d9a376563a87a43b9362 | 524.9 MB |
Additional details
Related works
- Cites
- Journal article: 10.1109/ACCESS.2021.3081870 (DOI)
- Journal article: 10.1109/ACCESS.2019.2963487 (DOI)
- Journal article: 10.1109/ACCESS.2021.3050554 (DOI)
- Journal article: 10.1109/ACCESS.2021.3123529 (DOI)
- Continues
- Conference paper: 10.1109/IC3D.2018.8657865 (DOI)