
Published December 1, 2022 | Version v2
Dataset | Open Access

READ: Large-Scale Neural Scene Rendering for Autonomous Driving

  • Zhejiang University

Description

With the development of advanced driver assistance systems (ADAS) and autonomous vehicles, conducting experiments in a wide variety of scenarios has become an urgent need. Although conventional image-to-image translation methods are capable of synthesizing photo-realistic street scenes, they cannot produce coherent scenes because they lack 3D information. In this paper, we propose READ, a large-scale neural rendering method for synthesizing autonomous driving scenes, which makes it possible to generate large-scale driving scenes in real time on a PC through a variety of sampling schemes. To represent driving scenarios effectively, we propose an ω rendering network that learns neural descriptors from sparse point clouds. Our model can not only synthesize photo-realistic driving scenes but also stitch and edit them. The promising experimental results show that our model performs well in large-scale driving scenarios.

 

Source Code: https://github.com/JOP-Lee/READ
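
For a concrete picture of the pipeline sketched in the description, the snippet below illustrates the general point-based neural rendering idea in PyTorch: learnable per-point neural descriptors are splatted into a feature image from the camera viewpoint and then decoded to RGB by a small convolutional rendering network. This is only a minimal sketch of the technique; the names (PointDescriptors, splat_descriptors, RenderNet), the feature dimensions, and the crude rasterizer are illustrative assumptions rather than the actual READ implementation, which lives in the repository linked above.

```python
import torch
import torch.nn as nn

class PointDescriptors(nn.Module):
    """Learnable neural descriptor attached to every point of a sparse point cloud."""
    def __init__(self, num_points: int, dim: int = 8):
        super().__init__()
        self.desc = nn.Parameter(0.01 * torch.randn(num_points, dim))

def splat_descriptors(xyz, desc, view, proj, height, width):
    """Project points with view/projection matrices and write each point's
    descriptor into its nearest pixel. This is a crude stand-in for a real
    z-buffered, multi-scale rasterizer."""
    n = xyz.shape[0]
    homo = torch.cat([xyz, torch.ones(n, 1)], dim=1)            # (N, 4) homogeneous coords
    cam = homo @ view.T                                          # camera space
    clip = cam @ proj.T
    ndc = clip[:, :2] / clip[:, 3:4].clamp(min=1e-6)             # normalized device coords
    px = ((ndc[:, 0] * 0.5 + 0.5) * (width - 1)).round().long().clamp(0, width - 1)
    py = ((ndc[:, 1] * 0.5 + 0.5) * (height - 1)).round().long().clamp(0, height - 1)
    order = torch.argsort(cam[:, 2], descending=True)            # far to near
    feat = torch.zeros(desc.shape[1], height, width)
    # Intent: nearer points overwrite farther ones; a real renderer uses a z-buffer.
    feat[:, py[order], px[order]] = desc[order].T
    return feat

class RenderNet(nn.Module):
    """Tiny convolutional decoder from the descriptor image to an RGB image."""
    def __init__(self, in_ch: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat):
        return self.net(feat.unsqueeze(0))                       # (1, 3, H, W)

if __name__ == "__main__":
    xyz = torch.rand(10_000, 3) * 2 - 1                          # toy point cloud
    descriptors = PointDescriptors(num_points=xyz.shape[0])
    view = torch.eye(4)                                          # identity camera for the toy example
    proj = torch.eye(4)
    feat = splat_descriptors(xyz, descriptors.desc, view, proj, height=128, width=256)
    rgb = RenderNet()(feat)
    print(rgb.shape)                                             # torch.Size([1, 3, 128, 256])
```

In a training loop, both the per-point descriptors and the rendering network would be optimized jointly against a photometric loss on posed images; the descriptors then encode the local appearance that the network decodes into novel views.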

Files (836.8 MB)

camera.xml (preview available)

  • 771.9 kB, md5:7bd083eadb43d7ac9aac8beefc25578a
  • 591.2 MB, md5:510baf6c3e6184eae400107053ea099a
  • 173.5 MB, md5:702092998c62252e4cbf226b516aec0e
  • 71.3 MB, md5:c65d0c63161609900839abdf90065265
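
Since the record lists an MD5 checksum for each file, a downloaded archive can be verified by hashing it locally and comparing against the value above. The helper below is a generic sketch; the filename and checksum in the usage comment are placeholders for whichever file you downloaded.

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file, streaming it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (placeholder filename and checksum; use the values for your download):
# print(md5_of("camera.xml") == "expected_md5_from_the_list_above")
```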