Published January 31, 2023 | Version v1
Report | Open Access

Model vs System Level Testing of Autonomous Driving Systems: A Replication and Extension Study

Description

Offline model-level testing of autonomous driving software is much cheaper, faster, and more diversified than in-field, online system-level testing. Hence, researchers have empirically compared model-level and system-level testing using driving simulators. They reported that simulators are generally useful at reproducing the conditions experienced in-field, but also that model-level testing is partly inadequate at exposing failures that are observable only in online mode.

In this work, we replicate the reference study on model- vs system-level testing of autonomous vehicles, reconsidering several assumptions made in the original work. These assumptions are related to threats to validity affecting the original study, which motivated additional analyses and the development of techniques to mitigate them. Moreover, we extend the replicated study by evaluating the original findings on a physical, radio-controlled autonomous vehicle.

Our results show that simulator-based testing of autonomous driving systems yields predictions close to those obtained on real-world datasets when neural-based translation is used to mitigate the reality gap induced by the simulation platform. Moreover, model-level testing failures are in line with those experienced at the system level, both in simulated and physical environments, when considering the pre-failure site, similar-looking images, and accurate labels.
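As a minimal sketch of how such a neural translation step can precede offline, model-level prediction, the snippet below runs a simulator frame through a sim-to-real generator before querying the driving model. The PlaceholderTranslator and driving_model objects are hypothetical stand-ins for illustration only, not the networks or pipeline used in the report.

# Minimal illustration (hypothetical names, not the report's code): a sim-to-real
# translator is applied to a simulator frame before the driving model predicts
# a steering angle offline, at the model level.
import torch
import torch.nn as nn

class PlaceholderTranslator(nn.Module):
    # Stands in for a trained image-to-image translation generator (e.g., a CycleGAN-style network).
    def forward(self, x):
        return x  # identity here; a real generator would return a pseudo-real frame

translator = PlaceholderTranslator()
driving_model = nn.Sequential(          # stands in for the lane-keeping DNN under test
    nn.Flatten(),
    nn.Linear(3 * 66 * 200, 1),
)

sim_frame = torch.rand(1, 3, 66, 200)   # one simulator camera frame (batch, C, H, W)
pseudo_real = translator(sim_frame)     # mitigate the reality gap
steering = driving_model(pseudo_real)   # offline (model-level) steering prediction
print(float(steering))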

Files

TR-Precrime-2023-03.pdf (5.5 MB, md5:49acb628deedafeb5bdb3a7774c7b8a4)

Additional details

Related works

Is published in
10.1007/s10664-023-10306-x (DOI)

Funding

PRECRIME – Self-assessment Oracles for Anticipatory Testing (grant no. 787703), European Commission