Spatial visualization of sensor information for automated vehicles

Displaying the sensor limitations of automated vehicles is crucial for traffic safety and trust in automation. However, current representations of system uncertainty are quite general, using symbols or scales of uncertainty levels, which is problematic in critical situations where drivers need to know the specific problem of the sensors. We propose an interface that visualizes the radar sensor information spatially in the context of the surroundings, aiming to give drivers a better mental representation of the situation and to support their decisions. It is evaluated against two reference interfaces showing either no or only a general representation of the sensor information. After seeing the different interfaces in various scenarios of overtaking obstacles, participants selected one of the following options: "stop", "circuit" (drive around the obstacle), or "take over the control". The results show that although the interface showing no sensor information yields the shortest reaction times, the proposed interface shifts drivers' decisions from "circuit" to "take over the control" most strongly.


Introduction
The appearance of Google cars, as well as the production of automated vehicles (AVs) by Tesla, gives us confidence that automated driving will be realized in the near future. The benefits of AVs include traffic safety, efficiency, and convenience for drivers. However, these promised benefits depend largely on the reliability of the sensors (cameras, radar sensors, and lidar sensors) of AVs. When sensors reach their limits in perceiving the environment and become uncertain, crashes are likely to occur if this is not communicated to drivers in time, especially in critical situations. For instance, the sensors of an autonomous Tesla car reached their limits due to heavy snowfall, which made it difficult to distinguish the white truck in front from the white snow background and led to a collision [10]. As the Tesla car had not reported its sensor limitation to the driver in time, the driver could not take over control and avoid the crash. It is therefore crucial to develop transparent interfaces that display sensor information to ensure traffic safety and build drivers' trust in automation [9]. Visualizing automation uncertainty has been reported to improve situation awareness (SA) [17,2] and trust in automation [5], where automation uncertainty is presented in a general way, either as levels of uncertainty or as emotional symbols. However, this general representation of system uncertainty may not be sufficient for drivers to make appropriate decisions when the problem concerns a specific automation function. Instead, they need extra time to figure out which automation function is affected, which may lead to crashes in critical situations. To address this, a spatial visualization of the sensor information is proposed, with the expectation that drivers can understand the status of the sensors and make appropriate decisions effectively in critical situations.
The radar sensor is one of the most important sensors of an AV, as it can detect the distance and the speed of nearby vehicles. In this paper, we design an interface that visualizes the radar sensor information of an AV spatially. It is then evaluated, regarding driver performance and trust in automation, against two reference interfaces that do not represent the status of the radar sensors specifically.

Interface Displaying Sensor Information
Based on the literature [14,13,4,8] and expert interviews, the proposed user interface (UI2) was designed. The range and the status of the front and back radar sensors are visualized separately with triangles and colors, in relation to the ego vehicle and its surroundings (see Figure 1). This information about the surroundings in UI2 is supposed to enhance drivers' mental representations and support their decision-making in complex tasks like overtaking obstacles [1]. With red displaying automation uncertainty and green representing that the system's information is certain, the status of the radars can be interpreted directly by drivers, supporting them in reaching SA level 2 [3]. In addition, two reference interfaces (UI0, UI1) were designed. UI0 does not display any sensor information and serves as the baseline in the experiment (see Figure 3). In accordance with the general representation of system uncertainty used by [5] and [2], the reference UI1 represents the status of the front and back radar sensors with correctness symbols (tick/cross) (see Figure 2). Note that UI2 incorporates UI1 (see the left part of Figure 1), so that the effect of the spatial visualization of radar sensor information can be compared between UI1 and UI2. These three UIs were designed in Axure [6] and used in the following evaluation study.
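The mapping from sensor status to the displayed colors and symbols can be sketched as follows. This is a minimal illustration, not the actual implementation (the UIs were built in Axure); all names are assumptions.

```python
# Hypothetical sketch of the UI2 display logic: each radar sector is drawn
# as a triangle whose color encodes sensor certainty, and UI2 additionally
# embeds the UI1-style tick/cross symbols. All identifiers are illustrative.

CERTAIN = "green"      # sensor information is reliable
UNCERTAIN = "red"      # sensor has reached its limits

def sector_color(sensor_is_certain: bool) -> str:
    return CERTAIN if sensor_is_certain else UNCERTAIN

def ui2_state(front_certain: bool, rear_certain: bool) -> dict:
    """Colors for the front and back radar triangles shown around the
    ego vehicle, plus the embedded correctness symbols."""
    return {
        "front_triangle": sector_color(front_certain),
        "rear_triangle": sector_color(rear_certain),
        "front_symbol": "tick" if front_certain else "cross",
        "rear_symbol": "tick" if rear_certain else "cross",
    }
```

The four combinations of the two boolean inputs correspond to the four variations of sensor status examined later in the discussion.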

Experiment
To test the influence of the UI type on driver performance and trust in automation, an evaluation study was conducted with 17 participants (7 female), all holding a valid driving license, with an average age of 24.72 years (range 19-28 years). It is assumed that UI2 offers more support for drivers' decision-making and will be trusted more than UI0 and UI1.

Material
Overtaking obstacles on a two-lane rural road was used as the scenario. At the beginning, the ego vehicle drove at 100 km/h in the right lane, with an obstacle located 50 m ahead. As the ego vehicle approached the obstacle, three scenarios differed regarding the appearance of an oncoming vehicle or a vehicle behind the ego vehicle: 1) no vehicles were present in the surroundings; 2) an oncoming vehicle was approaching in the left lane; 3) the vehicle behind the ego vehicle was starting to overtake. These scenarios were edited into three videos of 15 s each, using Vicom Editor [19].

Procedure
Participants sat in front of a 22.9-inch monitor and a keyboard. After signing the consent form and filling out the demographic questionnaire, they started the experiment, which was programmed using GNU Octave [15] and Psychtoolbox-3 [16]. Each trial began with one video being displayed on the monitor. At the end of the video, a beep sound indicated the transition, and one of the UIs was then presented on the same monitor. After being informed of the obstacle ahead in each UI, participants made their decision by pressing button 1, 2, or 3 on the keyboard, corresponding to the action options (1 = stop the car, 2 = circuit the obstacle, 3 = take over the control) (see Figures 1-3). Once the action had been selected, the chosen option was highlighted in blue. Participants started the next trial by pressing the space bar. A within-subject design was used, and each participant completed a total of 108 trials (36 trials for each UI across all scenarios). The three scenarios were displayed randomly and equally often within the three UI conditions, and the UI blocks were counterbalanced. After finishing the block with one UI, participants filled out the trust, acceptance, and usability questionnaires [12,7,18,11].
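The trial structure described above (counterbalanced UI blocks, scenarios balanced and shuffled within each block) can be sketched as follows. The experiment itself was programmed in GNU Octave with Psychtoolbox-3; this Python sketch only illustrates the design, and the scenario labels and rotation scheme are assumptions.

```python
import random

UIS = ["UI0", "UI1", "UI2"]
SCENARIOS = ["no_traffic", "oncoming", "rear_overtaking"]  # illustrative labels
TRIALS_PER_UI = 36  # 108 trials in total per participant

def build_trial_list(participant_id: int, seed: int = 0) -> list:
    """Return the ordered (UI, scenario) trials for one participant.

    UI blocks are counterbalanced across participants by rotating the
    block order; within each block the three scenarios appear equally
    often (12 times each) in random order.
    """
    rng = random.Random(seed + participant_id)
    # Rotate the block order for counterbalancing (one simple scheme).
    offset = participant_id % len(UIS)
    block_order = UIS[offset:] + UIS[:offset]

    trials = []
    for ui in block_order:
        block = SCENARIOS * (TRIALS_PER_UI // len(SCENARIOS))  # 12 of each
        rng.shuffle(block)
        trials.extend((ui, s) for s in block)
    return trials
```

For example, participant 0 sees UI0 first, participant 1 sees UI1 first, and so on, while every participant receives exactly 12 trials of each scenario per UI.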

Results
The data of two participants had to be excluded due to technical problems, resulting in 1620 (15*108) trials that were analyzed in SPSS 24. Figure 4 shows the influence of the UI type on the reaction time: the reaction time with UI0 appears to be the shortest. A Friedman test was conducted to examine the effect of the UI type on the reaction times and revealed a statistically significant effect. Figure 5 shows the influence of the UI type and the type of chosen action on the number of selected actions. The choices of the second and third actions changed more with UI2 than with UI0 and UI1. A two-way repeated-measures analysis of variance (ANOVA) showed that the interaction between the type of UI and the type of chosen action is statistically significant (F(2.448, 34.269) = 4.900, p = .009). Additional paired-samples t-tests show that the option "circuit the obstacle" was chosen less often with UI2 than with UI0 and UI1, while the option "take over the control" was chosen more often with UI2 than with UI0.
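The Friedman test compares related samples by ranking each participant's reaction times across the three UI conditions. A minimal sketch (without tie handling) is shown below; the reaction-time values are made up for illustration and are not the study's data, which were analyzed in SPSS.

```python
# Minimal sketch of the Friedman test over k related samples of equal
# length n. Within-row ties are not handled in this sketch.

def friedman_statistic(*conditions):
    """Friedman chi-square: rank each participant's scores across the
    conditions, then test whether the rank sums differ."""
    k = len(conditions)
    n = len(conditions[0])
    rank_sums = [0.0] * k
    for i in range(n):
        row = sorted((cond[i], j) for j, cond in enumerate(conditions))
        for rank, (_, j) in enumerate(row, start=1):
            rank_sums[j] += rank
    return (12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1))
            - 3.0 * n * (k + 1))

# Hypothetical per-participant reaction times (s) for UI0, UI1, UI2
# (15 participants, matching the sample size after exclusions):
rt_ui0 = [1.6, 1.7, 1.8, 1.5, 1.9, 1.6, 1.7, 1.8, 1.6, 1.5, 1.7, 1.9, 1.6, 1.8, 1.7]
rt_ui1 = [2.0, 2.1, 1.9, 2.2, 2.0, 2.1, 2.3, 2.0, 1.9, 2.2, 2.1, 2.0, 2.2, 2.1, 2.0]
rt_ui2 = [2.2, 2.3, 2.1, 2.4, 2.2, 2.3, 2.5, 2.2, 2.1, 2.4, 2.3, 2.2, 2.4, 2.3, 2.2]
chi2 = friedman_statistic(rt_ui0, rt_ui1, rt_ui2)
```

In this constructed example every participant is fastest with UI0 and slowest with UI2, so the statistic reaches its maximum for n = 15, k = 3.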

Trust, Acceptance and Usability
A one-way repeated-measures ANOVA was conducted to compare the effect of the UI type on the two trust questionnaires [12,7]. There is no statistically significant main effect of the UI type on the trust scores of [12] (F(2, 28) = .795, p = .461), nor on the trust scores of [7].

Discussion
Regarding reaction times, the proposed UI2 shows no advantage over UI0. One interpretation is that, compared to UI0, which displays no sensor information, the larger amount of information on UI2 simply requires more time to interpret. With respect to the selected actions, participants chose "take over the control" more often with UI2 than with the other UIs, which indicates that the more participants know about the status of the sensors, the more they tend to take control themselves. To further explore which of the four variations of sensor status in UI2 influenced the numbers of selected actions most, only descriptive statistics were computed, due to the limited number of trials. They show that the two variations in which one sensor is certain while the other is uncertain influenced the action choice rates more strongly than the variations in which both sensors show the same (un)certainty. This implies that once participants know that one of the sensors does not work, they become more uncertain and consequently prefer to "take over the control".

Future Work
A limitation of the current work is that the influence of system uncertainty on drivers' decisions was investigated only with a monitor and a keyboard. In the future, the influence of presenting certain or uncertain sensor information on trust in automation needs to be investigated systematically in a driving simulator. Moreover, the criticality of the situations should be taken into account. In addition, it should be studied how to foster appropriate trust in the proposed user interface by presenting sufficiently specific sensor information on the one hand, while not overwhelming drivers with information on the other. Last but not least, augmented reality can be considered as a possible supplement to the proposed interface for visualizing the sensor information spatially.