Using an Artificial Agent as a Behavior Model to Promote Assistive Technology Acceptance

Abstract. Despite technological advancements in assistive technologies, studies show high rates of non-use. Because of the rising numbers of people with disabilities, it is important to develop strategies to increase assistive technology acceptance. The current research investigated the use of an artificial agent (embedded into a system) as a persuasive behavior model to influence individuals' technology acceptance beliefs. Specifically, we examined the effect of agent-delivered behavior modeling vs. two non-modeling instructional methods (agent-delivered instructional narration and no-agent, text-only instruction) on individuals' computer self-efficacy and perceived ease of use of an assistive technology. Overall, the results of the study confirmed our hypotheses, showing that the use of an artificial agent as a behavioral model leads to increased computer self-efficacy and perceived ease of use of a system. The implications for the inclusion of an artificial agent as a model in promoting technology acceptance are discussed.


Introduction
Today's world runs on computers. It is difficult to imagine life without access to the internet or without being able to communicate and share experiences with other people on social media. However, individuals with physical disabilities face serious challenges in operating computers, which negatively affect their opportunities for employment, social inclusion, and independence. Although various Assistive Technologies (ATs) exist and are becoming more and more technologically advanced, the literature still warns about high rates of AT non-use [1]. Since the number of potential AT users is currently very high and is expected to continue growing in the years to come [2], strategies that aim at increasing AT acceptance are required. Earlier research has shown that technology acceptance depends to a large extent on factors related to individual beliefs and attitudes towards a system [3,4]. Persuasive Technologies (for an overview, see [5]) could be a key solution to AT acceptance. Though persuasive technology can take on many roles, findings suggest that it can be more persuasive when it takes the form of a social agent [6]. That is, artificial agents (on-screen animated characters) might be very powerful technological persuaders, due to their ability to simulate social interaction [7]. In the current study, we argue that an artificial agent, embedded into an AT, could promote AT acceptance by influencing its underlying constructs.
The identification of the constructs associated with technology acceptance has received much attention. A major construct linked to AT adoption is computer self-efficacy (one's belief about one's ability to perform a specific computer activity) [8]. This construct has its origins in Bandura's social cognitive theory [9], where self-efficacy is defined as "people's judgments of their capabilities to organize and execute courses of action required to attain designated types of performances" (p. 391). Due to the idiosyncratic nature of self-efficacy judgments in particular domains, a distinction has been drawn between general computer self-efficacy (one's judgments of efficacy across multiple computer application domains) and specific computer self-efficacy (one's perceptions of ability to perform specific computer-related tasks) [10,11]. Overall, the basic principle behind self-efficacy theory is that individuals are more likely to engage in activities for which they have high self-efficacy and less likely to engage in those for which they do not. Similarly, those with higher levels of computer self-efficacy would believe themselves capable of taking on a wide range of challenging computer tasks and successfully completing them.
Besides individuals' beliefs about their own abilities, an AT itself has unique features that could encourage or impede its acceptance. Indeed, it has been acknowledged that an AT can only be fully used if an easy and intuitive way of using it is ensured. However, technological advancements alone do not increase the ease of AT usage. The Technology Acceptance Model (TAM), a widely used theoretical model examining individual reactions towards computing technology, recognizes users' perceived ease of use of a specific system as one of the two beliefs (together with perceived usefulness) that drive individuals' intention to use a system [12]. Specifically, perceived ease of use has been defined as the degree to which the prospective user expects the target system to be free of effort. Computer self-efficacy has been found to be a major determinant of perceived ease of use (with specific computer self-efficacy being a more proximal predictor [13]), especially in the absence of any direct experience with a system [12].
Training has been suggested as one of the most important interventions to enhance constructs of AT acceptance during the early stage of AT use (see e.g. [14] and [15]). Behavior modeling has been found to be a very powerful instructional method across a diverse range of behavioral domains, including the adoption of technological innovations. This concept, which originated in Bandura's social cognitive theory, posits that much of our learning derives from vicarious experience and advocates modeling, in which a person (the so-called 'model') demonstrates and explains how to solve a given problem [9,10]. One of the principal mechanisms by which behavior modeling operates is self-efficacy. Research on behavior modeling in computer training indicated that behavior modeling yields higher scores of computer self-efficacy and subsequently better task performance, compared to other non-modeling instructional methods, such as lecture-based instruction and self-study manuals [14,16]. Though it has been suggested that behavior modeling would be an effective method to influence perceived ease of use (due to its impact on individuals' self-efficacy), this has not been empirically tested.
Despite the fact that human models have been found to strongly influence people's beliefs, human instructors are not always available [17]. In this study, we argue that artificial behavior models could be as effective as persuasive human models in enhancing individuals' computer self-efficacy and (subsequently) perceived ease of use of an AT, due to their ability to simulate human-human interaction [7]. The potential of replacing human models with artificial ones has received some attention [e.g., 18,19]. Nonetheless, to our knowledge, earlier literature provides no direct evidence that behavior modeling by an artificial agent can enhance beliefs such as computer self-efficacy and perceived ease of use.

Current work
In the current study, we investigated whether an agent that models an AT-related behavior (i.e., demonstration and verbal instruction) can enhance individuals' computer self-efficacy and perceived ease of use of this AT, as compared to other non-modeling instructional methods. We further tested whether computer self-efficacy mediates the effect (if any) of the type of instructional method on perceived ease of use, as suggested by earlier literature [e.g., 12]. To test our hypotheses, we compared the agent-delivered behavior modeling condition to two frequently used non-modeling instructional methods: an agent-delivered instructional narration (behavior modeling absent) and a no-agent, text-only instruction (i.e., both behavior modeling and agent absent). Specifically, the agent-delivered instructional narration condition (i.e., lecturing) contains an agent that only provides verbal instructions on how to use an AT, while the AT-related features are presented in a slideshow. The no-agent, text-only instruction condition (i.e., user manual) does not contain an on-screen agent. Instead, it includes the AT instructions in written form, accompanied by a slideshow of the AT-related features.
We predicted that an agent-delivered behavior modeling will be more effective in enhancing individuals' computer self-efficacy beliefs (H1), and, perceptions of ease of use (H2), as compared to the non-modeling methods. Moreover, we expected computer self-efficacy to mediate the effect of the type of instructional method on perceived ease of use (H3).
In line with recommendations of earlier studies that self-efficacy beliefs are situation-specific, we examined the impact of agent-delivered behavior modeling on specific computer self-efficacy. Nonetheless, because general computer self-efficacy has been found to impact specific computer self-efficacy, we examined our hypotheses controlling for the effect of general computer self-efficacy on both dependent variables.

Participants and design
A total of 197 individuals participated in the study. The participants were recruited using a local participant database, and most of them were students from Eindhoven University of Technology. Of these participants, 122 (61.9%) were male and 74 (37.6%) were female (one person did not answer the question about gender). The age of the sample ranged from 19 to 29 years, with a mean age of 23 (SD = 2.4). One hundred fifteen participants were educated to undergraduate level or higher, and 77 had completed high school (5 persons did not state their educational background). The vast majority of the participants (95.5%) reported using computers on a daily basis, and 82.5% reported using a computer for more than 12 hours per week. The average general computer self-efficacy of the sample was high (M = 5.5, SD = 0.7), which is in line with the participants' reported extensive computer use. Nevertheless, more than half of the participants (63.5%) reported no previous experience with using assistive computer technologies (i.e., software and/or hardware).
The study employed a between-subjects design, with the participants randomly assigned to one of three experimental conditions: agent-delivered behavior modeling, agent-delivered instructional narration, and no-agent, text-only instruction. We interviewed the first 10 participants after the debriefing to evaluate the success of our experimental manipulation and found that the three instructional methods were recognized as intended. The study's dependent variables were specific computer self-efficacy and perceived ease of use. The only inclusion criterion was fluency in English. Overall, the duration of the study was approximately 20 minutes, for which participants received €5 as compensation.

Apparatus
The content of the instruction in the current study pertained to eye-tracking software called GazeTheWeb (GTW). GTW is a web browser developed to be controlled solely with the eyes, using eye-tracking hardware (see [20]). The 3D animated artificial agent implemented in this study was created using the CrazyTalk 8 software (https://www.reallusion.com/crazytalk/).

Artificial agent
The agent was designed to resemble participants' characteristics in terms of appearance, according to the guidelines derived from earlier literature [18,19]. Since the majority of the participants were young Dutch students, the agent was designed to be young (~25 years), attractive (as manipulated by the agent's facial features) and "cool" (as manipulated by the agent's clothing and hairstyle).

Materials
The main dependent variable for the first hypothesis was specific computer self-efficacy. Specific computer self-efficacy was assessed by asking participants to answer 5 self-constructed questions regarding their perceived ability to perform the necessary steps of the instructed computer task, using GTW. Specifically, to develop measures for the specific computer self-efficacy construct, the recommendations provided by past work were closely followed [11,13]. Participants answered these 5 questions on a 7-point rating scale, ranging from 1 (strongly disagree) to 7 (strongly agree). We constructed a reliable measure (Cronbach's α = .80) of specific computer self-efficacy by averaging participants' answers to this set of questions. General computer self-efficacy was assessed by asking participants to answer 8 questions regarding their perceived ability to use unfamiliar computer technologies in general. This 8-item scale was originally created by [11]. Participants answered these questions on the same 7-point rating scale. We constructed a reliable measure of general computer self-efficacy (Cronbach's α = .75) by averaging participants' answers to this set of questions.
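The scale-construction step described above (checking internal consistency with Cronbach's alpha, then averaging item ratings into a scale score) can be sketched as follows. This is a minimal, stdlib-only illustration with made-up ratings, not the study's data or analysis code; the formula is the standard Cronbach's alpha.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents x items matrix of ratings."""
    k = len(item_scores[0])                      # number of items
    columns = list(zip(*item_scores))            # transpose to per-item columns
    item_var = sum(pvariance(col) for col in columns)
    total_var = pvariance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

def scale_score(item_scores):
    """Per-participant scale score: the mean of that participant's item ratings."""
    return [sum(row) / len(row) for row in item_scores]

# Hypothetical 7-point ratings: 4 respondents x 3 items
ratings = [
    [7, 6, 7],
    [5, 5, 6],
    [3, 4, 3],
    [6, 6, 7],
]
alpha = cronbach_alpha(ratings)   # ~0.95 for this toy data
scores = scale_score(ratings)
```

Values above roughly .70 are conventionally taken as acceptable reliability, which is why the paper reports alpha for each averaged scale.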
The main dependent variable for the second hypothesis was system-specific perceived ease of use. Perceived ease of use was assessed by asking participants to answer 4 questions regarding their personal evaluation of the mental effort that is needed to use GTW. This 4-item scale was originally created by [21,22]. Participants could answer these questions by choosing an option on a 7-point rating scale, ranging from 1 (strongly disagree) to 7 (strongly agree). We constructed a reliable measure of perceived ease of use (Cronbach's α = .81) by averaging participants' answers to this set of questions.
For exploratory reasons, we also assessed whether there was any effect of the two agent-delivered instruction methods on participants' judgments about the qualities of the artificial agent. The "Godspeed" questionnaire [23] was used to measure three key concepts of Human-Computer Interaction, namely anthropomorphism, animacy, and likeability. This questionnaire was administered on a 7-point semantic differential scale. We constructed reliable measures of anthropomorphism (Cronbach's α = .81), animacy (Cronbach's α = .91), and likeability (Cronbach's α = .82) by averaging participants' answers to each set of questions.
Lastly, demographic questions of age, gender, education, and level of computer use were asked.

Procedure
Participants were welcomed in the central hall of the lab building. Each participant was asked to read and sign an informed consent form, stating the general purpose of the research and their willingness to participate in this study. Then, participants were randomly assigned to one of the three outlined experimental conditions and asked to watch an instructional video (split into two screens) on how to perform a web search using GTW. The manipulation of the agent-delivered modeling took place while the participants watched the video. In more detail, the video in the agent-delivered modeling condition was split into the following two screens: on the right-hand side, an artificial agent appeared to use the GTW system to demonstrate a computer task (e.g., a web search) by moving the head and eyes, while verbally explaining the system features involved in such a task; the left-hand side of the screen contained a display of the system, exposing participants to the progressive effects of the agent's web search actions in real time (see Fig. 1a).
The video in the agent-delivered instructional narration condition was split into the following two screens: on the right-hand side, the (same) artificial agent appeared motionless, with its main function being the provision of (the same) verbal instructions on how to conduct a web search using GTW (i.e., explaining the task-related features of the system); the left-hand side of the screen contained a display of the system, exposing participants to progressive screenshots of the system with labels highlighting the commands the verbal explanation was referring to at each moment (see Fig. 1b).
Finally, the no-agent, text-only instruction condition was identical to the agent-delivered instructional narration condition, with the only difference being that the agent was replaced by a text box with written instructions. Thus, participants in this condition were provided with the same system instructions, but they could not see or listen to the agent. The left-hand side of the screen was identical to that of the agent-delivered instructional narration condition (i.e., labels highlighting the system's commands) (see Fig. 1c).
After the end of the instructional videos, participants were requested to answer an online questionnaire. Lastly, they were debriefed, paid, and thanked for their contribution.

Fig. 1. Different types of instructional methods: (a) Agent-delivered behavior modeling; the agent tilts the head to focus its gaze on the system feature, which, as a result of this action, becomes activated (blue button on the left-hand side). (b) Agent-delivered instructional narration; the agent is motionless while explaining the system feature, which is highlighted in the left-hand side screenshot. (c) No-agent, text-only instruction; the agent has been substituted by a text box, which provides instructions on the function of the system feature, highlighted in the left-hand side screenshot.

Results
Specific computer self-efficacy: A one-way analysis of covariance (ANCOVA) was conducted to determine the effect of the type of instructional method on participants' specific computer self-efficacy, after controlling for their general computer self-efficacy. Results showed that the covariate general computer self-efficacy was significantly related to specific computer self-efficacy, F(1, 193) = 38.68, p < .001, ηp² = .16. After controlling for general computer self-efficacy, the significant main effect of the type of instruction on specific computer self-efficacy remained, F(2, 193) = 6.83, p < .01, ηp² = .06. Planned contrasts revealed that specific computer self-efficacy was significantly higher for the participants in the agent-delivered modeling condition (N = 66, M = 6.1, SD = .8), as compared to the participants in the agent-delivered instructional narration condition (N = 66, M = 5.6, SD = .9), t(193) = -3.48, p < .01, and as compared to participants in the text-only instruction condition (N = 65, M = 5.7, SD = .9), t(193) = -2.82, p < .01. No significant difference was found between participants in the two non-modeling conditions after controlling for general self-efficacy.
Perceived ease of use: A one-way ANCOVA was conducted to determine the effect of the type of instructional method on participants' perceived ease of use, after controlling for their general computer self-efficacy. Results demonstrated that the covariate general computer self-efficacy was significantly related to perceived ease of use. Similarly, no significant difference was found between participants in the two non-modeling conditions after controlling for general self-efficacy.
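The one-way ANCOVA reported above can be understood as a regression model comparison: a full model containing dummy-coded condition plus the covariate is compared against a covariate-only model, and the F statistic for the condition effect comes from the drop in residual sum of squares. The sketch below illustrates this logic with invented data (it is not the study's dataset, and the variable names are placeholders), using a stdlib-only least-squares fit.

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def ols_rss(columns, y):
    """Fit y on the predictor columns (incl. intercept); return the residual SS."""
    n, k = len(y), len(columns)
    XtX = [[sum(columns[i][t] * columns[j][t] for t in range(n)) for j in range(k)]
           for i in range(k)]
    Xty = [sum(columns[i][t] * y[t] for t in range(n)) for i in range(k)]
    beta = solve(XtX, Xty)
    fitted = [sum(beta[j] * columns[j][t] for j in range(k)) for t in range(n)]
    return sum((y[t] - fitted[t]) ** 2 for t in range(n))

# Hypothetical data: 3 conditions x 4 participants, plus a covariate
y   = [6.2, 6.0, 6.4, 5.9, 5.5, 5.7, 5.4, 5.8, 5.6, 5.8, 5.5, 5.7]  # outcome
cov = [5.4, 5.2, 5.8, 5.1, 5.5, 5.9, 5.0, 5.6, 5.3, 5.7, 5.2, 5.6]  # covariate
d1  = [1] * 4 + [0] * 8              # dummy: modeling condition
d2  = [0] * 4 + [1] * 4 + [0] * 4    # dummy: narration condition
ones = [1.0] * 12

rss_full    = ols_rss([ones, d1, d2, cov], y)   # condition + covariate
rss_reduced = ols_rss([ones, cov], y)           # covariate only
df1, df2 = 2, 12 - 4                            # condition df, residual df
F = ((rss_reduced - rss_full) / df1) / (rss_full / df2)
```

Because the reduced model is nested in the full one, the residual sum of squares can only drop when the condition dummies are added; the F test asks whether that drop is larger than chance.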
Judgments of the agent's qualities: For exploratory purposes, a one-way multivariate analysis of variance (MANOVA) was conducted to examine whether the agent functioning as a behavior model, while providing more social cues, would affect individuals' judgments about the agent's qualities of likeability, animacy, and anthropomorphism, as compared to the agent functioning as a verbal instructor only. The results revealed a statistically significant MANOVA effect of the type of the agent's instructional method on the three dependent variables combined, Wilks' Λ = .926, F(3, 128) = 3.401, p = .02, ηp² = .074. A series of one-way ANOVAs on each of the three dependent variables was conducted as follow-up tests to the MANOVA. We found a significant difference between participants in the two conditions in their likeability judgments of the agent, F(1, 130) = 5.50, p = .02, ηp² = .041, with participants' liking of the agent being higher in the agent-delivered behavior modeling condition (N = 66, M = 3.8, SD = 1.1), as compared to the agent-delivered instructional narration condition (N = 66, M = 3.3, SD = 1.0). Findings showed no evidence for a significant difference between participants in the two conditions in the agent's animacy judgments, F(1, 130) = .08, p = .77, ηp² = .001, or in the agent's anthropomorphism judgments, F(1, 130) = 2.32, p = .13, ηp² = .018.
Mediation effects on perceived ease of use: Our aim was to test whether specific computer self-efficacy could explain part of the anticipated effect of the type of instructional method on perceived ease of use. Since we found differences in perceived ease of use only between the two agent conditions, the mediation analysis compared these conditions. In addition, we also included the agent likeability judgment as a potential mediator. That is, we could not ignore the possibility that the difference found in participants' affective state towards the agent (likeability judgments) might have influenced their perceptions of the system's ease of use.
A regression analysis was conducted, using dummy coding for the two conditions (behavior modeling vs. instructional narration). The analysis was performed using the PROCESS custom dialog for SPSS, as developed by [24]. The results are reported in Figure 2; below we provide a summary of the main findings. The analysis showed that the type of agent instructional method was a significant predictor of perceived ease of use, R² = 3.6% (i.e., the c path in Figure 2), as well as of both specific computer self-efficacy, R² = 6.9%, and agent likeability judgments, R² = 4.1% (i.e., the a paths in Figure 2). In turn, participants' stronger specific computer self-efficacy beliefs and agent likeability judgments were found to be associated with stronger perceptions of ease of use of the system (i.e., the b paths in Figure 2). After the inclusion of the mediators, the effect of the type of agent instructional method on perceived ease of use became non-significant (i.e., the c' path in Figure 2), indicating full mediation. Together, the b paths and the c' path explained R² = 31.5% of the variance in perceived ease of use.
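The path logic behind such a mediation analysis can be sketched with three ordinary least-squares regressions: the total effect c of condition on the outcome, the a path from condition to the mediator, and the b and c' paths from a model containing both predictors. For OLS with a single mediator, c = c' + a·b holds exactly. The sketch below uses one hypothetical mediator and invented data; it illustrates the decomposition only, not the PROCESS macro or the study's two-mediator model.

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def ols(columns, y):
    """OLS coefficients for y on the given predictor columns (incl. intercept)."""
    n, k = len(y), len(columns)
    XtX = [[sum(columns[i][t] * columns[j][t] for t in range(n)) for j in range(k)]
           for i in range(k)]
    Xty = [sum(columns[i][t] * y[t] for t in range(n)) for i in range(k)]
    return solve(XtX, Xty)

# Illustrative data: X = condition dummy (0 = narration, 1 = modeling),
# M = hypothetical mediator (e.g., specific self-efficacy), Y = ease of use
X = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
M = [4.1, 4.5, 4.3, 4.7, 4.2, 4.6, 5.8, 6.1, 5.9, 6.3, 6.0, 5.7]
Y = [4.0, 4.4, 4.2, 4.5, 4.1, 4.6, 5.5, 6.0, 5.7, 6.2, 5.9, 5.6]
ones = [1.0] * len(Y)

c_total = ols([ones, X], Y)[1]        # total effect of condition on Y
a = ols([ones, X], M)[1]              # a path: condition -> mediator
_, c_prime, b = ols([ones, X, M], Y)  # c' (direct effect) and b path
indirect = a * b                      # indirect effect through the mediator
```

"Full mediation" in the paper's sense corresponds to the direct path c' becoming non-significant once the indirect component a·b is accounted for.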

Discussion
The current study investigated the influence of an artificial agent as a behavior model, as compared to two non-modeling instructional methods (with and without an on-screen agent), on users' computer self-efficacy and perceived ease of use of a system, in the context of a novel AT training.
The results of the current study supported our first hypothesis, showing that participants in the agent-delivered modeling condition reported higher computer self-efficacy, as compared to participants in the two non-modeling conditions. This effect remained even when controlling for participants' general computer self-efficacy. We found no difference between participants' self-efficacy scores in the two non-modeling conditions. This finding is in line with earlier research that showed the effect of behavior modeling (conducted by a human model) on users' computer self-efficacy, as compared to other non-modeling methods (i.e., lecture training and self-manual) [14,16]. These findings indicate that an agent that models an observed behavior (rather than merely explaining such a behavior) can increase participants' beliefs about their own capabilities to use an AT.
Furthermore, we found that participants in the agent-delivered modeling condition had higher scores of perceived ease of use compared to participants in the agent-delivered instructional narration condition, both before and after controlling for their general computer self-efficacy. However, contrary to our hypothesis, no differences in perceived ease of use were found between participants in the agent-delivered modeling and the no-agent, text-only conditions. Therefore, our second hypothesis was only partially supported. We argue that a possible explanation is that participants in the text-only instruction condition (as opposed to the two agent conditions) did not gather sufficient system-specific experience, because they were required to split their attention between the mutually referring written text and pictures in order to mentally integrate them (i.e., the split-attention effect) [25]. Thus, in accordance with the rationale of the TAM, due to the lack of system-specific experience, these participants could have relied on their general positive beliefs about technologies when assessing the ease of use of the specific AT [26]. Future research could examine which general factors, other than general computer self-efficacy, could serve as anchoring beliefs in the absence of system-specific experience.
In line with past literature [12], our mediation analysis showed that the difference we found in perceived ease of use between participants in the two agent conditions was explained by the differences in their specific computer self-efficacy. Additionally, we found agent likeability judgments to explain part of the difference in ease of use between participants in the two agent conditions. Lastly, we found no difference in perceived ease of use between participants in the two non-modeling conditions. Overall, the results suggest that an agent that models an observed behavior (rather than merely explaining such a behavior) can enhance participants' beliefs about the ease of use of an AT.
Although the effectiveness of the agent-delivered behavior modeling was not tested with physically disabled individuals, for reasons related to practicality (i.e., transportation-related issues) and convenience (i.e., statistical power), we believe that the findings can also be generalized to this population. The study's results provide evidence that agent-delivered modeling can enhance individuals' computer self-efficacy and perceived ease of use of a system, as compared to other non-modeling methods, even after controlling for their general computer self-efficacy. Modeling has been shown to be a more effective instructional method for people with minimal prior system experience [16]. The fact that the study's participants had high general computer self-efficacy and extensive computer experience is an indicator that modeling could likewise be effective (if not more effective) for those with low general computer self-efficacy and/or minimal general computer experience.
Lastly, although the study's findings provide evidence that an artificial agent can be an effective behavioral model, the study's design does not allow us to make inferences about the mere influence of the agent on individuals' beliefs. However, the fact that participants in the agent-delivered modeling condition perceived the agent as more likeable than those in the agent-delivered instructional narration condition provides some evidence that the agent-modeling manipulation was both successful and effective. Future research should examine the mere effect of an artificial model on people's beliefs and other learning gains (i.e., performance), as well as the conditions of their use as models.
Overall, the current research revealed that an artificial agent, embedded in an AT, can serve as an effective behavior model, increasing individuals' computer self-efficacy, which in turn leads to higher perceptions of ease of use of this AT. Thus, this study adds to earlier work by providing evidence for the use of artificial behavior modeling as a strategy to maximize AT adoption. Such findings are important for AT acceptance, as well as for technology acceptance in general.