Simulated Extreme Experiential Training for Engaging with
Automation
Dave B. Miller
University of Central Florida, 4000 Central Florida Blvd, Orlando, Florida, USA

                 Abstract
                 As Advanced Driver Assistance Systems (ADAS) become more widespread and capable,
                 enhancing road vehicle safety and performance, drivers will increasingly encounter situations
                 where system abilities are unclear, human/machine roles are not well defined, and human
                 performance may demand augmentation. How might experiential training help drivers to learn
                 the capabilities and limits of an unfamiliar ADAS, and what level of intensity is required for
                 effective training transfer? In the manner that trust-building exercises (like the ‘trust fall’ from
                 a picnic table into teammates’ arms) calibrate human-human trust, can experiential training in
                 ‘edge-case’ and extreme driving scenarios be delivered through simulation to drivers, providing
                 training on an unfamiliar ADAS, such that they build an accurate mental model of their vehicle’s
                 capabilities and limits, and of their own?
                 Keywords
                 training, simulation, VR, experiential training

1. Introduction

    Automated, autonomous, and artificially-intelligent (A3) systems [14] are increasingly being
integrated into road vehicles sold to members of the public [4]. Without an understanding of the
behavior of the A3 systems in their vehicle, drivers may suffer an “automation surprise” [13] and adverse
automated system behavior [5], either due to the failure of the system, or as a result of being unprepared
for system actions—irrespective of whether those actions are correct or incorrect responses to
environmental conditions or driver behavior. Training can provide drivers with a more accurate mental
model of the system [9,10,15] and its responses to various road conditions, and thus allow drivers to
better anticipate its behavior and to work with the system rather than against it. As was noted by Farmer et
al. [2], drivers who were unfamiliar with antilock braking systems did not properly take advantage of
the safety benefits afforded, and in some cases inadvertently defeated the system when surprised by its
behavior, causing crashes or increasing their severity. More advanced systems that can do far more than
mitigate a skid or brake in response to an imminent forward collision enhance safety and driving
performance–but only if drivers appropriately trust the vehicle’s automated systems and their own
driving abilities. Misuse and abuse of automated systems [11], and poor correspondence between the
driver’s mental model of the system and the reality of its limits could increase crashes, especially in
situations of human-system conflict. If drivers are surprised by the actions of the system, they may
wrestle with it, even if the system is acting correctly, increasing risks; likewise, they need to know when
the system will fail or err and thus when to act against it. Experiential training can ameliorate this
problem by teaching drivers when to trust the system, and when to fight it [7].
    Training using simulation has a long history in aviation [6], and flight training often includes
exposure to both routine and emergency situations. Providing drivers firsthand experience with how
their vehicle will behave in extreme situations, such as on an icy road surface or in a near-collision,
how automated features will enhance their driving capabilities, and how automation will act
to protect the driver and others in rare situations (edge cases), should improve correspondence between
the driver’s mental model of the system and reality–and allow the driver to better use the system to
enhance safety. And such experiences can also provide the driver insight into how they will behave in

such situations and what their own capabilities and limits are. Driving simulation is now both realistic
and inexpensive enough to be deployed widely, both to university research labs and to locales such as
automobile dealerships, enabling drivers to experience otherwise rare and dangerous situations in
complete safety, with computer-based coaching available (see Figure 1).




Figure 1: A high-presence driving simulator can be used to provide a training experience, or can be
used to test the effects of experiential training.

2. Training Drivers with Simulated Extreme and Edge-Case Driving Situations

    To examine the efficacy of training drivers using virtual reality simulation, and to assess the benefit
of providing drivers with experiential training that includes extreme and edge-case situations, a two-stage
study can be employed: a training phase followed by a testing phase. The initial training experience,
which includes either only routine situations or a combination of routine and extreme situations, will be
followed by a testing experience that includes both routine and extreme situations and, to study training
transfer, situations encountered in training as well as novel situations similar but not identical to those
seen in training; see Table 1 and Figure 2, and the illustrative sketch following Table 1.

Table 1
Experimental Conditions

Condition | Training Stage | Testing Stage
Routine Situation Training | Explicit training in ADAS capabilities under normal conditions. | Assessment of driving with ADAS in normal and extreme situations.
Routine + Extreme Situation Training | Explicit training in ADAS capabilities under normal and extreme situations. | Assessment of driving with ADAS in normal and extreme situations.
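
    To make the design concrete, a minimal sketch follows of how the two conditions and their
event sequences could be encoded for a simulator scripting layer. The structure names, and the
exact orderings (abridged from Figure 2), are assumptions for illustration rather than the study's
implementation.

    # Illustrative only: the two training conditions of Table 1 and
    # abridged event sequences in the style of Figure 2. Letters label
    # situations reused from training; X and Y label novel situations.
    # "save" = the ADAS acts correctly; "failure" = the ADAS errs.
    import random

    ROUTINE_TRAINING = [
        ("acclimation", None),
        ("routine", "A"), ("system_save", "A"),
        ("routine", "B"), ("system_save", "B"),
        ("routine", "C"), ("routine", "D"), ("system_save", "C"),
    ]
    ROUTINE_EXTREME_TRAINING = [
        ("acclimation", None),
        ("routine", "A"), ("system_save", "A"),
        ("extreme", "A"), ("system_failure", "A"),
        ("routine", "B"), ("system_save", "B"), ("system_failure", "B"),
    ]
    TESTING = [  # identical for all participants
        ("acclimation", None),
        ("routine", "A"), ("system_save", "A"),
        ("extreme", "A"), ("system_failure", "A"),
        ("routine", "X"), ("system_save", "X"),
        ("routine", "Y"), ("extreme", "X"), ("system_failure", "X"),
    ]

    def assign_training(rng: random.Random) -> list:
        """Random between-subjects assignment to a training sequence."""
        return rng.choice([ROUTINE_TRAINING, ROUTINE_EXTREME_TRAINING])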

   Following the training stage, participants will be provided with a questionnaire probing their
understanding of the ADAS capabilities and limits, their expectations of what it will do in situations
they encountered in the training, and its behavior in novel situations, to gauge their understanding of
how the ADAS will function in situations they did not encounter. This questionnaire set will also
evaluate participants’ self-reported trust in the ADAS, and this reported trust will be compared with
their driving behavior in the simulated driving experiences–where they may demonstrate trust or
distrust of the system through reliance behaviors. Participants’ mental model of the system, and that
model’s correspondence with the actual capabilities and limits of the system, will be assessed through
a mental model evaluation exercise similar to the one developed by Rozenblit and Keil [12]. This
technique combines interview and survey methods to assess the depth of the participant’s understanding
and how well that understanding matches the actual system design. Evaluation of driver behavior
demonstrating reliance has been used previously in simulation research [3,8], and the proposed research
will extend this work by examining the interaction of trust and reliance.
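
    As a minimal sketch of how reliance behaviors might be quantified from simulator logs and
related to self-reported trust, the fragment below assumes per-sample log fields and a simple
proportion measure; both are hypothetical illustrations, not the study's metric.

    # Illustrative only: a simple behavioral reliance measure from
    # simulator logs, and its correlation with self-reported trust.
    # Log fields and the metric itself are hypothetical.
    from dataclasses import dataclass
    from scipy.stats import pearsonr

    @dataclass
    class Sample:
        adas_engaged: bool     # ADAS active at this log sample
        driver_override: bool  # driver braked/steered against the ADAS

    def reliance_score(log: list) -> float:
        """Fraction of samples in which the driver left the ADAS engaged
        without overriding it; higher values indicate greater reliance."""
        if not log:
            return 0.0
        relied = sum(s.adas_engaged and not s.driver_override for s in log)
        return relied / len(log)

    def trust_reliance_correlation(trust_ratings, reliance_scores):
        """Per-participant correlation between self-reported trust and
        behavioral reliance, to examine their correspondence."""
        return pearsonr(trust_ratings, reliance_scores)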
   We hypothesize that training which exposes participants to a larger range of sample situations,
specifically extreme-condition driving and edge-case situations, will help participants build a more
accurate mental model of the system’s capabilities and limits, and better calibrate their trust in the
system.
   •    Hypothesis 1: Extreme situation and edge-case training will better calibrate trust in the ADAS,
   compared with training that only includes exposure to routine situations.
   •    Hypothesis 2: Extreme situation and edge-case training will allow participants to build a more
   accurate mental model of the ADAS, compared with only routine situation training.

    Participants will return for a second simulated driving experience after a delay of approximately one
month. This delay allows forgetting to occur, enabling an assessment of long-term retention of what is
learned in the training phase [1]. In the testing phase, all participants will be exposed to both
routine and extreme driving situations. Driver behavior in these situations can thus be compared
between participants who received both routine and extreme situation training and those who received
only the routine situation training. Following the testing phase simulation experience, participants will
again be provided with the questionnaire battery to assess their mental model of the ADAS and their
self-reported trust in the system.
    •     Hypothesis 3: Extreme situation and edge-case training in the training phase will help
    participants to perform better in extreme and edge-case situations in the testing phase.
    •     Hypothesis 4: Extreme situation and edge-case training in the training phase will help
    participants to perform better in routine situations encountered in the testing phase, due to greater
    training transfer (an illustrative analysis sketch follows this list).
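
    As a minimal sketch of how Hypotheses 3 and 4 might be tested, assuming a per-participant
performance score (e.g., hazard response time) aggregated over the testing phase, the following
fragment compares the two training groups; a full analysis of this design would more likely use a
mixed model over condition, situation type, and familiarity.

    # Illustrative only: between-groups comparison for Hypotheses 3/4.
    # The performance metric and variable names are placeholders.
    from scipy import stats

    def compare_training_groups(routine_only, routine_plus_extreme):
        """Welch's t-test on per-participant testing-phase performance
        scores between the two training conditions."""
        return stats.ttest_ind(routine_only, routine_plus_extreme,
                               equal_var=False)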

   The challenges presented will comprise varying weather (clear weather, fog, snow) and road
conditions (dry road, wet road, ice), and a set of obstacles including pedestrians, bicyclists, animals,
other vehicles, and obstructions such as rocks or debris. In addition, the training set will provide
examples of the failure modes of the ADAS: both false positive and miss situations, and examples
where the system acts to protect the driver from unseen hazards or threats that move faster than the
driver can react–taking advantage of the superhuman abilities of technological systems. These
challenges will provide a way to assess the participant driver’s ability to respond to situations they have
seen before, and those they have not, and thus assess training transfer to novel situations.
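
    To make the challenge space concrete, the fragment below enumerates it as a weather x
road-surface x obstacle x ADAS-behavior grid. This is a sketch under the assumption that scenarios
are specified declaratively; the field names are invented for illustration.

    # Illustrative only: the challenge space described above, crossed as
    # a grid. Field names are invented; a real study would sample a
    # balanced subset rather than run every combination.
    from itertools import product

    WEATHER = ["clear", "fog", "snow"]
    ROAD = ["dry", "wet", "ice"]
    OBSTACLE = ["pedestrian", "bicyclist", "animal", "vehicle", "debris"]
    ADAS_BEHAVIOR = ["correct_save", "false_positive", "miss"]

    def build_challenge_set():
        """Enumerate candidate trials for training and testing."""
        return [
            {"weather": w, "road": r, "obstacle": o, "adas": a}
            for w, r, o, a in product(WEATHER, ROAD, OBSTACLE, ADAS_BEHAVIOR)
        ]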
[Figure 2 diagram: three event sequences, each beginning with an acclimation drive. Routine
Situations Training interleaves routine situations A–F with system saves A–C. Routine + Extreme
Situations Training interleaves routine and extreme situations A–C with system saves A–B and system
failures A–B. Routine + Extreme Situations Testing mixes familiar routine and extreme situations (A)
with novel situations (X, Y), including system saves (A, X) and system failures (A, X).]
Figure 2: Outline of the training and testing event sequences as participants will experience them in
simulation. Participants will be randomly assigned to experience either the routine or routine +
extreme training program; all participants will subsequently experience the routine + extreme
situations testing sequence. Situations presented in the training experiences are labeled A, B, C, etc.
and situations presented to test transfer of training to novel situations are labeled X, Y, Z.

2.1.    Aims of Investigating Extreme Experiential Training in Simulation

    Studying training transfer from a lower fidelity and lower presence driving simulation (e.g. a limited
field of view simulator or HMD-based simulation) to a higher fidelity driving simulation (as an
analogue for the real road environment) will enhance the knowledge of how VR can be used for training
members of the public. Whereas in the past high-presence simulators were quite expensive and
limited in availability, the proliferation of VR and high-fidelity driving interface hardware makes it
possible to deliver training in locales such as automotive dealerships, rental agencies, driving schools,
and even in home environments. Investigating how VR-based training persists over time and how
learning can be generalized will have great value for other training-related areas in industry, as well as
in the driving sphere [1,10]. Understanding how training on a limited set of examples transfers to similar
but novel situations will expand the state of knowledge in education, training, HCI, and VR studies, in
addition to enhancing driver safety.
    As computer and robotic technologies have proliferated even in just the past few years into every
area of modern life, the understanding of how people, especially non-specialists without an engineering
or computing background, relate to technology needs to be updated. Measures of trust in technology,
for example the inventory developed by Jian, Bisantz, and Drury, may not fully encompass the relevant
theoretical constructs, as noted by Hancock et al. To update the tools for measuring self-reported trust,
we aim to develop more modern measures of trust in technology and technological agents specific to
interaction with agentic systems [15] such as partially-automated vehicles. These survey measures of
trust will be linked, through triangulation of methods, with behavioral measures of reliance, a related
but not identical construct [7]. Reliance, which has been explored by Lee and See [7], Miller et al. [8],
and Fu et al. [3], relates to behavior rather than to the cognitive or theoretical construct of trust; to this
end it is necessary to measure, through driver behavior, the actions that demonstrate drivers’ level of
trust in the system–and whether that trust and reliance is well founded, or is itself erroneous. This model
of trust and reliance is likely constructed from a combination of trait factors and experience, as shown
in Figure 3.

[Figure 3 diagram: trait propensity to trust, prior experience, and explicit training feed into the mental
model of the system, which in turn drives self-reported trust, behavioral reliance, and the description
of the system model given when queried.]
Figure 3: Trait propensity to trust, prior experience, and training influence the mental model one holds
of a system. This mental model determines self-reported trust as well as reliance behavior, and
underpins one’s description of the system when queried.
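
    The Figure 3 model can be read as a regression. As a minimal sketch, assuming per-participant
columns for the antecedents and a summary trust score (all column names hypothetical), it might be
fit as follows; an analogous model could predict behavioral reliance.

    # Illustrative only: the Figure 3 model as an OLS regression.
    # Column names are hypothetical placeholders.
    import statsmodels.formula.api as smf

    def fit_trust_model(df):
        """Regress self-reported trust on trait propensity, prior
        experience, and training condition (categorical)."""
        return smf.ols(
            "self_reported_trust ~ trait_propensity + prior_experience"
            " + C(training_condition)",
            data=df,
        ).fit()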

    The study of mental models of technology, and of how these mental models influence interaction
behavior, is even more limited than the understanding of trust and reliance. While there has been research
conducted on how training influences the development of mental models and their accuracy [13], and
techniques have been investigated for studying correspondence between a user’s mental model and that
held by the designers of the technology [11,12], the area of mental models research deserves further
attention. While distortion is inevitable in any research of this kind, as it may be impossible to fully
articulate one’s mental model, and comparing a user’s and a designer’s models therefore injects noise
on both sides, it is still likely a valuable endeavor to investigate correspondence between mental
models, and to assess the effectiveness of experiential training, in addition to other types of instruction,
in increasing congruence of users’ mental models with the reality of the system.
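
    One simple way such correspondence could be scored is sketched below, under the assumption
that the questionnaire probes discrete true/false properties of the ADAS; the properties shown are
invented examples, not the proposed instrument.

    # Illustrative only: scoring agreement between a participant's
    # stated beliefs about the ADAS and the designers' ground truth.
    DESIGNER_MODEL = {
        "detects_pedestrians_in_fog": False,
        "brakes_for_stopped_vehicles": True,
        "maintains_lane_on_ice": False,
    }

    def correspondence(beliefs: dict) -> float:
        """Fraction of probed properties the participant answers in
        agreement with the designers' model of the system."""
        probed = [k for k in DESIGNER_MODEL if k in beliefs]
        if not probed:
            return 0.0
        hits = sum(beliefs[k] == DESIGNER_MODEL[k] for k in probed)
        return hits / len(probed)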
    Transfer of learning from the training phase to the testing phase will be assessed through
measurement of driving behavior in the testing phase, where participants will be presented with both
familiar and novel routine and extreme situations. It is hypothesized that participants who only
experienced the routine situations in the training stage will have greater difficulty in the extreme
situations presented in the second stage, compared with participants who received training including
extreme situations in the training phase. Extreme situation training may improve performance in novel
routine situations, as well as in extreme situations, as a result of greater experience in interacting with
the ADAS and driving in challenging conditions.

3. Conclusions

    This proposed course of study, exploring the value of experiential training for drivers encountering
novel vehicle systems, should expand the understanding of training transfer in ways generalizable to
other areas of human-technology interaction. The development of updated measures of trust and of
mechanisms for assessing mental models, necessary for this endeavor, will provide further benefits to
researchers as well. Ultimately, this program of research may yield safety benefits to the driving public
through the deployment of experiential training, provided the benefits prove as material as forecast. If
this study of training using VR to provide exposure to extreme and edge cases in driving yields the
anticipated results, and this type of training can be made widely available, driving safety may be
substantially improved by giving drivers a way to learn their vehicle’s capabilities and limits, as well as
to understand the envelope of their own abilities and limitations, and thus to behave accordingly.
Through this training they can better calibrate their trust in technological driving aids and in themselves,
and as a result drive more safely, reducing the terrible toll of road incidents.

4. Acknowledgements

   I would like to thank Ben D. Sawyer, James Intriligator, and Nathan Ward for their contributions to
developing this research program.
5. References

1. Gianclaudio Casutt, Nathan Theill, Mike Martin, Martin Keller, and Lutz Jäncke. 2014. The drive-
    wise project: driving simulator training increases real driving performance in healthy older drivers.
    Frontiers in Aging Neuroscience 6. Retrieved February 8, 2022 from
    https://www.frontiersin.org/article/10.3389/fnagi.2014.00085
2. Charles M. Farmer, Adrian K. Lund, Rebecca E. Trempel, and Elisa R. Braver. 1997. Fatal crashes
    of passenger vehicles before and after adding antilock braking systems. Accident Analysis &
    Prevention 29, 6: 745–757. https://doi.org/10/d483x2
3. Ernestine Fu, Srinath Sibi, David Miller, Mishel Johns, Brian Mok, Martin Fischer, and David
    Sirkin. 2019. The Car That Cried Wolf: Driver Responses to Missing, Perfectly Performing, and
    Oversensitive Collision Avoidance Systems. In 2019 IEEE Intelligent Vehicles Symposium (IV),
    1830–1836. https://doi.org/10.1109/IVS.2019.8814190
4. P. A. Hancock, Illah Nourbakhsh, and Jack Stewart. 2019. On the future of transportation in an era
    of automated and autonomous vehicles. Proceedings of the National Academy of Sciences 116, 16:
    7684–7691. https://doi.org/10.1073/pnas.1805770115
5. P.A. Hancock. 2021. Avoiding adverse autonomous agent actions. Human–Computer Interaction
    0, 0: 1–26. https://doi.org/10.1080/07370024.2021.1970556
6. Chihyung Jeon. 2015. The Virtual Flier: The Link Trainer, Flight Simulation, and Pilot Identity.
    Technology and Culture 56, 1: 28–53. https://doi.org/10.1353/tech.2015.0017
7. John D. Lee and Katrina A. See. 2004. Trust in Automation: Designing for Appropriate Reliance.
    Human Factors: The Journal of the Human Factors and Ergonomics Society 46, 1: 50–80.
    https://doi.org/10/dr6jf9
8. David Miller, Mishel Johns, Brian Mok, Nikhil Gowda, David Sirkin, Key Lee, and Wendy Ju.
    2016. Behavioral Measurement of Trust in Automation: The Trust Fall. Proceedings of the Human
    Factors and Ergonomics Society Annual Meeting 60, 1: 1849–1853. https://doi.org/10/gfkv84
9. Neville Moray. 1990. Designing for transportation safety in the light of perception, attention, and
    mental models. Ergonomics 33, 10–11: 1201–1213. https://doi.org/10/fmssk2
10. Donald A. Norman. 1983. Some observations on mental models. Mental models 7, 112: 7–14.
11. Raja Parasuraman and Victor Riley. 1997. Humans and Automation: Use, Misuse, Disuse, Abuse.
    Human Factors: The Journal of the Human Factors and Ergonomics Society 39, 2: 230–253.
    https://doi.org/10.1518/001872097778543886
12. Leonid Rozenblit and Frank Keil. 2002. The misunderstood limits of folk science: an illusion of
    explanatory depth. Cognitive Science 26, 5: 521–562. https://doi.org/10.1207/s15516709cog2605_1
13. N B Sarter, D D Woods, and C E Billings. 1997. Automation Surprises. In Handbook of Human
    Factors and Ergonomics, Gavriel Salvendy (ed.). 1926–1943.
14. Ben D. Sawyer, Dave B. Miller, Matthew Canham, and Waldemar Karwowski. 2021. Human
    Factors and Ergonomics in Design of A3: Automation, Autonomy, and Artificial Intelligence. In
    Handbook of Human Factors and Ergonomics (5th Edition). John Wiley & Sons, Ltd, 1385–1416.
    https://doi.org/10.1002/9781119636113.ch52
15. Nancy Staggers and A.F. Norcio. 1993. Mental models: concepts for human-computer interaction
    research. International Journal of Man-Machine Studies 38, 4: 587–605.
    https://doi.org/10.1006/imms.1993.1028