<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Attentive User Interfaces: Five Strategies to Enhance Take-Over Quality in Automated Driving</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Patrick Ebel</string-name>
          <email>ebel@uni-leipzig.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>ScaDS.AI, Leipzig University</institution>
          ,
          <addr-line>Humboldtstraße 25, 04105 Leipzig</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>As the automotive world moves toward higher levels of driving automation, Level 3 automated driving represents a critical juncture. In Level 3 driving, vehicles can drive themselves under limited conditions, but drivers are expected to be ready to take over when the system requests it. Assisting the driver in maintaining an appropriate level of Situation Awareness (SA) in such contexts becomes a critical task. This position paper explores the potential of Attentive User Interfaces (AUIs) powered by generative Artificial Intelligence (AI) to address this need. Rather than relying on overt notifications, we argue that AUIs based on novel AI technologies such as large language models or diffusion models can be used to improve SA in an unconscious and subtle way without negative effects on the driver's overall workload. Accordingly, we propose five strategies for how generative AI can be used to improve the quality of takeovers and, ultimately, road safety.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        The advent of automated driving is changing the transportation landscape. The first cars with Level 3 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] driving automation features are on public roads [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and many more will follow. While the purely technical components are becoming more sophisticated, critical issues regarding the interaction between humans and automation have yet to be resolved. Take-Over Requests (TORs) emerge as a key component in this evolution. In Level 3 automated driving, the automated driving features can drive the vehicle under limited conditions, and drivers are relieved of the constant obligation to monitor the driving environment [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. They can play with their mobile phones, interact with in-vehicle infotainment systems, or focus on conversations with their passengers. In other words, drivers can become disengaged from the driving task and the driving environment even though they must take over control once the car requests it. This presents a unique challenge: when a TOR is initiated, a disengaged driver is thrust back into a control role, often under conditions that require rapid comprehension and action.
      </p>
      <p>© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        Current research shows that engagement in non-driving activities, and thus loss of awareness of the driving environment, can reduce the quality of driver takeovers [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Therefore, it is crucial to redirect the driver’s attention to the road in a timely manner. While the question of how to assist drivers in maintaining or restoring sufficient SA has not been definitively answered [5], research suggests that sudden warnings aimed at redirecting the driver’s attention often have the unintended side effect of increasing workload [6]. This increase in workload and mental stress can, in turn, lead to a decrease in take-over performance [7]. A seamless transition from automated to manual driving is therefore essential.
      </p>
      <p>But how can the transition from a state in which the driver can be fully disengaged from the driving task to a state in which the driver must be fully aware of the driving situation to handle a potentially dangerous driving task be made subtly and smoothly? DeGuzman et al. [8] point out that AUIs, which have been shown to effectively manage SA in manual driving, can potentially also be beneficial for automated driving. Other recent work, for example by Wintersberger et al. [9], underlines the potential of AUIs to improve take-over quality. In this position paper, we go a step further and argue that in particular the combination of AUIs and generative AI technologies such as Large Language Models (LLMs) and Diffusion Models (e.g., Stable Diffusion [10] or DALL-E 3 [11]) can help to subtly bring the driver back into the loop or even subconsciously maintain the required level of SA. When fine-tuned with the rich sensor data available in today’s cars, these models can generate a comprehensive picture of the driving scenario and select guidance strategies tailored to the driving situation and the driver’s state. Not only can they organically guide the driver back to control when the situation requires immediate control, they can also subtly enhance the driver’s SA in situations of increasing uncertainty, where it is not entirely clear whether a take-over will be issued. This prepares the driver without appearing overly cautious.</p>
      <p>In the following, we present five strategies that employ generative AI, in particular LLMs and Diffusion Models, to serve as inspiration for future research.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work</title>
      <p>In the following, we give a brief overview of current research related to TORs in general and the role that AUIs can play in improving them.</p>
      <sec id="sec-3-1">
        <title>2.1. Take-Over Requests in Automated Driving</title>
        <p>
          In Level 3 automated driving, the automated driving functions can drive the vehicle under limited conditions [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. In contrast to manual and assisted driving (L0-L2), the driver is relieved of the constant need to monitor the driving environment. However, the driver is required to be prepared to regain control in emergency situations, such as a system failure or when the upcoming driving situation is outside the operational design domain of the system [12]. In these situations, the automated driving system triggers a TOR notifying the driver to take over the driving task [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. For such transfers of control back to the driver, two scenarios need to be distinguished: “scheduled” TORs in situations in which the system is aware of an upcoming TOR (e.g., due to a highway exit or a known road closure) and “imminent” TORs in sudden emergency situations (e.g., a broken-down car blocking the road) [9]. While the latter is considered the most critical problem of Level 3 driving, it is unclear how often emergency TORs are triggered [13, 14], and it is assumed that as technology evolves (e.g., sensor range, Vehicle-to-Everything (V2X) communication), their frequency may decrease while the frequency of scheduled TORs increases. Accordingly, it is important that drivers are able to regain control and appropriate awareness of the driving situation such that they can handle the upcoming driving task safely. Related work shows that the reaction time to TORs is an indicator of safety and TOR quality [13, 15]. Studies on TOR quality further show that reaction time and driving performance are influenced by the driving context (e.g., road curvature [16] or traffic [17]), driver behavior (e.g., engagement in secondary tasks [
          <xref ref-type="bibr" rid="ref3">3, 18</xref>
          ]), driver state (e.g., fatigue [19]), and TOR modality (e.g., visual, vibrotactile, or auditory [20]).
        </p>
        <p>These findings highlight that, for safe takeovers, a holistic understanding of the current driving situation and the driver's state is needed to trigger context-dependent TORs.</p>
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Leveraging Attentive User Interfaces to Improve Take-Over Requests</title>
        <p>Attentive User Interfaces (AUIs) are “computing interfaces that are sensitive to the user’s attention” [21]. These interfaces adapt the type and amount of information displayed based on the attentional state of the user and/or the attentional demands of the environment [8]. For example, given the driver’s current high stress level and a complex driving situation, an incoming call that is predicted to be of low urgency may not be put through immediately, but rather suppressed until the driving situation allows it. Thus, AUIs can not only adjust the timing (e.g., as proposed by Wintersberger et al. [22]) or the visual representation, but also consider the costs and benefits of conflicting actions by taking into account the driver’s state and the driving situation [23].</p>
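        <p>As a minimal sketch of such a cost-benefit trade-off (the field names, scales, and equal weighting are our own illustrative assumptions, not an existing automotive API), a notification gate along these lines could be expressed as:</p>

```python
from dataclasses import dataclass

@dataclass
class DriverContext:
    workload: float          # 0 (relaxed) to 1 (overloaded), e.g., estimated from a cabin camera
    scene_complexity: float  # 0 to 1, e.g., derived from traffic density and road curvature

def defer_notification(urgency: float, ctx: DriverContext) -> bool:
    """Hold a notification back when the cost of interrupting outweighs its benefit.

    The interruption cost grows with driver workload and scene complexity;
    the benefit is the notification's estimated urgency.
    """
    interruption_cost = 0.5 * ctx.workload + 0.5 * ctx.scene_complexity
    return interruption_cost > urgency
```

        <p>Under this toy model, a low-urgency call during a stressful, complex situation is suppressed, while the same call is put through once the driver is relaxed and the scene is simple.</p>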
        <p>DeGuzman et al. [8] suggest that AUIs, which have been shown to effectively manage SA in manual driving, may also be beneficial in automated driving. The authors identify several strategies for adapting UIs to either optimize attentional demand or to redirect the driver’s attention to the road. However, they argue that little research exists on the effect of AUIs in automated driving. One of the few studies that show the potential of AUIs for automated driving is presented by Wintersberger et al. [9], who argue that AUIs can improve take-over behavior. Their results show that AUIs improve driving performance, reduce the stress induced on drivers, and reduce the variance in the response times of scheduled TORs.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. How Generative AI can Enhance TOR Quality</title>
      <p>To effectively tailor interventions to the driving situation and the driver's state, an intelligent TOR agent needs access to the driving automation features, the car's sensors (e.g., exterior cameras, radar sensors, and cabin cameras), and the in-vehicle Human-Machine Interfaces (HMIs) (e.g., the infotainment system or the head-up display). This information is already available in some modern production cars, as shown in the works by Ebel et al. [24, 25]. To personalize interventions, it is also necessary to access personal driver information such as calendar entries. We assume that this information becomes available by connecting the smartphone to the In-Vehicle Information System (IVIS). Below we present five ideas on how TOR assistants can benefit from generative AI.</p>
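      <p>To illustrate how these sources could feed such an agent, the following sketch flattens a hypothetical context snapshot into text an LLM-based TOR assistant could consume; the schema and field names are our own assumptions, not a production vehicle API:</p>

```python
from dataclasses import dataclass

@dataclass
class VehicleContext:
    # Illustrative fields only; not a real automotive interface.
    detected_objects: list     # from exterior cameras and radar
    driver_activity: str       # from the cabin camera, e.g., "reading on a tablet"
    next_calendar_entry: str   # from the paired smartphone
    time_to_handover_s: float  # seconds until the automation must hand over

def build_prompt_context(ctx: VehicleContext) -> str:
    """Serialize the multimodal context into a textual prompt fragment for an LLM."""
    return (
        f"Objects ahead: {', '.join(ctx.detected_objects)}. "
        f"Driver activity: {ctx.driver_activity}. "
        f"Next appointment: {ctx.next_calendar_entry}. "
        f"Manual driving required in {ctx.time_to_handover_s:.0f} s."
    )
```

      <p>Downstream, a single text representation like this lets one model select among the five strategies below without hand-crafting a rule per sensor.</p>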
      <p>Interactive Scenarios Dynamic visual representations of scheduled TORs can improve the usability of TOR assistants [26]. Whereas current research focuses on relatively simple visualizations that primarily convey the timing or priority of the TOR, we propose to use generative models such as DALL-E 3 (https://openai.com/dall-e-3) to generate dynamic scenarios that represent the upcoming driving situation. These scenarios can be displayed on the center stack screen as shown in Figure 1, on the head-up display, or on the dashboard. For example, when approaching a highway exit, an image or video sequence of the exit can be displayed, prompting the driver to make a decision. While these scenarios can be used in combination with a direct prompt, they can also be used to subtly prime the driver for an upcoming TOR by displaying dynamic content on the screen in the periphery of the driver's focus.</p>
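      <p>A sketch of how such a scenario preview might be requested from a text-to-image model; the prompt template and the situation schema are purely illustrative assumptions:</p>

```python
def scenario_prompt(situation: dict) -> str:
    """Build a text-to-image prompt for a preview of the upcoming take-over scenario."""
    return (
        f"Driver-perspective view of {situation['event']}, "
        f"{situation['weather']}, {situation['time_of_day']}, "
        "clear and uncluttered, suitable for an in-car display"
    )

# The resulting string would be sent to an image model such as DALL-E 3 or
# Stable Diffusion, and the returned image shown on the center stack screen.
preview = scenario_prompt({
    "event": "a highway exit in 2 km with a construction zone",
    "weather": "light rain",
    "time_of_day": "dusk",
})
```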
      <p>Conversational Primers Research suggests that conversational voice assistants and priming techniques can help to build appropriate SA and improve TOR quality [27, 28, 16]. We argue that LLMs can further increase this potential, as the system can engage the driver in natural but brief situation-dependent conversations about the upcoming route or driving scenario. For example, a question such as “Looks like we’re getting off the highway in 10 minutes. Have you driven this route before?” not only informs the driver of the upcoming TOR, but also indirectly prompts the driver to look at the road, thereby improving SA. This strategy can also be useful in situations where the system is uncertain whether a TOR will be triggered in the near future, as the driver may not even realize that the goal of the conversation was to redirect their attention to the road. This way, drivers won’t be annoyed by false positives because they won’t recognize them as such.</p>
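      <p>One way to operationalize this is to adapt the LLM's system instruction to the system's confidence that a TOR will actually occur; below a threshold, the take-over is never mentioned, so a false positive cannot be recognized as such. The threshold value and the phrasing are illustrative assumptions:</p>

```python
def primer_instruction(minutes_to_tor: int, route_feature: str, confidence: float) -> str:
    """Compose the system instruction for an LLM that opens a subtle primer conversation."""
    base = (
        "You are an in-car voice assistant. Start a short, natural conversation "
        f"that makes the driver glance at the road near {route_feature}."
    )
    if confidence > 0.8:  # TOR is near-certain: an explicit but casual mention is fine
        return base + f" Casually mention that manual driving starts in about {minutes_to_tor} minutes."
    # Uncertain: prime the driver without revealing the possible take-over
    return base + " Do not mention a possible take-over at all."
```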
      <p>Context-Aware and Personalized TORs LLMs can provide concise, contextual descriptions or advice based on real-time sensor data. This information can be used, for example, to generate situation-based TORs: “We are approaching a construction zone on the right lane with a speed limit of 50 km/h, please take control”. While current research suggests that context-aware warnings can lead to safer takeovers [29], these approaches can only detect predefined situations and are therefore limited in scope. By combining LLMs with the vast amount of data produced by the car's sensors and object detection algorithms, TORs are no longer limited to these predefined degrees of freedom. Based on data from the cabin camera, TORs can be tailored not only to the driving situation, but also to the driver's state and current activity. The intelligent TOR assistant could tell the driver to put away the phone or tablet, arguing that there will be enough time after the construction zone to finish the current activity.</p>
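      <p>As a minimal illustration of this idea (a template stands in for the LLM's phrasing; the parameters and activity labels are hypothetical), a situation-based TOR could be assembled from live detections and the observed driver activity:</p>

```python
def contextual_tor(detections: list, speed_limit_kmh: int, driver_activity: str) -> str:
    """Render a situation-specific TOR message from object detections."""
    hazard = detections[0] if detections else "the upcoming situation"
    msg = (f"We are approaching {hazard} with a speed limit of "
           f"{speed_limit_kmh} km/h, please take control.")
    if driver_activity == "watching a video":
        # Personalization based on the cabin camera: promise time to resume later
        msg += " There will be enough time to finish your video after the construction zone."
    return msg
```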
      <p>Subtle Nudges Nudging and persuasion can influence drivers to drive more economically [30] and more safely [31]. We argue that generative AI technology can be used to generate effective persuasion strategies for TORs. Based on the driver's past behavior and responses, the generative AI can create tailored priming interventions or use information gathered from past conversations to persuade the driver to be more aware or to take over earlier. For example, the assistant might mention the driver's daughter's soccer game to subtly appeal to the driver's sense of responsibility not to get too distracted.</p>
      <p>Ambient Scene Generation Ambient displays and audio cues are an effective measure to improve TOR quality [32, 16]. While current approaches are more or less explicit, we propose that, based on the current or upcoming driving situation, an intelligent agent can generate situation-specific ambient scenes. For example, it could subtly change the tone of the infotainment system, or generate soft ambient sounds that resemble the road or traffic to subconsciously focus the driver's attention on the driving environment. The same applies to ambient lighting. The assistant could gradually synchronize the car's interior lighting with the outside environment and traffic scene. Dynamic lighting patterns based on passing cars or upcoming situations can be generated and visualized using ambient light technology. A slight change in brightness or hue can alert the driver's senses without the driver being aware of the change.</p>
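      <p>A toy mapping from the outside scene to interior ambient light parameters can make this concrete; the value ranges, the hue scale, and the linear mapping are our own illustrative assumptions:</p>

```python
def ambient_light(outside_brightness: float, traffic_density: float):
    """Map the driving scene to interior ambient light (brightness 0-1, hue in degrees).

    Brightness softly follows the environment; the hue shifts from a calm blue
    (200 degrees) toward an alerting amber (30 degrees) as traffic density grows.
    """
    brightness = 0.3 + 0.5 * outside_brightness
    hue_deg = 200.0 - 170.0 * min(traffic_density, 1.0)
    return round(brightness, 2), round(hue_deg, 1)
```

      <p>Because the change is continuous rather than a discrete alert, the lighting can drift toward the warning end of the scale without crossing the driver's threshold of conscious notice.</p>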
      <p>[Figure 1: Overview of the proposed system. Cabin sensors, vehicle sensors, map data, V2X communication, and the driver's digital footprint and interaction behavior feed driver state estimation and driving scene understanding. These inform a TOR generator, a digital persona, a conversation agent, and a scenario generator, whose output is rendered via the IVIS displays, ambient light, the audio system, and tactile interfaces. Some elements were generated using Adobe Illustrator's “Text to Vector Graphic” feature: https://www.adobe.com/products/illustrator/text-to-vector-graphic.html]</p>
    </sec>
    <sec id="sec-5">
      <title>4. Proposed System Architecture</title>
    </sec>
    <sec id="sec-6">
      <title>5. Discussion and Conclusion</title>
      <p>We argue that the key advantage of using generative AI for scheduled TORs lies in subtlety and persuasion. The interactions should be smooth, non-intrusive, and feel natural, so that the driver's SA is maintained without the driver actively realizing that they are being assisted. The goal is not to make the driver dependent on the Intelligent TOR Assistant, but to use the new opportunities that generative AI methods provide to enhance the collaboration between the driver and the automated driving system. While subtle cues can help drivers maintain an appropriate level of SA, LLMs can also be used to generate eloquent and meaningful prompts that persuade the driver to be more attentive. Incorporating personal and situational information could not only improve in-situ TOR quality, but also change driver behavior in the long run.</p>
      <p>For all of the strategies presented in this position paper, it is important to emphasize that TORs are safety-critical. Choosing an inappropriate modality or providing false or inaccurate information can have fatal consequences. This needs to be considered in future work, especially in light of current vulnerabilities of generative models such as hallucination, bias, and lack of explainability. In addition, the question of how to ensure that approaches using generative AI methods comply with regulations needs to be answered. Due to their non-deterministic nature, they can't be evaluated against standardized datasets to assess whether they are “good enough” to be used for safety-critical applications (not that the question of what is “good enough” has been conclusively answered for automated driving in general).</p>
      <p>While some of the strategies above may seem dystopian at the time of this writing, a digital assistant that is intimately aware of user preferences and behaviors and can carry on a conversation as naturally as a human counterpart may be technically possible and socially acceptable in just a few years. However, research suggests that conversational agents that seem too human don't necessarily drive adoption. In fact, they may deter people from using the technology [33]. Thus, implementing strategies such as the Subtle Nudges strategy is a challenging endeavor, and more research is needed to enable systems such as the one presented in this position paper.
</p>
      <p>[5] P. Marti, C. Jallais, A. Koustanaï, A. Guillaume, F. Mars, Impact of the driver's visual engagement on situation awareness and takeover quality, Transportation Research Part F: Traffic Psychology and Behaviour 87 (2022) 391–402. doi:10.1016/j.trf.2022.04.018.
[6] S. Ma, W. Zhang, Z. Yang, C. Kang, C. Wu, C. Chai, J. Shi, Y. Zeng, H. Li, Take over Gradually in Conditional Automated Driving: The Effect of Two-stage Warning Systems on Situation Awareness, Driving Stress, Takeover Performance, and Acceptance, International Journal of Human–Computer Interaction 37 (2021) 352–362. doi:10.1080/10447318.2020.1860514.
[7] S. Agrawal, S. Peeta, Evaluating the impacts of situational awareness and mental stress on takeover performance under conditional automation, Transportation Research Part F: Traffic Psychology and Behaviour 83 (2021) 210–225. doi:10.1016/j.trf.2021.10.002.
[8] C. A. DeGuzman, D. Kanaan, B. Donmez, Attentive User Interfaces: Adaptive Interfaces that Monitor and Manage Driver Attention, in: A. Riener, M. Jeon, I. Alvarez (Eds.), User Experience Design in the Era of Automated Driving, volume 980, Springer International Publishing, Cham, 2022, pp. 305–334. doi:10.1007/978-3-030-77726-5_12.
[9] P. Wintersberger, C. Schartmüller, A. Riener, Attentive User Interfaces to Improve Multitasking and Take-Over Performance in Automated Driving: The Auto-Net of Things, International Journal of Mobile Human Computer Interaction 11 (2019) 40–58. doi:10.4018/IJMHCI.2019070103.
[10] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, B. Ommer, High-Resolution Image Synthesis with Latent Diffusion Models, in: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, New Orleans, LA, USA, 2022, pp. 10674–10685. doi:10.1109/CVPR52688.2022.01042.
[11] J. Betker, G. Goh, L. Jing, T. Brooks, J. Wang, L. Li, L. Ouyang, J. Zhuang, J. Lee, Y. Guo, W. Manassra, P. Dhariwal, C. Chu, Y. Jiao, A. Ramesh, Improving image generation with better captions, 2023.
[12] W. Morales-Alvarez, O. Sipele, R. Léberon, H. H. Tadjine, C. Olaverri-Monreal, Automated Driving: A Literature Review of the Take over Request in Conditional Automation, Electronics 9 (2020) 2087. doi:10.3390/electronics9122087.
[13] P. Wintersberger, P. Green, A. Riener, Am I Driving or Are You or Are We Both? A Taxonomy for Handover and Handback in Automated Driving, in: Proceedings of the 9th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design: Driving Assessment 2017, University of Iowa, Manchester Village, Vermont, USA, 2017, pp. 333–339. doi:10.17077/drivingassessment.1655.
[14] A. Eriksson, N. A. Stanton, Takeover Time in Highly Automated Vehicles: Noncritical Transitions to and From Manual Control, Human Factors: The Journal of the Human Factors and Ergonomics Society 59 (2017) 689–705. doi:10.1177/0018720816685832.
[15] R. McCall, F. McGee, A. Mirnig, A. Meschtscherjakov, N. Louveton, T. Engel, M. Tscheligi, A taxonomy of autonomous vehicle handover situations, Transportation Research Part A: Policy and Practice 124 (2019) 507–522. doi:10.1016/j.tra.2018.05.005.
[16] S. Sadeghian Borojeni, L. Weber, W. Heuten, S. Boll, From reading to driving: Priming mobile users for take-over situations in highly automated driving, in: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, ACM, Barcelona, Spain, 2018, pp. 1–12. doi:10.1145/3229434.3229464.
[17] J. Radlmayr, C. Gold, L. Lorenz, M. Farid, K. Bengler, How Traffic Situations and Non-Driving Related Tasks Affect the Take-Over Quality in Highly Automated Driving, Proceedings of the Human Factors and Ergonomics Society Annual Meeting 58 (2014) 2063–2067. doi:10.1177/1541931214581434.
[18] C. Gold, D. Damböck, L. Lorenz, K. Bengler, “Take over!” How long does it take to get the driver back into the loop?, Proceedings of the Human Factors and Ergonomics Society Annual Meeting 57 (2013) 1938–1942. doi:10.1177/1541931213571433.
[19] A. Feldhütter, A. Ruhl, A. Feierle, K. Bengler, The Effect of Fatigue on Take-over Performance in Urgent Situations in Conditionally Automated Driving, in: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), IEEE, Auckland, New Zealand, 2019, pp. 1889–1894. doi:10.1109/ITSC.2019.8917183.
[20] S. H. Yoon, Y. W. Kim, Y. G. Ji, The effects of takeover request modalities on highly automated car control transitions, Accident Analysis &amp; Prevention 123 (2019) 150–158. doi:10.1016/j.aap.2018.11.018.
[21] R. Vertegaal, Attentive User Interfaces, Communications of the ACM 46 (2003) 30–33. doi:10.1145/636772.636794.
[22] P. Wintersberger, A. Riener, C. Schartmüller, A.-K. Frison, K. Weigl, Let Me Finish before I Take Over: Towards Attention Aware Device Integration in Highly Automated Vehicles, in: Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, ACM, Toronto, ON, Canada, 2018, pp. 53–65. doi:10.1145/3239060.3239085.
[23] M. Braun, F. Weber, F. Alt, Affective Automotive User Interfaces–Reviewing the State of Driver Affect Research and Emotion Regulation in the Car, ACM Computing Surveys 54 (2022) 1–26. doi:10.1145/3460938.
[24] P. Ebel, C. Lingenfelder, A. Vogelsang, On the forces of driver distraction: Explainable predictions for the visual demand of in-vehicle touchscreen interactions, Accident Analysis &amp; Prevention 183 (2023) 106956. doi:10.1016/j.aap.2023.106956.
[25] P. Ebel, K. J. Gülle, C. Lingenfelder, A. Vogelsang, Exploring Millions of User Interactions with ICEBOAT: Big Data Analytics for Automotive User Interfaces, in: AutomotiveUI ’23: 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, ACM, Ingolstadt, Germany, 2023. doi:10.48550/arXiv.2307.06089.
[26] K. Holländer, B. Pfleging, Preparing Drivers for Planned Control Transitions in Automated Cars, in: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia, ACM, Cairo, Egypt, 2018, pp. 83–92. doi:10.1145/3282894.3282928.
[27] K. Mahajan, D. R. Large, G. Burnett, N. R. Velaga, Exploring the benefits of conversing with a digital voice assistant during automated driving: A parametric duration model of takeover time, Transportation Research Part F: Traffic Psychology and Behaviour 80 (2021) 104–126. doi:10.1016/j.trf.2021.03.012.
[28] X. Bai, J. Feng, Unlocking Safer Driving: How Answering Questions Help Takeovers in Partially Automated Driving, Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2023) 21695067231192202. doi:10.1177/21695067231192202.
[29] E. Pakdamanian, E. Hu, S. Sheng, S. Kraus, S. Heo, L. Feng, Enjoy the Ride Consciously with CAWA: Context-Aware Advisory Warnings for Automated Driving, in: Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, ACM, Seoul, Republic of Korea, 2022, pp. 75–85. doi:10.1145/3543174.3546835.
[30] A. Meschtscherjakov, D. Wilfinger, T. Scherndl, M. Tscheligi, Acceptance of future persuasive in-car interfaces towards a more economic driving behaviour, in: Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications, ACM, Essen, Germany, 2009, pp. 81–88. doi:10.1145/1620509.1620526.
[31] V. Choudhary, M. Shunko, S. Netessine, S. Koo, Nudging Drivers to Safety: Evidence from a Field Experiment, Management Science 68 (2022) 4196–4214. doi:10.1287/mnsc.2021.4063.
[32] S. Sadeghian Borojeni, L. Chuang, W. Heuten, S. Boll, Assisting Drivers with Ambient Take-Over Requests in Highly Automated Driving, in: Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, ACM, Ann Arbor, MI, USA, 2016, pp. 237–244. doi:10.1145/3003715.3005409.
[33] T. Fernandes, E. Oliveira, Understanding consumers' acceptance of automated technologies in service encounters: Drivers of digital voice assistants adoption, Journal of Business Research 122 (2021) 180–191. doi:10.1016/j.jbusres.2020.08.058.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <article-title>SAE J3016: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, Standard, Society of Automotive Engineers (SAE)</article-title>
          , Warrendale,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Mercedes-Benz</surname>
          </string-name>
          ,
          <article-title>Conditionally automated driving: First internationally valid system approval</article-title>
          , https://group.mercedes-benz.com/innovation/product-innovation/autonomousdriving/system-approval-for-conditionally-automated-driving.html,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. D.</given-names>
            <surname>McDonald</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Alambeigi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Engström</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Markkula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Vogelpohl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dunne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Yuma</surname>
          </string-name>
          ,
          <article-title>Toward Computational Simulations of Behavior During Automated Driving Takeovers: A Review of the Empirical and Modeling Literatures</article-title>
          ,
          <source>Human Factors: The Journal of the Human Factors and Ergonomics Society</source>
          <volume>61</volume>
          (
          <year>2019</year>
          )
          <fpage>642</fpage>
          -
          <lpage>688</lpage>
          . doi:10.1177/0018720819829572.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Vogelpohl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kühn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hummel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gehlert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vollrath</surname>
          </string-name>
          ,
          <article-title>Transitioning to manual driving requires additional time after automation deactivation</article-title>
          ,
          <source>Transportation Research Part F: Traffic Psychology and Behaviour</source>
          <volume>55</volume>
          (
          <year>2018</year>
          )
          <fpage>464</fpage>
          -
          <lpage>482</lpage>
          . doi:10.1016/j.trf.2018.03.019.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>