<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Generative AI and Attentive User Interfaces: Five Strategies to Enhance Take-Over Quality in Automated Driving</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Patrick</forename><surname>Ebel</surname></persName>
							<email>ebel@uni-leipzig.de</email>
							<affiliation key="aff0">
								<orgName type="department">ScaDS.AI</orgName>
								<orgName type="institution">Leipzig University</orgName>
								<address>
									<addrLine>Humboldtstraße 25</addrLine>
									<postCode>04105</postCode>
									<settlement>Leipzig</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Generative AI and Attentive User Interfaces: Five Strategies to Enhance Take-Over Quality in Automated Driving</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">8CA862839F78D8CF59AD2757CDEAD663</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:06+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Attentive User Interfaces</term>
					<term>Generative AI</term>
					<term>LLMs</term>
					<term>Diffusion Models</term>
					<term>Human-Computer Interaction</term>
					<term>Automotive User Interfaces</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>As the automotive world moves toward higher levels of driving automation, Level 3 automated driving represents a critical juncture. In Level 3 driving, vehicles can drive themselves under limited conditions, but drivers are expected to be ready to take over when the system requests. Assisting the driver in maintaining an appropriate level of Situation Awareness (SA) in such contexts becomes a critical task. This position paper explores the potential of Attentive User Interfaces (AUIs) powered by generative Artificial Intelligence (AI) to address this need. Rather than relying on overt notifications, we argue that AUIs based on novel AI technologies such as large language models or diffusion models can improve SA in a subtle, even subconscious way without negative effects on drivers' overall workload. Accordingly, we propose five strategies for how generative AI can be used to improve the quality of take-overs and, ultimately, road safety.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The advent of automated driving is changing the transportation landscape. The first cars with Level 3 <ref type="bibr" target="#b0">[1]</ref> driving automation features are on public roads <ref type="bibr" target="#b1">[2]</ref> and many more will follow. While the purely technical components are becoming more sophisticated, critical issues regarding the interaction between humans and automation have yet to be resolved. Take-Over Requests (TORs) emerge as a key component in this evolution. In Level 3 automated driving, the automated driving features can drive the vehicle under limited conditions, and drivers are relieved of the constant obligation to monitor the driving environment <ref type="bibr" target="#b0">[1]</ref>. They can play with their mobile phones, interact with in-vehicle infotainment systems, or focus on conversations with their passengers. In other words, drivers can become disengaged from the driving task and the driving environment, even though they must take over control once the car requests it. This presents a unique challenge: when a TOR is initiated, a disengaged driver is thrust back into a control role, often under conditions that require rapid comprehension and action.</p><p>Current research shows that engagement in non-driving activities, and thus loss of awareness of the driving environment, can reduce the quality of driver takeovers <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>. Therefore, it is crucial to redirect the driver's attention to the road in a timely manner. While the question of how to assist drivers in maintaining or restoring sufficient SA has not been definitively answered <ref type="bibr" target="#b4">[5]</ref>, research suggests that sudden warnings aimed at redirecting the driver's attention often have the unintended side effect of increasing workload <ref type="bibr" target="#b5">[6]</ref>. 
This increase in workload and mental stress can, in turn, lead to a decrease in take-over performance <ref type="bibr" target="#b6">[7]</ref>. A seamless transition from automated to manual driving is therefore essential.</p><p>But how can the transition from a state in which the driver can be fully disengaged from the driving task to a state in which the driver must be fully aware of the driving situation to handle a potentially dangerous driving task be made subtly and smoothly? DeGuzman et al. <ref type="bibr" target="#b7">[8]</ref> point out that AUIs, which have been shown to effectively manage SA in manual driving, can potentially also be beneficial for automated driving. Other recent work, for example by Wintersberger et al. <ref type="bibr" target="#b8">[9]</ref>, underlines the potential of AUIs to improve take-over quality. In this position paper, we go a step further and argue that, in particular, the combination of AUIs and generative AI technologies such as Large Language Models (LLMs) and Diffusion Models (e.g., Stable Diffusion <ref type="bibr" target="#b9">[10]</ref> or DALL-E 3 <ref type="bibr" target="#b10">[11]</ref>) can help to subtly bring the driver back into the loop or even subconsciously maintain the required level of SA. When fine-tuned with the rich sensor data available in today's cars, these models can generate a comprehensive picture of the driving scenario and select guidance strategies tailored to the driving situation and the driver's state. Not only can they organically guide the driver back to control when the situation requires immediate intervention, they can also subtly enhance the driver's SA in situations of increasing uncertainty, where it is not entirely clear whether a take-over will be issued. This prepares the driver without appearing overly cautious.</p><p>In the following, we present five strategies that employ generative AI, in particular LLMs and Diffusion Models, to serve as inspiration for future research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>In the following, we will give a brief overview of current research related to TORs in general and the role that AUIs can play to improve TORs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Take-Over Requests in Automated Driving</head><p>In Level 3 automated driving, the automated driving functions can drive the vehicle under limited conditions <ref type="bibr" target="#b0">[1]</ref>. In contrast to manual and assisted driving (L0-L2), the driver is relieved of the constant need to monitor the driving environment. However, the driver is required to be prepared to regain control in emergency situations, such as system failure or when the upcoming driving situation is outside the operational design domain of the system <ref type="bibr" target="#b11">[12]</ref>. In these situations, the automated driving system triggers a TOR, notifying the driver to take over the driving task <ref type="bibr" target="#b0">[1]</ref>. For such transfers of control back to the driver, two scenarios need to be distinguished: "scheduled" TORs in situations in which the system is aware of an upcoming TOR (e.g., due to a highway exit or known road closure) and "imminent" TORs in sudden emergency situations (e.g., a broken-down car blocking the road) <ref type="bibr" target="#b8">[9]</ref>. While the latter is considered to be the most critical problem of Level 3 driving, it is unclear how often emergency TORs are triggered <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref>, and it is assumed that as technology evolves (e.g., sensor range, Vehicle-to-Everything (V2X) communication), their frequency may decrease while the frequency of scheduled TORs will increase. Accordingly, it is important that drivers are able to regain control and appropriate awareness of the driving situation such that they can handle the upcoming driving task safely. Related work shows that the reaction time to TORs is an indicator of safety and TOR quality <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b14">15]</ref>. 
Studies on TOR quality further show that reaction time and driving performance are influenced by the driving context (e.g., road curvature <ref type="bibr" target="#b15">[16]</ref> or traffic <ref type="bibr" target="#b16">[17]</ref>), driver behavior (e.g., engagement in secondary tasks <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b17">18]</ref>), driver state (e.g., fatigue <ref type="bibr" target="#b18">[19]</ref>), and TOR modality (e.g., visual, vibrotactile, or auditory <ref type="bibr" target="#b19">[20]</ref>).</p><p>These findings highlight that, for safe takeovers, a holistic understanding of the current driving situation and the state of the driver is needed to trigger context-dependent TORs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Leveraging Attentive User Interfaces to Improve Take-Over Requests</head><p>Attentive User Interfaces (AUIs) are "computing interfaces that are sensitive to the user's attention" <ref type="bibr" target="#b20">[21]</ref>. These interfaces therefore adapt the type and amount of information displayed based on the attentional state of the user and/or the attentional demands of the environment <ref type="bibr" target="#b7">[8]</ref>. For example, an incoming call that is predicted to be of low urgency may not be put through immediately if the driver's stress level is high and the driving situation is complex, but rather suppressed until the driving situation allows it. Thus, AUIs can not only adjust the timing (e.g., as proposed by Wintersberger et al. <ref type="bibr" target="#b21">[22]</ref>) or the visual representation, but also weigh the costs and benefits of conflicting actions by taking into account the driver's state and the driving situation <ref type="bibr" target="#b22">[23]</ref>.</p><p>DeGuzman et al. <ref type="bibr" target="#b7">[8]</ref> suggest that AUIs, which have been shown to effectively manage SA in manual driving, may also be beneficial in automated driving. The authors identify several strategies for adapting UIs to either optimize attentional demand or redirect the driver's attention to the road. However, they note that little research exists on the effect of AUIs in automated driving. One of the few studies that demonstrate the potential of AUIs for automated driving is presented by Wintersberger et al. <ref type="bibr" target="#b8">[9]</ref>, who argue that AUIs can improve take-over behavior. Their results show that AUIs improve driving performance, reduce the stress induced in drivers, and reduce the variance in response times to scheduled TORs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">How Generative AI can Enhance TOR Quality</head><p>To effectively tailor the interventions to the driving situation and the driver's state, an intelligent TOR agent needs access to the driving automation features, the car sensors (e.g., exterior cameras, radar sensors, and the cabin cameras), and the in-vehicle Human-Machine Interfaces (HMIs) (e.g., infotainment system or head-up display). This information is already available in some modern production cars, as shown in the works by Ebel et al. <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25]</ref>. To personalize interventions, it is also necessary to access personal driver information such as calendar entries. We assume that this information is available by connecting the smartphone to the In-Vehicle Information System (IVIS). Below, we present five ideas on how TOR assistants can benefit from generative AI.</p><p>Figure <ref type="figure">1</ref>: A hypothetical scenario: A person interacting with their mobile phone while driving in a Level 3 automated car. The current driving situation is under control and there is no reason to trigger a take-over request. However, the intelligent TOR assistant has detected a traffic jam ahead that may require the driver to take over. Knowing that the driver is engaged in a task on the smartphone, the TOR assistant decides to play an AI-generated video of the upcoming traffic situation on the center stack touchscreen. The driver will subconsciously recognize the moving scene on the center stack touchscreen and be more aware of the upcoming traffic scenario. The increased situation awareness will lead to an increase in take-over quality.</p><p>Interactive Scenarios Dynamic visual representations of scheduled TORs can improve the usability of TOR assistants <ref type="bibr" target="#b25">[26]</ref>. 
Whereas current research concentrates on relatively simple visualizations that primarily convey the timing or priority of the TOR, we propose to use generative models such as DALL-E 3 <ref type="foot" target="#foot_0">1</ref> to generate dynamic scenarios that represent the upcoming driving situation. These scenarios can be displayed on the center stack screen as shown in Figure <ref type="figure">1</ref> <ref type="foot" target="#foot_1">2</ref>, on the head-up display, or on the dashboard. For example, when approaching a highway exit, an image or video sequence of the exit can be displayed, prompting the driver to make a decision. While these scenarios can be used in combination with a direct prompt, they can also be used to subtly prime the driver for an upcoming TOR by displaying dynamic content on the screen in the periphery of the driver's focus.</p></div>
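The Interactive Scenarios strategy can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assembles a text prompt from hypothetical driving-context fields (the field names, defaults, and prompt template are all our assumptions), and the comments indicate how a diffusion model could then render the scene for the center stack screen.

```python
# Illustrative sketch: turn vehicle context into a text-to-image prompt.
# Field names and the template are assumptions, not part of the paper.

def scene_prompt(context: dict) -> str:
    """Build a prompt describing the upcoming driving scene."""
    parts = [
        f"dashcam view of a {context.get('road_type', 'highway')}",
        context.get("event", "upcoming exit"),
        f"{context.get('weather', 'clear')} weather",
        f"{context.get('time_of_day', 'daytime')}",
    ]
    return ", ".join(parts)

prompt = scene_prompt({
    "road_type": "three-lane highway",
    "event": "traffic jam ahead",
    "weather": "light rain",
    "time_of_day": "dusk",
})

# A diffusion model could then render the scene, e.g. via Hugging Face
# diffusers (hedged: model choice and latency budget are open questions):
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# image = pipe(prompt).images[0]  # shown on the center stack touchscreen
```

In practice, the prompt would be derived from fused sensor and map data rather than a hand-written dictionary, and video generation would replace single images for the animated scenario in Figure 1.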
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conversational Primers</head><p>Research suggests that conversational voice assistants and priming techniques can help to build appropriate SA and improve TOR quality <ref type="bibr" target="#b26">[27,</ref><ref type="bibr" target="#b27">28,</ref><ref type="bibr" target="#b15">16]</ref>. We argue that LLMs can further increase this potential, as the system can engage the driver in natural but brief situation-dependent conversations about the upcoming route or driving scenario. For example, a question such as "Looks like we're getting off the highway in 10 minutes. Have you driven this route before?" not only informs the driver of the upcoming TOR, but also indirectly prompts the driver to look at the road, thereby improving SA. This strategy can also be useful in situations where the system is uncertain whether a TOR will be triggered in the near future, as the driver may not even realize that the goal of the conversation was to redirect their attention to the road. This way, drivers won't be annoyed by false positives because they won't recognize them as such.</p><p>Context-Aware and Personalized TORs LLMs can provide concise, contextual descriptions or advice based on real-time sensor data. This information can be used, for example, to generate situation-based TORs: "We are approaching a construction zone on the right lane with a speed limit of 50 km/h, please take control". While current research suggests that context-aware warnings can lead to safer takeovers <ref type="bibr" target="#b28">[29]</ref>, these approaches can only detect predefined situations and are therefore limited in scope. By combining the output of object detection algorithms with the generative capabilities of LLMs, TORs are no longer limited to such predefined degrees of freedom. Based on data from the cabin camera, TORs can be tailored not only to the driving situation, but also to the driver's state and current activity. 
The intelligent TOR assistant could tell the driver to put away the phone or tablet, arguing that there will be enough time after the construction zone to finish the current activity.</p><p>Subtle Nudges Nudging and persuasion can influence drivers to drive more economically <ref type="bibr" target="#b29">[30]</ref> and more safely <ref type="bibr" target="#b30">[31]</ref>. We argue that generative AI technology can be used to generate effective persuasion strategies for TORs. Based on the driver's past behavior and responses, the generative AI can create tailored priming interventions or use the information gathered from past conversations to persuade the driver to be more aware or take over earlier. For example, the assistant might mention the driver's daughter's soccer game to subtly appeal to the driver's sense of responsibility not to get too distracted.</p><p>Ambient Scene Generation Ambient displays and audio cues are an effective measure to improve TOR quality <ref type="bibr" target="#b31">[32,</ref><ref type="bibr" target="#b15">16]</ref>. While current approaches are more or less explicit, we propose that, based on the current or upcoming driving situation, an intelligent agent can generate situation-specific ambient scenes. For example, it could subtly change the tone of the infotainment system, or generate soft ambient sounds that resemble the road or traffic to subconsciously focus the driver's attention on the driving environment. The same applies to ambient lighting. The assistant could gradually synchronize the car's interior lighting with the outside environment and traffic scene. Dynamic lighting patterns based on passing cars or upcoming situations can be generated and visualized using ambient light technology. A slight change in brightness or hue can alert the driver's senses without the driver being aware of the change.</p></div>
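To make the Conversational Primers strategy concrete, one could compose a chat-style request to an LLM. The sketch below is purely illustrative: the message structure follows the common chat-completion convention, and the system instruction, the lead time, and the trigger description are all our assumptions, not the paper's implementation.

```python
# Illustrative sketch: compose an LLM request for a short, situation-
# dependent primer question. All wording and thresholds are assumptions.

def primer_messages(minutes_to_tor: int, upcoming: str) -> list:
    """Build chat-style messages asking an LLM for a subtle primer."""
    system = (
        "You are an in-vehicle assistant. Ask the driver ONE brief, casual "
        "question about the upcoming route. Do not mention take-overs or "
        "warnings explicitly."
    )
    user = (
        f"A scheduled take-over is expected in about {minutes_to_tor} minutes "
        f"because of: {upcoming}. Generate the question."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

msgs = primer_messages(10, "highway exit onto a rural road")

# A chat-completion call (hypothetical client and model name) would then
# produce the spoken primer, e.g. "Have you driven this route before?":
# reply = client.chat.completions.create(model="gpt-4o", messages=msgs)
```

The safety-critical caveats discussed in Section 5 apply directly here: the generated question must be checked before being voiced, since a hallucinated route detail would undermine rather than build SA.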
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Proposed System Architecture</head><p>Figure <ref type="figure" target="#fig_0">2</ref> shows our proposed system architecture for an Intelligent TOR Assistant that can apply the TOR strategies introduced above. To fully enable these strategies, an intelligent TOR assistant must create a holistic representation of the driving situation and the driver's state based on various types of inputs. We argue that in order to holistically assess the driver's state and understand the driving scene, the intelligent TOR assistant needs to access cabin sensors (e.g., cabin camera or cabin microphone), vehicle sensors (e.g., vehicle speed, steering wheel behavior, or automation status), map information (e.g., current location, future route, or traffic), and V2X data (e.g., position and behavior of surrounding vehicles). This information is used to create a latent representation of the driver's state and the current driving scene, which is then used as input for the TOR generator.</p><p>Other inputs include the driver's digital footprint and interaction behavior. Digital footprint information describes all information available to the assistant about the driver's digital activities. This can include calendar entries or chat logs. Together with current and past interaction behavior (e.g., past conversations with the in-vehicle voice assistant or driving responses to TORs), this information forms the Digital Persona. This digital persona is learned individually for each driver, enabling personalized predictions tailored to the driver's preferences and skills.</p><p>The TOR Generator is the central unit of the intelligent TOR assistant. The TOR generator receives a representation of the current driver state and driving scene and combines this information with the digital persona to trigger context-sensitive, situation-aware, and personalized TORs. 
The TOR generator decides which of the above strategies is most appropriate for the current situation and triggers the Conversation Agent, the Scenario Generator, or both. Based on the information received from the TOR generator, these two modules generate tangible outputs and communicate them to the driver via the appropriate output interfaces: the IVIS displays, ambient lighting, audio system, and tactile interfaces.</p></div>
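The routing step of the TOR Generator described above can be illustrated with a small decision sketch. The thresholds, state fields, and strategy labels below are illustrative assumptions; a real system would learn such a policy from driver-state and scene estimates rather than hard-code it.

```python
# Illustrative sketch of the TOR Generator's strategy routing (Section 4).
# Thresholds and field names are assumptions, not the paper's design.
from dataclasses import dataclass

@dataclass
class Assessment:
    tor_probability: float   # likelihood a take-over will be needed soon
    time_budget_s: float     # estimated seconds until control is required
    driver_engaged: bool     # engaged in a non-driving task (cabin camera)

def route(a: Assessment) -> str:
    """Pick one of the five strategies from the fused assessment."""
    if a.time_budget_s < 10:
        return "context_aware_tor"       # explicit, situation-based request
    if a.tor_probability > 0.8 and a.driver_engaged:
        return "interactive_scenario"    # visual scene on the center stack
    if a.tor_probability > 0.8:
        return "conversational_primer"   # brief route-related question
    if a.tor_probability > 0.5:
        return "subtle_nudge"            # personalized persuasion
    return "ambient_scene"               # low-key ambient light/sound

strategy = route(Assessment(0.9, 120.0, driver_engaged=True))
```

In the Figure 1 scenario (traffic jam ahead, driver on the phone, ample time budget), this sketch would select the interactive scenario, matching the behavior described in the caption.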
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussion and Conclusion</head><p>We argue that a key advantage of using generative AI for scheduled TORs is subtlety and persuasion. The interactions should be smooth, non-intrusive, and feel natural so that the driver's SA is maintained without the driver actively realizing that they're being assisted. The goal is not to make the driver dependent on the Intelligent TOR Assistant, but to use the new opportunities that generative AI methods provide to enhance the collaboration between the driver and the automated driving system. While subtle cues can help drivers to maintain an appropriate level of SA, LLMs can also be used to generate eloquent and meaningful prompts that persuade the driver to be more attentive. Incorporating personal and situational information could not only improve in-situ TOR quality, but also change driver behavior in the long run.</p><p>For all of the strategies presented in this position paper, it is important to emphasize that TORs are safety-critical. Choosing an inappropriate modality or providing false or inaccurate information can have fatal consequences. This needs to be considered in future work, especially in light of current vulnerabilities of generative models such as hallucination, bias, and lack of explainability. In addition, the question of how to ensure that approaches using generative AI methods comply with regulations needs to be answered. Due to their non-deterministic nature, they can't be evaluated against standardized datasets to assess whether they are "good enough" to be used for safety-critical applications <ref type="foot" target="#foot_2">3</ref>.</p><p>While some of the above strategies may seem dystopian at the time of this writing, a digital assistant that is intimately aware of user preferences and behaviors and can carry on a conversation as naturally as a human counterpart may be technically possible and socially acceptable in just a few years. 
However, research suggests that conversational agents that seem too human don't necessarily drive adoption. In fact, they may deter people from using the technology <ref type="bibr" target="#b32">[33]</ref>. Thus, implementing strategies such as the Subtle Nudges strategy is a challenging endeavor and more research is needed to enable systems such as the one presented in this position paper.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: System Architecture</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://openai.com/dall-e-3</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Some elements were generated using Adobe Illustrator's "Text to Vector Graphic" feature: https://www.adobe.com/ products/illustrator/text-to-vector-graphic.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">Not to say that the question of what is "good enough" when it comes to automated driving has been answered yet.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m">SAEJ3016: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles</title>
				<meeting><address><addrLine>Warrendale</addrLine></address></meeting>
		<imprint>
			<publisher>SAE)</publisher>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">Mercedes-Benz</forename></persName>
		</author>
		<ptr target="https://group.mercedes-benz.com/innovation/product-innovation/autonomous-driving/system-approval-for-conditionally-automated-driving.html" />
		<title level="m">Conditionally automated driving: First internationally valid system approval</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Toward Computational Simulations of Behavior During Automated Driving Takeovers: A Review of the Empirical and Modeling Literatures</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Mcdonald</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Alambeigi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Engström</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Markkula</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Vogelpohl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dunne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Yuma</surname></persName>
		</author>
		<idno type="DOI">10.1177/0018720819829572</idno>
	</analytic>
	<monogr>
		<title level="j">Human Factors: The Journal of the Human Factors and Ergonomics Society</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="page" from="642" to="688" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Transitioning to manual driving requires additional time after automation deactivation</title>
		<author>
			<persName><forename type="first">T</forename><surname>Vogelpohl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kühn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hummel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gehlert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vollrath</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.trf.2018.03.019</idno>
	</analytic>
	<monogr>
		<title level="j">Transportation Research Part F: Traffic Psychology and Behaviour</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="464" to="482" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Impact of the driver&apos;s visual engagement on situation awareness and takeover quality</title>
		<author>
			<persName><forename type="first">P</forename><surname>Marti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Jallais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Koustanaï</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Guillaume</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Mars</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.trf.2022.04.018</idno>
	</analytic>
	<monogr>
		<title level="j">Transportation Research Part F: Traffic Psychology and Behaviour</title>
		<imprint>
			<biblScope unit="volume">87</biblScope>
			<biblScope unit="page" from="391" to="402" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Take over Gradually in Conditional Automated Driving: The Effect of Two-stage Warning Systems on Situation Awareness, Driving Stress, Takeover Performance, and Acceptance</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.1080/10447318.2020.1860514</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="352" to="362" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Evaluating the impacts of situational awareness and mental stress on takeover performance under conditional automation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Agrawal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Peeta</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.trf.2021.10.002</idno>
	</analytic>
	<monogr>
		<title level="j">Transportation Research Part F: Traffic Psychology and Behaviour</title>
		<imprint>
			<biblScope unit="volume">83</biblScope>
			<biblScope unit="page" from="210" to="225" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Attentive User Interfaces: Adaptive Interfaces that Monitor and Manage Driver Attention</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">A</forename><surname>Deguzman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kanaan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Donmez</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-77726-5_12</idno>
	</analytic>
	<monogr>
		<title level="m">User Experience Design in the Era of Automated Driving</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Riener</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Jeon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Alvarez</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">980</biblScope>
			<biblScope unit="page" from="305" to="334" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Attentive User Interfaces to Improve Multitasking and Take-Over Performance in Automated Driving: The Auto-Net of Things</title>
		<author>
			<persName><forename type="first">P</forename><surname>Wintersberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Schartmüller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Riener</surname></persName>
		</author>
		<idno type="DOI">10.4018/IJMHCI.2019070103</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Mobile Human Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="40" to="58" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">High-Resolution Image Synthesis with Latent Diffusion Models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Rombach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Blattmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lorenz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Esser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ommer</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPR52688.2022.01042</idno>
	</analytic>
	<monogr>
		<title level="m">2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</title>
				<meeting><address><addrLine>New Orleans, LA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="10674" to="10685" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Betker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Goh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Brooks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ouyang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhuang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Manassra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<title level="m">Improving image generation with better captions</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Automated Driving: A Literature Review of the Take over Request in Conditional Automation</title>
		<author>
			<persName><forename type="first">W</forename><surname>Morales-Alvarez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sipele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Léberon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">H</forename><surname>Tadjine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Olaverri-Monreal</surname></persName>
		</author>
		<idno type="DOI">10.3390/electronics9122087</idno>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page">2087</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Am I Driving or Are You or Are We Both? A Taxonomy for Handover and Handback in Automated Driving</title>
		<author>
			<persName><forename type="first">P</forename><surname>Wintersberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Green</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Riener</surname></persName>
		</author>
		<idno type="DOI">10.17077/drivingassessment.1655</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 9th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design: Driving Assessment 2017</title>
				<meeting>the 9th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design: Driving Assessment 2017<address><addrLine>Manchester Village, Vermont, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="333" to="339" />
		</imprint>
		<respStmt>
			<orgName>University of Iowa</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Takeover Time in Highly Automated Vehicles: Noncritical Transitions to and From Manual Control</title>
		<author>
			<persName><forename type="first">A</forename><surname>Eriksson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Stanton</surname></persName>
		</author>
		<idno type="DOI">10.1177/0018720816685832</idno>
	</analytic>
	<monogr>
		<title level="j">Human Factors: The Journal of the Human Factors and Ergonomics Society</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="689" to="705" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A taxonomy of autonomous vehicle handover situations</title>
		<author>
			<persName><forename type="first">R</forename><surname>McCall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>McGee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mirnig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Meschtscherjakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Louveton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Engel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tscheligi</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.tra.2018.05.005</idno>
	</analytic>
	<monogr>
		<title level="j">Transportation Research Part A: Policy and Practice</title>
		<imprint>
			<biblScope unit="volume">124</biblScope>
			<biblScope unit="page" from="507" to="522" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">From reading to driving: Priming mobile users for take-over situations in highly automated driving</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sadeghian Borojeni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Heuten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Boll</surname></persName>
		</author>
		<idno type="DOI">10.1145/3229434.3229464</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services</title>
				<meeting>the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services<address><addrLine>Barcelona Spain</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">How Traffic Situations and Non-Driving Related Tasks Affect the Take-Over Quality in Highly Automated Driving</title>
		<author>
			<persName><forename type="first">J</forename><surname>Radlmayr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Lorenz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Farid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bengler</surname></persName>
		</author>
		<idno type="DOI">10.1177/1541931214581434</idno>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the Human Factors and Ergonomics Society Annual Meeting</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="2063" to="2067" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">&quot;Take over!&quot; How long does it take to get the driver back into the loop?</title>
		<author>
			<persName><forename type="first">C</forename><surname>Gold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Damböck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Lorenz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bengler</surname></persName>
		</author>
		<idno type="DOI">10.1177/1541931213571433</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Human Factors and Ergonomics Society Annual Meeting</title>
				<meeting>the Human Factors and Ergonomics Society Annual Meeting</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="1938" to="1942" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">The Effect of Fatigue on Take-over Performance in Urgent Situations in Conditionally Automated Driving</title>
		<author>
			<persName><forename type="first">A</forename><surname>Feldhütter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ruhl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Feierle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bengler</surname></persName>
		</author>
		<idno type="DOI">10.1109/ITSC.2019.8917183</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE Intelligent Transportation Systems Conference (ITSC)</title>
				<meeting><address><addrLine>Auckland, New Zealand</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1889" to="1894" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">The effects of takeover request modalities on highly automated car control transitions</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Yoon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">W</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">G</forename><surname>Ji</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.aap.2018.11.018</idno>
	</analytic>
	<monogr>
		<title level="j">Accident Analysis &amp; Prevention</title>
		<imprint>
			<biblScope unit="volume">123</biblScope>
			<biblScope unit="page" from="150" to="158" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Attentive User Interfaces</title>
		<author>
			<persName><forename type="first">R</forename><surname>Vertegaal</surname></persName>
		</author>
		<idno type="DOI">10.1145/636772.636794</idno>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">46</biblScope>
			<biblScope unit="page" from="30" to="33" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Let Me Finish before I Take Over: Towards Attention Aware Device Integration in Highly Automated Vehicles</title>
		<author>
			<persName><forename type="first">P</forename><surname>Wintersberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Riener</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Schartmüller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-K</forename><surname>Frison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Weigl</surname></persName>
		</author>
		<idno type="DOI">10.1145/3239060.3239085</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications</title>
				<meeting>the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications<address><addrLine>Toronto ON Canada</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="53" to="65" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Affective Automotive User Interfaces-Reviewing the State of Driver Affect Research and Emotion Regulation in the Car</title>
		<author>
			<persName><forename type="first">M</forename><surname>Braun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Alt</surname></persName>
		</author>
		<idno type="DOI">10.1145/3460938</idno>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="page" from="1" to="26" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">On the forces of driver distraction: Explainable predictions for the visual demand of in-vehicle touchscreen interactions</title>
		<author>
			<persName><forename type="first">P</forename><surname>Ebel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lingenfelder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vogelsang</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.aap.2023.106956</idno>
	</analytic>
	<monogr>
		<title level="j">Accident Analysis &amp; Prevention</title>
		<imprint>
			<biblScope unit="volume">183</biblScope>
			<biblScope unit="page">106956</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Exploring Millions of User Interactions with ICEBOAT: Big Data Analytics for Automotive User Interfaces</title>
		<author>
			<persName><forename type="first">P</forename><surname>Ebel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">J</forename><surname>Gülle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lingenfelder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vogelsang</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2307.06089</idno>
	</analytic>
	<monogr>
		<title level="m">AutomotiveUI &apos;23: 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications</title>
				<meeting><address><addrLine>Ingolstadt, Germany</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Preparing Drivers for Planned Control Transitions in Automated Cars</title>
		<author>
			<persName><forename type="first">K</forename><surname>Holländer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Pfleging</surname></persName>
		</author>
		<idno type="DOI">10.1145/3282894.3282928</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia</title>
				<meeting>the 17th International Conference on Mobile and Ubiquitous Multimedia<address><addrLine>Cairo Egypt</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="83" to="92" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Exploring the benefits of conversing with a digital voice assistant during automated driving: A parametric duration model of takeover time</title>
		<author>
			<persName><forename type="first">K</forename><surname>Mahajan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Large</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Burnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Velaga</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.trf.2021.03.012</idno>
	</analytic>
	<monogr>
		<title level="j">Transportation Research Part F: Traffic Psychology and Behaviour</title>
		<imprint>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="104" to="126" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Unlocking Safer Driving: How Answering Questions Help Takeovers in Partially Automated Driving</title>
		<author>
			<persName><forename type="first">X</forename><surname>Bai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Feng</surname></persName>
		</author>
		<idno type="DOI">10.1177/21695067231192202</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Human Factors and Ergonomics Society Annual Meeting</title>
				<meeting>the Human Factors and Ergonomics Society Annual Meeting</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Enjoy the Ride Consciously with CAWA: Context-Aware Advisory Warnings for Automated Driving</title>
		<author>
			<persName><forename type="first">E</forename><surname>Pakdamanian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kraus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Heo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Feng</surname></persName>
		</author>
		<idno type="DOI">10.1145/3543174.3546835</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications</title>
				<meeting>the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications<address><addrLine>Seoul Republic of Korea</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="75" to="85" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Acceptance of future persuasive in-car interfaces towards a more economic driving behaviour</title>
		<author>
			<persName><forename type="first">A</forename><surname>Meschtscherjakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wilfinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Scherndl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tscheligi</surname></persName>
		</author>
		<idno type="DOI">10.1145/1620509.1620526</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications</title>
				<meeting>the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications<address><addrLine>Essen Germany</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="81" to="88" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Nudging Drivers to Safety: Evidence from a Field Experiment</title>
		<author>
			<persName><forename type="first">V</forename><surname>Choudhary</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shunko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Netessine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Koo</surname></persName>
		</author>
		<idno type="DOI">10.1287/mnsc.2021.4063</idno>
	</analytic>
	<monogr>
		<title level="j">Management Science</title>
		<imprint>
			<biblScope unit="volume">68</biblScope>
			<biblScope unit="page" from="4196" to="4214" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Assisting Drivers with Ambient Take-Over Requests in Highly Automated Driving</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sadeghian Borojeni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chuang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Heuten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Boll</surname></persName>
		</author>
		<idno type="DOI">10.1145/3003715.3005409</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications</title>
				<meeting>the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications<address><addrLine>Ann Arbor MI USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="237" to="244" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Understanding consumers&apos; acceptance of automated technologies in service encounters: Drivers of digital voice assistants adoption</title>
		<author>
			<persName><forename type="first">T</forename><surname>Fernandes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Oliveira</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jbusres.2020.08.058</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Business Research</title>
		<imprint>
			<biblScope unit="volume">122</biblScope>
			<biblScope unit="page" from="180" to="191" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
