<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>You, Me, and the AV: Designing Interactions between Remote Operators of Autonomous Vehicles and Road Users</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Felix Tener</string-name>
          <email>felix.tener@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joel Lanir</string-name>
          <email>ylanir@is.haifa.ac.il</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Avishag Boker</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Haifa</institution>
          ,
          <addr-line>Ha-Namal 67, Haifa, 3303221</addr-line>
          ,
          <country country="IL">Israel</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>5</volume>
      <issue>5</issue>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>The advent of autonomous vehicles (AVs) presents both opportunities and challenges. While AVs promise enhanced traffic safety and efficiency by mitigating human error, they also encounter complex, unpredictable scenarios - such as unusual road conditions, unexpected obstacles, or sensor failures - that challenge their autonomous decision-making. In addition, regulatory frameworks necessitate human oversight in specific situations. Teleoperation, which allows remote human operators to assist or intervene when needed, has emerged as a critical safety and regulatory mechanism. Despite these advantages, AVs introduce a critical communication gap with pedestrians and other road users (RUs). Without clear signaling, pedestrians may struggle to interpret an AV’s intent, raising concerns about safety and trust. Research on external Human-Machine Interfaces (eHMIs) tackles this issue by enabling AVs to convey their status and intentions through modalities such as lights, displays, or projections, facilitating safer interactions with their surroundings. The current research aims to investigate how eHMI solutions can be adapted for and integrated with teleoperation, enabling remote operators (ROs) to communicate effectively with pedestrians and other RUs. By bridging this gap, our work aims to enhance trust, safety, and the overall efficacy of AV interactions in real-world environments.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-centered computing</kwd>
        <kwd>Human-computer interaction</kwd>
        <kwd>Interaction design</kwd>
        <kwd>Automobile</kwd>
        <kwd>Teleoperation</kwd>
        <kwd>External human-machine interfaces (eHMIs)</kwd>
        <kwd>Research through design</kwd>
        <kwd>User-centered design</kwd>
        <kwd>Human-AI collaboration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Recent strides in technological progress within domains such as computer vision, sensor fusion, and artificial intelligence have facilitated the rapid advancement of AVs as an innovative and transformative mode of transportation. Prominent automotive manufacturers, alongside emerging startups, are actively developing diverse cutting-edge technologies to realize AVs’ capability to operate independently. Nevertheless, the current state of AV development reveals their inability to handle every conceivable road scenario autonomously. Instances such as road construction, malfunctioning traffic lights, or a busy intersection might prevent an AV from moving autonomously.</p>
      <p>Several teleoperation systems for AVs are presently operational and are undergoing further research and refinement by diverse automotive enterprises (e.g., DriveU).</p>
      <p>For autonomous vehicles to be accepted by the public and integrated into urban traffic, it is
critical that AVs can communicate and show their status and intent to pedestrians and other RUs.
Surveys of the public’s perception of AVs revealed concerns about safety, liability, and interaction
with pedestrians and other RUs [12]. Besides controlling the vehicle’s movements, driving is a social
act that requires communication and understanding between all RUs to ensure traffic flow and
guarantee safety [23]. Social interaction plays a vital role in resolving traffic ambiguities. For
example, if a driver needs to enter a busy junction, she might wait for another driver’s signal before
entering. A small gesture or eye gaze often communicates this signal. Another example is a pedestrian first making eye contact with the driver before crossing the road to ensure safe passage [11].</p>
      <p>The current paper introduces and outlines the scope for a novel field of research: ways in which
an AV controlled by a remote operator can communicate and interact with other RUs, such as
pedestrians, cyclists, compound guards, law enforcement representatives, and drivers of manually
operated vehicles (see Figure 1). Communicating both an AV’s awareness (i.e., what it detects and identifies) and its intent (i.e., what actions it is about to take) to other RUs is known to be critical [20]. Existing works have explored various
mechanisms for AVs to communicate awareness and intent. These eHMIs include displaying text
messages on external displays, using LED lights to convey messages, using laser projections on the
street, using personalized messages for smart and wearable devices, and more [4]. We propose
extending these works and exploring how a teleoperator, monitoring or controlling a remote vehicle,
can use different eHMI interfaces to communicate with other RUs.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Theoretical Background and Previous Research</title>
      <sec id="sec-2-1">
        <title>2.1. Teleoperation of autonomous vehicles</title>
        <p>Autonomous vehicle teleoperation is a relatively new field of research. Although there might be long
periods of self-driving in highly automated vehicles, it is widely acknowledged today that AVs
cannot handle all road situations [22,27]. The underlying assumption is that there are many
exceptional situations that a vehicle might encounter when driving. These situations may occur, for
example, due to perception problems (e.g., bad weather), because the car encounters an unknown situation (e.g., an animal blocking the road), because of rules or regulations (e.g., the need to cross a continuous separation line), or because the AV cannot unambiguously determine the situation. Furthermore, at automation level 4, the vehicle is still limited in some respects, and regulations might require humans to perform
certain actions. Despite the advancements in AV sensors and AI algorithms, humans still have
higher-level interpretation skills for complex or novel situations. Thus, for a vehicle to operate in
automation level 4 or 5, where no human is behind the steering wheel, an RO must be available to
interpret edge case scenarios and remotely intervene when a problem occurs.</p>
        <p>Teleoperation of AVs involves an RO who can oversee and govern the vehicle’s actions from a
distance. The RO can be located in a remote operation center and may assist many AVs during a
single teleoperation shift. Various companies develop teleoperation systems for AVs today [10,31].
Companies such as Ottopia (https://ottopia.tech/), Phantom.auto (https://phantom.auto/), and DriveU (https://driveu.auto/product/driveu-300/) are all developing teleoperation solutions
that AV manufacturers can use to help get AVs on the road. Most of these stations focus on
supporting remote driving using a teleoperation station that includes a steering wheel, pedals, and
screens to show real-time video from several cameras, having the teleoperator remotely drive the
vehicle. However, studies have shown that it is very challenging to drive vehicles remotely [26].
Issues such as latency, lack of physical sensing, impaired situational awareness, and cognitive load
make remote driving challenging. Tele-assistance, or remote assistance as it is sometimes called, may
alleviate some of these problems. In tele-assistance, the remote operator provides high-level
guidance, entrusting the execution of low-level maneuvers to the AV. The operator still receives the
video feed and information from the AV. However, remote vehicle control is done through interface
commands rather than direct driving [2,8].</p>
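        <p>The distinction between remote driving and tele-assistance can be illustrated with a minimal sketch (all class and command names below are hypothetical illustrations, not drawn from any cited system): instead of transmitting continuous steering and pedal input, the RO issues discrete high-level commands, and the AV itself executes the low-level maneuvers.</p>

```python
from dataclasses import dataclass
from enum import Enum, auto


class CommandType(Enum):
    """High-level commands an RO might issue instead of driving directly."""
    PROCEED = auto()    # confirm it is safe to continue
    REROUTE = auto()    # approve an alternative trajectory
    PULL_OVER = auto()  # stop at the nearest safe spot
    WAIT = auto()       # hold position until further notice


@dataclass
class TeleassistCommand:
    command: CommandType
    rationale: str      # logged for accountability


class VehicleAgent:
    """The AV executes low-level maneuvers itself; the RO only approves plans."""

    def __init__(self):
        self.log = []

    def execute(self, cmd: TeleassistCommand) -> str:
        # The AV, not the operator, handles steering, braking, and path tracking.
        self.log.append(cmd)
        return f"executing {cmd.command.name} (reason: {cmd.rationale})"
```

        <p>Under this division of labor, latency affects only when a plan is approved, not how it is tracked, which is one reason tele-assistance is considered more robust than remote driving.</p>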
        <p>Several works investigated the requirements for AV teleoperation [9]. The following works have
started looking at the design of such interfaces, designing prototypes and specific techniques for
tele-assistance [13,28,29]. Among the issues raised, communication with pedestrians, drivers, and
other RUs was highlighted as a major challenge for teleoperation [27,29].</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. External Human-Machine Interfaces</title>
        <p>Much effort has been devoted to having AVs identify other RUs, such as pedestrians and cyclists, so
vehicles can be aware of their presence, make decisions, and act accordingly [3]. However, it is also
critical for pedestrians and other RUs to understand and anticipate the AV’s behavior.
Communication between drivers and pedestrians can help prevent accidents, reduce ambiguities,
and increase trust [17]. Today, human drivers use various communication mechanisms such as hand
gestures, eye contact, or vehicle aids such as honking or high beam lighting to convey their
awareness and intent. However, as the role of the driver changes in an AV and does not necessarily
require constant attention to the road and the surroundings, the driver may be engaged in
nondriving tasks or might not be there at all, and therefore, a human might not be available to interact
with other RUs [30].</p>
        <p>To enhance communication between AVs and pedestrians or other RUs, various external human-machine interfaces have been proposed for highly autonomous vehicles [24,25]. These devices are designed in multiple forms (displays on the vehicle showing texts or icons, LEDs, light bands, speakers, etc.). They can display various types of information, such as the vehicle’s driving mode, the vehicle’s intent, awareness of the pedestrians, and more [1,11]. Roughly speaking, eHMI devices fall into four categories: (1) interfaces that reside on the vehicle; (2) interfaces that reside on the vehicle and road infrastructure; (3) interfaces that reside on the vehicle and the pedestrian (e.g., by using the mobile phone of the pedestrian to convey messages); and (4) interfaces that reside in conjunction with the vehicle, street infrastructure, and the pedestrian. eHMIs were shown to positively affect perceived safety and trust and to positively impact indicators such as pedestrian decision-making, gap acceptance, and crossing timing [14,15]. However, these effects were usually examined in simple scenarios, for example, a pedestrian trying to cross the road while an AV slowly approaches, signaling whether it is about to stop.</p>
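        <p>The four-category classification above can be summarized as combinations of where interface elements physically reside. The following data model is an illustrative sketch of ours, not a structure from [4]:</p>

```python
from dataclasses import dataclass
from enum import Flag, auto


class Location(Flag):
    """Where an eHMI's interface elements physically reside."""
    VEHICLE = auto()
    INFRASTRUCTURE = auto()
    PEDESTRIAN = auto()  # e.g., the pedestrian's own phone or wearable


@dataclass(frozen=True)
class EHMICategory:
    number: int
    resides_on: Location


# The four categories described above, expressed as location combinations.
CATEGORIES = [
    EHMICategory(1, Location.VEHICLE),
    EHMICategory(2, Location.VEHICLE | Location.INFRASTRUCTURE),
    EHMICategory(3, Location.VEHICLE | Location.PEDESTRIAN),
    EHMICategory(4, Location.VEHICLE | Location.INFRASTRUCTURE | Location.PEDESTRIAN),
]
```

        <p>Every category includes the vehicle itself; the categories differ in which additional actors and infrastructure carry part of the interface.</p>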
        <p>Given the variety of eHMIs, it remains unclear how a remote teleoperator can use them to
communicate with pedestrians. A remote operator may have a broader view of the situation and may
be able to convey more information to various RUs. For example, a remote operator who takes over
an AV because of road construction may want to communicate to the road workers that he noticed
the road markings and the workers. Another example is an RO who might want to communicate
with a police officer who controls the traffic at an intersection with a malfunctioning traffic light,
acknowledging that a human controls the AV. The fact that a remote human controls the AV changes
the Human-Machine Interaction in an eHMI to a Human-to-Human Interaction mediated through a
machine, as two humans communicate on both ends. This can allow a richer communication space than the simple and specific messages usually conveyed in eHMIs. We aim to explore this space, investigating the best ways in which technology can support this type of interaction.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Research Space Definition</title>
      <p>The fact that a human-to-human interaction is mediated through a machine (AV) raises a multitude of research questions. Most of these questions can be divided into three primary, interconnected realms.</p>
      <sec id="sec-3-1">
        <title>3.1. Defining the role of the machine in the interaction</title>
        <p>We believe the AV can act as an enabler, an augmenter, or a mediator. When acting as an enabler,
the AV enables the interaction between the RO and the RUs, similar to a telephone, which provides
the necessary infrastructure to help two distant humans talk with one another. One example of such
interaction can be video-based communication between an RO and a pedestrian, which is enabled
through a video feed of the RO that may be displayed on top of the AV’s windshield. Such an
interaction resembles a video call via Skype or Zoom video conferencing tools, with the main
difference being that the screen is placed on an AV.</p>
        <p>When acting as an augmenter, the machine adds to the RO’s capabilities without necessarily separating the RO from the RU. For instance, if the AV’s road is blocked and the RO wants to
bypass the obstacle, she might communicate her intention via an audio channel (e.g., say, “Bypassing
from Left”) and then plot an alternative route to bypass an obstacle. This route may be projected on
the roadway to communicate the AV’s intentions to pedestrians and other human drivers. In this
example, the AV uses a computation and projection system that can translate a two-dimensional
route, plotted using the RO’s graphic user interface, into a light-based trajectory in the physical
proximity of the AV. By doing so, the machine augments the human’s communication capabilities but does not necessarily separate the two humans.</p>
        <p>In contrast, when acting as a mediator, the machine mediates all communication, and there
is no direct video or audio communication between the ROs and the RUs. For instance, the RO might
type a message she wants to deliver to the pedestrians who are located around the vehicle, and the
AV can display the message on the AV’s body or designated screens. Another example is an RO’s
voice message that can be translated into a different dialect or language compatible with pedestrian
needs.</p>
        <p>The roles mentioned above may be predefined but can also be adaptive to the use case and the
actors involved. For instance, the RO might start by typing a message visible on the car’s body,
making the AV a mediator. Nevertheless, if pedestrians do not correctly understand the message, the
RO can choose to activate a direct communication channel, making the AV an enabler for more
effective communication.</p>
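        <p>The adaptive role switching described above can be sketched as a simple state transition. This is an illustrative sketch under our own naming, not a specified protocol: the AV starts as a mediator (typed message on its body) and escalates to an enabler (direct channel) when the message is not understood.</p>

```python
from enum import Enum, auto


class MachineRole(Enum):
    ENABLER = auto()    # direct RO-RU channel, e.g., live video or audio
    AUGMENTER = auto()  # machine extends the RO's message, e.g., projected route
    MEDIATOR = auto()   # machine carries the whole message, e.g., text on display


def next_role(current: MachineRole, message_understood: bool) -> MachineRole:
    """Escalate from a mediated text message to a direct communication
    channel when pedestrians do not understand the message."""
    if current is MachineRole.MEDIATOR and not message_understood:
        return MachineRole.ENABLER
    return current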
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Designing the interaction method and user interface</title>
        <p>The intervention scenario and the involved actors will dictate the interaction method and require
specific affordances and communication channels from the system. For example, suppose the AV
blocks someone’s vehicle, and the blocked vehicle owner wants to talk with the RO to resolve the
issue. In this case, there should be a way for the vehicle owner to initiate a conversation with an RO
and a means for the latter to respond. This can be implemented, for example, with a button on the
AV’s body. Following the button press, an audio-based channel with an RO in the teleoperation
station should be supported.</p>
        <p>Another example is an AV’s malfunction. Let’s assume that the RO wants to notify all the
surrounding road users about the problem and its estimated resolution time. One possible way is for
the RO to type an appropriate message, which the AV can then project on a designated screen.
Finally, if the RO wants to plot a route and project it on the road ahead of the AV, it should have a
specialized projection system to support this.</p>
        <p>While there are many possible intervention use cases [27] and a variety of creative ways to resolve them, it is evident that when designing teleoperation-eHMI solutions, one will have to consider (1) the machine’s role, (2) the interaction modalities (e.g., a video channel), (3) the teleoperation station affordances, (4) the AV’s interface with the surrounding world (e.g., projection lights), and possibly (5) the limitations (age, disabilities, etc.) and capabilities of the various road users that might come in contact with the AV.</p>
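        <p>The five considerations above can be read as the dimensions of a design space. The sketch below captures one point in that space for the blocked-road bypass scenario of Section 3.1; all field names are our own illustrative labels, not an established schema.</p>

```python
from dataclasses import dataclass, field


@dataclass
class TeleopEHMIDesign:
    """One point in the teleoperation-eHMI design space, mirroring
    considerations (1)-(5). Field names are illustrative."""
    machine_role: str                                        # (1) enabler, augmenter, or mediator
    modalities: list = field(default_factory=list)           # (2) e.g., video, audio, text
    station_affordances: list = field(default_factory=list)  # (3) RO-side controls
    av_interfaces: list = field(default_factory=list)        # (4) e.g., projectors, speakers
    ru_constraints: list = field(default_factory=list)       # (5) age, disabilities, etc.


# Example: the RO announces a bypass by voice and projects the new route.
bypass_design = TeleopEHMIDesign(
    machine_role="augmenter",
    modalities=["audio", "road projection"],
    station_affordances=["route-plotting GUI"],
    av_interfaces=["speaker", "roadway projector"],
)
```

        <p>Enumerating candidate designs this way makes it easier to compare alternatives systematically before committing to prototypes.</p>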
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Evaluating the influence of the interaction on the RO and the RUs</title>
        <p>The interaction and interface design of the communication methods between the RO and the RUs
can potentially affect RU’s road behavior and ROs performance. Thus, it would be necessary to
evaluate any designed solution thoroughly. First, exploring how design decisions influence trust,
safety, and social dynamics is essential. For instance, do RUs trust AV technology more if direct video
communication between the RO and RUs is enabled compared to if it is presented textually? What is
the safest way to transmit a message from the RO to the RUs in an urgent situation? Will the RU’s
ability to talk with an RO through the machine change behavioral norms and social interactions? In
other words, will it be normal for people to speak with AVs on the street?</p>
        <p>Another interesting research angle is exploring the effect of the interactions mentioned above
on the RO. How do design decisions influence RO’s cognitive load, situational awareness, and
efficiency? Is it more efficient to talk with a guard who should open a compound gate than to type a
message that will be projected on the AV-mounted screen? How do these types of interactions affect
the RO's required skill set?</p>
        <p>Finally, the interaction might influence and be influenced by ethical and regulatory
considerations. It might not be clear who is legally and ethically responsible for misinterpretations
or accidents arising from machine-mediated human interactions. Should there be global standards
for AV-mediated communication, or should it be adaptable to different cultural and urban
environments? How do privacy concerns impact live teleoperator communication (e.g., voice, video,
personalized messages) in AVs? These and many other aspects of the interaction, along with the
machine’s role in it, may be explored in future research.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>AVs are poised to revolutionize urban mobility by enhancing safety, reducing traffic congestion, and
promoting sustainable transportation. However, to gain public trust and achieve seamless
integration into urban environments, AVs must communicate effectively what they perceive and
their intentions to RUs. eHMI is crucial for bridging this communication gap, ensuring AVs can
seamlessly interact with pedestrians, cyclists, and drivers. While much AV research has focused on
technical challenges such as perception, localization, and control, less emphasis has been placed on
AV-human interaction, particularly in teleoperation. By integrating eHMIs into teleoperation
systems, ROs can convey information and intent to pedestrians, cyclists, law enforcement, and other
RUs. This approach transforms AV communication from a machine-to-human paradigm into human-to-human interaction mediated through a machine, fostering greater transparency, trust, and social
acceptance of AV technology.</p>
      <p>By bridging the gap between teleoperation, eHMI, and AV-RU interactions, we aspire to advance intelligent transportation systems. As AVs transition from experimental
technology to widespread adoption, ensuring seamless human oversight and effective
communication will be crucial in fostering public confidence and achieving safer, more efficient
autonomous mobility.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This research was funded by the Israeli Ministry of Innovation, Science, and Technology through
project number 0008064.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used ChatGPT-4-turbo and Grammarly to check grammar and spelling. After using these tools/services, the author(s) reviewed and edited the content as needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Claudia Ackermann, Matthias Beggiato, Sarah Schubert, and Josef F. Krems. 2019. An experimental study to investigate design and assessment criteria: What is important for communication between pedestrians and automated vehicles? Applied Ergonomics 75: 272-282. https://doi.org/10.1016/j.apergo.2018.11.002</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Daniel Bogdoll, Stefan Orf, Lars Töttel, and J. Marius Zöllner. 2021. Taxonomy and Survey on Remote Human Input Systems for Driving Automation Systems. Retrieved from http://arxiv.org/abs/2109.08599</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Long Chen, Shaobo Lin, Xiankai Lu, Dongpu Cao, Hangbin Wu, Chi Guo, Chun Liu, and Fei Yue Wang. 2021. Deep Neural Network Based Vehicle and Pedestrian Detection for Autonomous Driving: A Survey. IEEE Transactions on Intelligent Transportation Systems 22, 6: 3234-3246. https://doi.org/10.1109/TITS.2020.2993926</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Debargha Dey, Azra Habibovic, Andreas Löcken, Philipp Wintersberger, Bastian Pfleging, Andreas Riener, Marieke Martens, and Jacques Terken. 2020. Taming the eHMI jungle: A classification taxonomy to guide, compare, and assess the design principles of automated vehicles' external human-machine interfaces. Transportation Research Interdisciplinary Perspectives 7. https://doi.org/10.1016/j.trip.2020.100174</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Vinayak V. Dixit, Sai Chand, and Divya J. Nair. 2016. Autonomous vehicles: Disengagements, accidents and reaction times. PLoS ONE 11, 12. https://doi.org/10.1371/journal.pone.0168054</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Francesca Favarò, Sky Eurich, and Nazanin Nader. 2018. Autonomous vehicles' disengagements: Trends, triggers, and regulatory limitations. Accident Analysis and Prevention 110: 136-148. https://doi.org/10.1016/j.aap.2017.11.001</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Jiacheng Feng, Shengbo Yu, Guihua Chen, Weijie Gong, Qiao Li, Juejian Wang, and Haiting Zhan. 2020. Disengagement causes analysis of automated driving system. Proceedings - 2020 3rd World Conference on Mechanical Engineering and Intelligent Manufacturing, WCMEIM 2020: 36-39. https://doi.org/10.1109/WCMEIM52463.2020.00014</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Frank Ole Flemisch, Klaus Bengler, Heiner Bubb, Hermann Winner, and Ralph Bruder. 2014. Towards cooperative guidance and control of highly automated vehicles: H-Mode and Conduct-by-Wire. Ergonomics 57: 343-360. https://doi.org/10.1080/00140139.2013.869355</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>