<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Pedestrian Pace-Maker Light for Affecting Walking Speed</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hiroki Kawada</string-name>
          <email>kawada.hiroki@image.iit.tsukuba.ac.jp</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hidehiko Shishido</string-name>
          <email>shishido.hidehiko@image.iit.tsukuba.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yoshinari Kameda</string-name>
          <email>kameda@ccs.tsukuba.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Computational Sciences, University of Tsukuba</institution>
          ,
          <addr-line>1-1-1 Tennoudai, Tsukuba, Ibaraki, 305-8573</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Currently at Soka University</institution>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Master's program in Intelligent and Mechanical Interaction Systems, University of Tsukuba</institution>
          ,
          <addr-line>1-1-1 Tennoudai, Tsukuba, Ibaraki, 305-8573</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <abstract>
<p>We propose a new method of presenting virtual objects of low attention that can affect the walking speed of pedestrians. We call the proposed virtual objects of low attention a Pedestrian Pace-maker Light, hereafter referred to as PPML. The advantage of PPML is that pedestrians can keep clear visibility of the frontal area for safety in situations where PPML is presented in their view through AR glasses. PPML is a set of multiple lighting objects flowing in the direction of travel. Pedestrians perceive a sense of self-motion in the direction of travel through the vection effect caused by the PPML. An experiment was conducted to investigate whether the proposed PPML affects the walking speed of pedestrians. We experimented with six patterns of PPML and confirmed that it can affect the walking speed of pedestrians.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Pedestrian assistance [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1,2,3</xref>
        ] using augmented reality [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] is a promising technology for the near future. Augmented reality is a technology that augments the real world by adding virtual objects. Among the research on pedestrian assistance using augmented reality, there are studies on navigation [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5,6,7</xref>
        ] and notification of surroundings [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8,9,10</xref>
        ].
      </p>
      <p>
        Speed is an important factor in walking. If an intelligent and gentle user interface can affect walking speed, a smooth flow of pedestrians could be achieved in our society. If it can slow the walking speed down, it could improve the safety of pedestrians. Research has been done on pedestrian assistance using augmented reality to affect walking speed and direction [
        <xref ref-type="bibr" rid="ref11 ref12">11,12,13</xref>
        ]. We have to be aware of the importance of keeping clear visibility of the frontal area for safety while walking. Virtual objects of low attention should be used when presenting virtual objects in an augmented reality intelligent assistance system. Therefore, a new method that affects the walking speed of pedestrians by presenting virtual objects of low attention is awaited.
      </p>
      <p>In the case of augmented reality intelligent assistance systems, the effect of virtual objects and the clear visibility of the frontal area of the pedestrian should be achieved simultaneously. The presented virtual objects should therefore not interfere with the visibility of the frontal area, for safety while walking. Note that humans cannot gaze at two objects simultaneously [14]. The virtual objects placed in the pedestrian's view should attract low attention and keep the frontal area clearly visible, yet the attention they attract should be sufficient to influence walking.</p>
      <p>We propose a new method of presenting virtual objects of low attention that can affect the walking speed of pedestrians. We call the proposed virtual objects of low attention a Pedestrian Pace-maker Light, hereafter referred to as PPML. The advantage of PPML is that pedestrians can keep clear visibility of the frontal area for safety in situations where PPML is presented in their view through AR glasses. PPML is a set of multiple lighting objects flowing in the direction of travel. Pedestrians perceive a sense of self-motion in the direction of travel through the vection effect produced by the PPML. The walking speed of pedestrians can be affected by adjusting the speed of PPML.</p>
      <p>We compose the PPML with a set of multiple
tiny cubic virtual objects. The PPML does not
interfere with the visibility of the front area of
pedestrians. Pedestrians can easily avoid
obstacles and check their surroundings even when
the PPML is presented.</p>
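      <p>As a rough illustration, the flow of PPML can be modeled as a small set of markers that advance along the travel axis at the PPML speed and wrap around once they pass the end of the strip. The following is a minimal sketch; the marker count, spacing, and wrap rule are illustrative assumptions, not the parameters of the actual system.</p>

```python
# Minimal sketch of PPML-style flowing objects: a strip of small markers
# that advance along the travel axis and recycle once they pass the end
# of the strip. Marker count and spacing are illustrative assumptions.

def step_ppml(positions, ppml_speed, dt, spacing, count):
    """Advance each marker by ppml_speed * dt (meters) and wrap markers
    that moved past the strip length back to the start."""
    length = count * spacing
    return [(p + ppml_speed * dt) % length for p in positions]

# Six markers 1 m apart, flowing at 66 m/min (1.1 m/s), one frame at 120 Hz.
positions = [i * 1.0 for i in range(6)]
new_positions = step_ppml(positions, 66 / 60.0, 1.0 / 120.0, 1.0, 6)
```

      <p>Repeating this update every frame produces the continuous flow in the direction of travel that drives the vection effect.</p>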
      <p>An experiment was conducted to investigate whether the proposed PPML method affects walking speed. We used two shape-and-color combinations of PPML in the experiment: a vertical, green cuboid and a horizontal, white cuboid. In addition, we set three speeds of PPML: slow, medium, and fast. We compared the performance of these six types of PPML and present the results and discussion.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>When using augmented reality to present virtual
objects, the visibility of the front area of
pedestrians should be kept clear, and the virtual
objects should not attract too much attention.
These two factors are important for pedestrians to
walk safely. When presenting virtual objects,
there exists an occlusion problem in which virtual
objects overlap with the real world [15].</p>
      <p>
        There are studies in which virtual objects affect walking speed and direction, but the visibility of the frontal area for safety was not discussed [
        <xref ref-type="bibr" rid="ref11 ref12">11,12,13</xref>
        ]. In the study by Yoshikawa et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], a rightward-moving striped vector field was displayed on the floor, and pedestrians walking on it were guided in the right direction. In the study by Lee et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], the walking speed of pedestrians was reduced by placing an avatar that prevents pedestrians from walking in the direction of travel. In the study by Guinet et al. [13], the walking speed of pedestrians was increased and maintained by flowing virtual objects in the direction of travel and making the pedestrians follow them. These studies effectively affect walking speed and direction by presenting virtual objects. However, presenting virtual objects of high attention makes it difficult to keep clear visibility of the frontal area.
      </p>
      <p>There is an approach to reduce the attention to
virtual objects. In the study of navigation by
Tamura et al. [16], they propose a method called
"Active Patterns." This method uses multiple
objects flowing in the guiding direction. In their
experiment, the gaze information of pedestrians
was analyzed, and a questionnaire evaluation of
the understandability of the navigation was
conducted. As a result, it was found that both
navigational clarity and walking safety can be
achieved.</p>
      <p>Our idea of PPML is inspired by the Pace-maker
light for cars. The Pace-maker light has been
confirmed to affect the driving speed of cars and
can potentially improve traffic flow [17,18,19]. In
the study by Igaki et al. [17], simulations were
conducted in an environment with Pace-maker
light placed on the roadway, and it was confirmed
that Pace-maker light contributes to reducing the
number of traffic jams. In the study by Yanagihara
et al. [18], it is confirmed by simulations that cars
actively follow the flow when Pace-maker light is
placed on the roadway. In the study by Endo et al.
[19], it was confirmed that the Pace-maker light
reduces the overall traffic jam by about 20% on an
actual highway.</p>
    </sec>
    <sec id="sec-3">
      <title>3. PPML</title>
    </sec>
    <sec id="sec-4">
      <title>3.1. Deployment</title>
      <p>We propose PPML as a new method that both affects the walking speed of pedestrians and keeps the frontal area of pedestrians clearly visible. To manage these two factors, a set of multiple tiny cubic virtual objects flowing in the direction of travel is selected as the PPML. Pedestrians can keep clear visibility of the frontal area because the virtual objects are too small to occlude the real-world objects in their view. In addition, by presenting the virtual objects on the sides of the frontal area, pedestrians can recognize PPML in their peripheral vision and perceive a sense of self-motion. Figure 1 shows a snapshot of a vertical, green PPML flowing in the direction of travel.</p>
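      <p>The claim that the tiny cubes hardly occlude the scene can be checked with simple angular-size arithmetic. The sketch below computes the visual angle subtended by a cube; the cube side and viewing distance are illustrative assumptions, not values from the paper.</p>

```python
import math

# Visual angle (degrees) subtended by an object of a given size at a
# given distance. A small cube covers only a tiny fraction of the
# display's roughly 40 deg x 30 deg field of view, so the real scene
# behind it stays visible. Size and distance values are assumptions.

def angular_size_deg(side_m, distance_m):
    return math.degrees(2 * math.atan(side_m / (2 * distance_m)))

# A 5 cm cube seen at 2 m subtends only about 1.4 degrees.
small = angular_size_deg(0.05, 2.0)
```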
      <p>The reasons for this design of the PPML are as follows. The human visual field is divided into two regions: central vision and peripheral vision [20]. The central visual field is used to observe objects and targets directly and is superior at recognizing detailed information. The peripheral visual field, on the other hand, extends outside the central visual field and is superior at recognizing movements and changes over a wide area. Considering these characteristics, we reserve the central vision for checking the walking environment. In peripheral vision, pedestrians can perceive a sense of self-motion in the direction of travel from the PPML, because recognizing moving virtual objects in the peripheral vision maximizes the vection effect [21].</p>
    </sec>
    <sec id="sec-5">
      <title>3.2. Shape and color</title>
      <p>As for the shapes of the virtual objects that
compose the PPML, pedestrians may pay more
attention to the PPML if its shape is complex.
Therefore, we adopt a simple shape for the PPML.
A study [16] exists that investigated the effects of
three simple shapes, a sphere, a cuboid, and a
capsule, by flowing them in the direction of travel.
This study confirmed no significant difference in
the effects of the three. Therefore, the cuboid is
adopted as the simple shape in this study.</p>
      <p>Two kinds of colors are used for the objects that
compose the PPML. One is green, which has a
relaxing and reassuring effect on pedestrians [22].
The other is white, which is neutral for ordinary
urban scenes. In this study, we created four types
of PPML with different shapes and colors, as
shown in Figures 2 to 5.</p>
      <p>• Horizontal white PPML (Figure 2)
• Vertical white PPML (Figure 3)
• Horizontal green PPML (Figure 4)
• Vertical green PPML (Figure 5)</p>
    </sec>
    <sec id="sec-6">
      <title>4. System overview</title>
    </sec>
    <sec id="sec-7">
      <title>4.1. Development</title>
      <p>We implemented the PPML system to affect the
walking speed of pedestrians in the following
environment.</p>
      <sec id="sec-7-1">
        <title>AR glass device: Magic Leap 1</title>
      </sec>
      <sec id="sec-7-2">
        <title>Unity software: version 2020.3.42f1</title>
        <p>Lumin SDK: 0.25.0</p>
        <p>Lumin OS: 0.98.35</p>
        <p>Magic Leap 1 [23] is an optical see-through head-mounted display that can be used while walking. It has a refresh rate of 120 Hz and a viewing angle of 30° vertically and 40° horizontally. The system is developed using Unity [24], and the application program runs in standalone mode on Magic Leap 1. Figure 6 shows a snapshot of Magic Leap 1.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>3.3. Speed</title>
      <p>We plan to affect the walking speed of pedestrians by adjusting the speed of the PPML. When we plan to slow down the walking speed of pedestrians, we present PPML slower than the walking speed. Similarly, if we plan to increase the walking speed of pedestrians, we present PPML faster than the walking speed.</p>
      <p>The difference from the normal walking speed of pedestrians is important in selecting the speed of the PPML. Here, three different speeds of PPML are chosen: slow, medium, and fast. Because the participants in the experiment were males in their 20s, the medium speed was set based on their average walking speed. The speed values are set as follows.</p>
      <sec id="sec-8-1">
        <title>Slow speed: 44 m/min</title>
      </sec>
      <sec id="sec-8-2">
        <title>Medium speed: 66 m/min</title>
      </sec>
      <sec id="sec-8-3">
        <title>Fast speed: 100 m/min</title>
      </sec>
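      <p>The three PPML speeds above can be converted to m/s and selected relative to the walking speed, following the rule described earlier (PPML slower than the walking speed to decelerate, faster to accelerate). The sketch below is illustrative: the selection logic is our reading of the text, and the example walking speed is an assumption.</p>

```python
# The three PPML speeds used in the experiment, in m/min, converted
# to m/s. The selection rule (PPML slower than the walking speed to
# decelerate, faster to accelerate) follows the description in the
# text; the example walking speed is an illustrative assumption.

PPML_SPEEDS_M_PER_MIN = {"slow": 44, "medium": 66, "fast": 100}

def to_m_per_s(m_per_min):
    return m_per_min / 60.0

def pick_ppml_speed(walking_m_per_s, goal):
    """Pick a PPML speed (m/s) below or above the walking speed."""
    speeds = sorted(to_m_per_s(v) for v in PPML_SPEEDS_M_PER_MIN.values())
    if goal == "decelerate":
        slower = [s for s in speeds if s < walking_m_per_s]
        return slower[0] if slower else speeds[0]
    faster = [s for s in speeds if s > walking_m_per_s]
    return faster[-1] if faster else speeds[-1]
```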
    </sec>
    <sec id="sec-9">
      <title>4.2. Implementation</title>
      <p>We initially prepared four types of PPML with different shapes and colors, as shown in Figures 2 to 5. Before the main experiment, we conducted a preliminary experiment to select two of the types, in order to avoid a long experimental procedure that might yield unexpected results due to participant fatigue and an unconscious attitude toward the experiment.</p>
      <p>In the preliminary experiment, we evaluated the effects of the four PPML types on walking speed by presenting the slow-speed PPML. After the experience, we conducted questionnaires and interviews. The results confirmed that the horizontal, white PPML shown in Figure 2 has the lowest slowing-down effect, and the vertical, green PPML shown in Figure 5 has the highest. Therefore, we selected these two for the main experiment.</p>
      <p>As for the speed of PPML, we chose the three different speeds described in section 3.3. Thus, the following six PPMLs are used for the main experiment; hereafter, they are referred to as follows.</p>
      <p>• Slow-H-White (SHW): Slow speed, Horizontal and White PPML
• Medium-H-White (MHW): Medium speed, Horizontal and White PPML
• Fast-H-White (FHW): Fast speed, Horizontal and White PPML
• Slow-V-Green (SVG): Slow speed, Vertical and Green PPML
• Medium-V-Green (MVG): Medium speed, Vertical and Green PPML
• Fast-V-Green (FVG): Fast speed, Vertical and Green PPML</p>
    </sec>
    <sec id="sec-10">
      <title>5. Experiment</title>
    </sec>
    <sec id="sec-11">
      <title>5.1. Procedure</title>
      <p>The main experiment was conducted to
investigate whether the proposed PPML method
affects walking speed or not.</p>
      <p>All participants were adult men, so that one average walking speed could be adopted throughout the experiment [25]. We experimented with 12 male participants in their 20s who were familiar with the presentation of virtual objects using an optical see-through head-mounted display. All the experimental procedures followed our university's rules and were certified by the ethics review committee.</p>
      <p>We explained the PPML to the participants beforehand and demonstrated the proposed method to familiarize them with it. Then, each participant experienced the six PPMLs in different presentation orders. Each session included a short walk of around 12 meters followed by the set of questionnaires shown in Table 1. After all questionnaires were completed, we conducted interviews to obtain the subjective opinions of the participants.</p>
      <p>The experiment was conducted in the corridor environment shown in Figure 7. We measured the time of a 10-meter walk along the corridor. The difference between the times taken to pass the 2-meter point and the 12-meter point is used as the walking time. As a comparison, we also measured the time required to walk 10 meters without the presentation of the PPML. The questionnaire items are shown in Table 1, and the participants rated the three items on a 7-point scale. Note that the actual questionnaire form is written in Japanese, which is the native language of the participants.</p>
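      <p>The timing procedure above can be sketched as follows: the measured walking time is simply the difference between the instants the pedestrian passes the 2-meter and 12-meter points. The timestamps in the example are illustrative, not measured data.</p>

```python
# Walking time over the measured 10 m section: difference between the
# instants the pedestrian passes the 2 m and 12 m points. Timestamps
# in the example are illustrative, not measured data.

def walking_time_s(t_at_2m_s, t_at_12m_s):
    return t_at_12m_s - t_at_2m_s

def walking_speed_m_per_min(t_at_2m_s, t_at_12m_s):
    # 10 m are covered between the two timing points.
    return 10.0 / walking_time_s(t_at_2m_s, t_at_12m_s) * 60.0

# A pedestrian passing 2 m at 1.8 s and 12 m at 10.8 s took 9 s for 10 m.
t_walk = walking_time_s(1.8, 10.8)
```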
      <p>We calculated the mean and standard deviation of the obtained data and show them in Figures 8 to 10. The vertical axis of Figure 8 shows the time required for a 10-meter walk, and the vertical axes of Figures 9 and 10 show the 7-level evaluation. The horizontal axis indicates the score without presenting PPML and those of the six PPMLs. A one-sided t-test is performed on the data. In the graphs, [*] indicates a significant difference at the 5% level, and [**] indicates a significant difference at the 1% level.</p>
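      <p>The statistical treatment above can be sketched as follows. This is a generic pooled-variance two-sample t statistic together with the figures' star convention ([*] for the 5% level, [**] for the 1% level); the pooled-variance form and any data fed to it are illustrative assumptions, not the authors' exact analysis.</p>

```python
from statistics import mean, stdev

# Generic pooled-variance two-sample t statistic, plus the significance
# markers used in the figures: "*" at the 5% level, "**" at the 1% level.
# The pooled-variance form and any sample data are illustrative assumptions.

def pooled_t(a, b):
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1.0 / na + 1.0 / nb)) ** 0.5

def stars(p_value):
    return "**" if p_value < 0.01 else "*" if p_value < 0.05 else ""
```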
      <p>Figure 8 shows a significant difference at the 1% level between the fast and medium speeds for the same vertical green PPML. The same holds between the fast and medium speeds for the same horizontal white PPML. In addition, there is a significant difference at the 5% level between the fast and slow speeds for the same vertical green PPML, and likewise between the fast and slow speeds for the same horizontal white PPML. It is confirmed that PPML affects the walking speed of pedestrians. On the other hand, there is no significant difference with respect to the shape and color of PPML.</p>
      <p>Figure 9 shows a significant difference at the 1% level for all the cases of PPML speed. It is confirmed that the slower the PPML speed is, the slower the perceived speed of PPML is. As for the shape and color of PPML, a significant difference at the 5% level is found only between horizontal white PPML and vertical green PPML at the same slow speed. This suggests that the perceived speed of horizontal white PPML may be slower than that of vertical green PPML.</p>
      <p>Figure 10 shows a significant difference at the 5% level between horizontal white PPML and vertical green PPML in terms of the visibility of PPML. It is confirmed that the visibility of vertical green PPML is higher than that of horizontal white PPML.</p>
      <p>Figure 11 shows that there is no significant difference in the evaluation of walking difficulty when PPML is presented; all conditions are evaluated equally. In addition, it was confirmed that the PPML gave pedestrians no difficulty in walking.</p>
      <p>From Figure 8, it is confirmed that PPML affects
the walking speed of pedestrians. In addition, it is
confirmed that there was no significant difference
in the effect on the walking speed between
horizontal white PPML and vertical green PPML.
On the other hand, in the interviews, some
participants said that vertical green PPML had a
more substantial effect on the walking speed of
pedestrians than horizontal white PPML.
Therefore, further experimentation would be
needed to investigate how the PPML can affect
walking speed regardless of its shape and color.</p>
      <p>From Figure 9, it is confirmed that the slower the PPML speed is, the slower the perceived speed of PPML is. In addition, it is confirmed that the perceived speed of horizontal white PPML may be slower than that of vertical green PPML. In the interviews, some participants said that they perceived vertical green PPML to be faster than horizontal white PPML. Therefore, there were no discrepancies between the interviews and the experimental results.</p>
      <p>From Figure 10, it is confirmed that the visibility of vertical green PPML is higher than that of horizontal white PPML. In the interviews, some participants said that horizontal white PPML is harder to see and less conspicuous than vertical green PPML. Therefore, there were no discrepancies between the interviews and the experimental results.</p>
      <p>From Figure 11, it is confirmed that participants
evaluated the perceived walking difficulty equally
when presented with vertical green PPML and
horizontal white PPML. In the interviews, some
of the participants said that they did not feel any
difficulty walking with both PPMLs. Therefore, it
is indicated that the proposed method may be less
likely to give pedestrians a sense of difficulty in
walking.</p>
    </sec>
    <sec id="sec-12">
      <title>6. Conclusion</title>
      <p>We proposed the PPML as a new method of
presenting virtual objects of low attention. This
method can both affect the walking speed of
pedestrians and keep the clear visibility of the
frontal area of pedestrians.</p>
      <p>The experiment was conducted to investigate
whether the proposed PPML method affects
walking speed or not. From the experimental
results, it is confirmed that PPML affects the
walking speed of pedestrians. In addition, there is
no significant difference in the effect on the
walking speed of pedestrians between horizontal
white PPML and vertical green PPML. It is also
confirmed that the proposed method is less likely
to cause difficulty in walking.</p>
      <p>We think the effect of PPML on walking speed change should be investigated further. A test of clear visibility for safety should also be done to verify that our proposed approach is effective in practical situations.</p>
      <p>Part of this research is supported by KAKENHI 21H03476.</p>
    </sec>
    <sec id="sec-13">
      <title>7. References</title>
      <p>[12] … virtual humans in AR, in: IEEE transactions on visualization and computer graphics, 24(4), 2018, pp. 1525-1534. doi:10.1109/TVCG.2018.2794074.</p>
      <p>[13] A. L. Guinet, G. Bouyer, S. Otmane, and E. Desailly, Towards an AR game for walking rehabilitation: preliminary study of the impact of augmented feedback modalities on walking speed, in: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2020, pp. 264-268. doi:10.1109/ISMAR-Adjunct51615.2020.00075.</p>
      <p>[14] J. Palmer, Attentional limits on the perception and memory of visual information, in: Journal of Experimental Psychology: Human Perception and Performance, 16(2), 1990, pp. 332-350. doi:10.1037/0096-1523.16.2.332.</p>
      <p>[15] M. M. Wloka, and B. G. Anderson, Resolving occlusion in augmented reality, in: Proceedings of the 1995 symposium on Interactive 3D graphics, 1995, pp. 5-12. doi:10.1145/199404.199405.</p>
      <p>[16] Y. Tamura, H. Shishido, and Y. Kameda, Evaluation of active patterns on direction instruction for pedestrians, 2022. URL: https://ceur-ws.org/Vol-3297/short4.pdf.</p>
      <p>[17] T. Igaki, and T. Uchida, Micro traffic simulation to examine congestion-control measures using pace maker lights, in: Memoirs of the Faculty of Engineering Osaka City University, 59(2018), pp. 7-17. doi:10.24544/ocu.20190823-003.</p>
      <p>[18] M. Yanagihara, K. Hiraki, and H. Oneyama, An analysis of effects on traffic flow considering the difference of following behaviors on moving light guide system, in: JSTE Journal of Traffic Engineering, 6(2), 2020, pp. A_55-A_62. doi:10.14954/jste.6.2_A_55.</p>
      <p>[19] M. Endo, H. Nakagawa, M. Fukase, and D. Hashimoto, The measures against traffic congestion in Tokyo Wan Aqua-Line EXPWY, in: Proceedings of the 34th Traffic Engineering Research Paper, 2014, pp. 255-261.</p>
      <p>[20] H. Strasburger, I. Rentschler, and M. Jüttner, Peripheral vision and pattern recognition: A review, in: Journal of Vision, 11(13), 2011. doi:10.1167/11.5.13.</p>
      <p>[21] J. Dichgans, Visual-vestibular interactions: Effects on self-motion perception and postural control, in: Handbook of Sensory Physiology, 8(1978), pp. 755-804.</p>
      <p>[22] N. Kaya, and H. H. Epps, Relationship between color and emotion: A study of college students, in: College Student Journal, 38(3), 2004, pp. 396-405.</p>
      <p>[23] Magic Leap, Inc., Magic Leap 1, viewed 20 June 2023. URL: https://www.magicleap.com/ja-jp/magic-leap-1.</p>
      <p>[24] Unity Software, Inc., Unity, viewed 20 June 2023. URL: https://unity.com/.</p>
      <p>[25] M. M. Samson, A. Crowe, P. L. De Vreede, J. A. Dessens, S. A. Duursma, and H. J. Verhaar, Differences in gait parameters at a preferred walking speed in healthy subjects due to age, height and body weight, in: Aging Clinical and Experimental Research, 13(2001), pp. 16-21. doi:10.1007/BF03351489.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sundareswaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Behringer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>McGee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tam</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Zahorik</surname>
          </string-name>
          ,
          <article-title>3D audio augmented reality: Implementation and experiments</article-title>
          ,
          <source>in: The Second IEEE and ACM International Symposium on Mixed and Augmented Reality</source>
          ,
          <year>2003</year>
          . Proceedings,
          <year>2003</year>
          , pp.
          <fpage>296</fpage>
          -
          <lpage>297</lpage>
          . doi:
          <volume>10</volume>
          .1109/ISMAR.
          <year>2003</year>
          .
          <volume>1240728</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Norouzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schubert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Erickson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bailenson</surname>
          </string-name>
          , and G. Welch,
          <article-title>Walking your virtual dog: Analysis of awareness and proxemics with simulated support animals in augmented reality</article-title>
          ,
          <source>in: 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>157</fpage>
          -
          <lpage>168</lpage>
          . doi:
          <volume>10</volume>
          .1109/ISMAR.
          <year>2019</year>
          .
          <volume>000</volume>
          -
          <fpage>8</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mulloni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Seichter</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Schmalstieg</surname>
          </string-name>
          ,
          <article-title>User experiences with augmented reality aided navigation on phones</article-title>
          ,
          <source>in: 2011 10th IEEE international symposium on mixed and augmented reality</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>229</fpage>
          -
          <lpage>230</lpage>
          . doi:
          <volume>10</volume>
          .1109/ISMAR.
          <year>2011</year>
          .
          <volume>6092390</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Carmigniani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Furht</surname>
          </string-name>
          ,
          <article-title>Augmented reality: An overview</article-title>
          , in: Handbook of augmented reality,
          <year>2011</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>46</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-1-
          <fpage>4614</fpage>
          -0064-
          <issue>6</issue>
          _
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>M. L. Wu</surname>
            , and
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Popescu</surname>
          </string-name>
          ,
          <string-name>
            <surname>Efficient</surname>
            <given-names>VR</given-names>
          </string-name>
          and
          <article-title>AR navigation through multiperspective occlusion management</article-title>
          ,
          <source>in: IEEE transactions on visualization and computer graphics</source>
          ,
          <volume>24</volume>
          (
          <issue>12</issue>
          ),
          <year>2017</year>
          , pp.
          <fpage>3069</fpage>
          -
          <lpage>3080</lpage>
          . doi:
          <volume>10</volume>
          .1109/TVCG.
          <year>2017</year>
          .
          <volume>2778249</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Gerstweiler</surname>
          </string-name>
          , E. Vonach, and H. Kaufmann,
          <article-title>HyMoTrack: A mobile AR navigation system for complex indoor environments</article-title>
          , in: Sensors,
          <volume>16</volume>
          (
          <issue>1</issue>
          ),
          <fpage>17</fpage>
          ,
          <year>2015</year>
          . doi:10.3390/s16010017.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <article-title>SafeAR: AR alert system assisting obstacle avoidance for pedestrians</article-title>
          ,
          <source>in: 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>81</fpage>
          -
          <lpage>82</lpage>
          . doi:10.1109/ISMAR-Adjunct.2019.00035.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <article-title>SafeXR: Alerting walking persons to obstacles in mobile XR environments</article-title>
          ,
          <source>in: The Visual Computer</source>
          ,
          <volume>36</volume>
          (
          <year>2020</year>
          ), pp.
          <fpage>2065</fpage>
          -
          <lpage>2077</lpage>
          . doi:10.1007/s00371-020-01907-4.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Jung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nanda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Gruenefeld</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Stratmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Heuten</surname>
          </string-name>
          ,
          <article-title>Ensuring safety in augmented reality from trade-off between immersion and situation awareness</article-title>
          ,
          <source>in: 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>79</lpage>
          . doi:10.1109/ISMAR.2018.00032.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Palmisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Allison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Schira</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Barry</surname>
          </string-name>
          ,
          <article-title>Future challenges for vection research: Definitions, functional significance, measures, and neural bases</article-title>
          ,
          <source>in: Frontiers in Psychology</source>
          ,
          <volume>6</volume>
          (
          <year>2015</year>
          ). doi:10.3389/fpsyg.2015.00193.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yoshikawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hachisu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fukushima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Furukawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kajimoto</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Nojima</surname>
          </string-name>
          ,
          <article-title>Studies of vection field II: A method for generating smooth motion pattern</article-title>
          ,
          <source>in: Proceedings of the International Working Conference on Advanced Visual Interfaces</source>
          ,
          <year>2012</year>
          , pp.
          <fpage>705</fpage>
          -
          <lpage>708</lpage>
          . doi:10.1145/2254556.2254689.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bruder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Höllerer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Welch</surname>
          </string-name>
          ,
          <article-title>Effects of unaugmented periphery and vibrotactile feedback on proxemics with</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>