<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Environments</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Klen Čopič Pucihar</string-name>
          <email>klen.copic@famnit.upr.si</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marc Anthony Berends</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jordan Aiko Deja</string-name>
          <email>jordan.deja@famnit.upr.si</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nuwan T Attygalle</string-name>
          <email>nuwan.attygalle@famnit.upr.si</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matjaž Kljun</string-name>
          <email>matjaz.kljun@upr.si</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>De La Salle University Manila</institution>
          ,
          <country country="PH">Philippines</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Information Studies</institution>
          ,
          <addr-line>Novo Mesto</addr-line>
          ,
          <country country="SI">Slovenia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Primorska, Faculty of Mathematics</institution>
          ,
          <addr-line>Natural Sciences and Information Technologies, Koper</addr-line>
          ,
          <country country="SI">Slovenia</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Depth of Field (DoF) has been used in 3D software to imitate realistic vision and to improve immersion and depth perception on 2D displays. However, traditional methods of introducing DoF use a fixed focus point, usually located in the center of the screen. This may lead to unwanted blur that could affect user immersion and game satisfaction. In this paper, we present GazeHD, a dynamic DoF system that uses eye tracking to keep the position of the user's gaze in focus whilst blurring other parts of the screen based on the geometry of the 3D environment. We evaluated dynamic DoF in a user study (n = 5) comprising a tunnel test and a 3D game demonstration. The results show no evidence that DoF improves depth perception. This was true for both mouse controlled and eye tracking controlled DoF. However, users perceived higher immersion, which also persisted in complex 3D scenes such as high fidelity first person video games.</p>
      </abstract>
      <kwd-group>
        <kwd>depth of field</kwd>
        <kwd>eye tracking</kwd>
        <kwd>tunnel test</kwd>
        <kwd>unity</kwd>
        <kwd>3D game</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>To imitate realistic vision in 3D games, software can simulate the depth of field
(DoF) effect. It is applied to the scene camera, generating imagery in which objects in the scene
are either blurred or sharp. The amount of blur depends on the properties of the camera,
the focus point, and the 3D geometry of the scene (i.e., the distance between the object and the
camera). This kind of visual distortion is intrinsic to our vision system, so its introduction to 3D
graphics may lead to higher immersion when experiencing such virtual environments.</p>
      <p>
        However, the standard implementation of DoF commonly uses a fixed focal point
positioned in the center of the screen. In this way, objects in the center of the screen are always
in focus as the user moves through the 3D environment. The correct focal length is calculated
from the distance between the observer (i.e., the scene camera) and the scene center point (i.e.,
the intersection point between the camera raycast and the surface in front of the camera) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
However, if a user wants to look at content that is away from the screen centre, such content
may be obscured by blur. This potentially breaks the immersion of the experience, as the
image does not account for where the user is looking, and thus fails to fully imitate
realistic human vision. Besides breaking the illusion, this may also have a negative effect on
depth perception.
      </p>
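<p>To make the fixed-focus computation concrete, the blur of an out-of-focus object can be sketched with the standard thin-lens circle-of-confusion formula. This is an illustrative model only; the paper does not specify which blur formula its Unity implementation uses, and all parameter values below are assumed.</p>

```python
def coc_diameter(obj_dist, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter for an object at obj_dist
    when the camera is focused at focus_dist (all values in metres).
    The focus_dist would come from the raycast described above."""
    return abs(aperture * focal_len * (obj_dist - focus_dist)
               / (obj_dist * (focus_dist - focal_len)))

# An object at the focus distance is rendered perfectly sharp ...
assert coc_diameter(2.0, 2.0, focal_len=0.05, aperture=0.025) == 0.0
# ... and blur grows the further the object is from the focal plane.
assert coc_diameter(4.0, 2.0, 0.05, 0.025) > coc_diameter(3.0, 2.0, 0.05, 0.025)
```

<p>The resulting diameter would then drive the strength of the post-processing blur applied to each pixel.</p>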
      <p>
        Several studies have explored depth perception in 3D virtual environments. In a study
by Naceri et al., the authors examined users’ depth perception in 3D virtual environments [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
They compared two different Virtual Reality (VR) systems: head-mounted devices and
immersive wide-screen displays. The comparison was done by presenting a virtual environment
containing different objects and asking the participants to compare their depth. The objects
shown were placed at different depths; however, their size was modified so that they always
appeared to be of the same size. To achieve this, the size of each object was changed according to
its depth position. This was done to eliminate the apparent size effect that would otherwise serve as a
depth cue. The results showed significant differences between the two devices and highlighted
the distance misestimation phenomenon for head-mounted devices. Other studies have explored
depth perception and immersion in the scope of stereoscopic 3D rendering [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and 3D controlled
DoF in stereoscopic displays [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Another study looked at the effect of DoF on immersion in 3D
games [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Advances in low cost gaze-tracking technologies, such as the Tobii Eye Tracker 5, make it possible
to track human gaze at an affordable cost in close to real time. This makes it possible to build a
dynamic DoF system in which the focus point moves with the user’s gaze. In a study conducted by
Mauderer et al., the authors explored dynamic DoF and showed that it can lead to an increase in
perceived realism and can contribute to the perception of ordinal depth. Furthermore, it
also improved the perceived distance between objects, although the authors found this to be limited in its
accuracy [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
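<p>A gaze-contingent focus update of this kind might be sketched as follows: cast a ray through the current gaze point, take the distance of the surface it hits, and smooth the camera's focus distance toward it (gaze data is noisy, so some filtering is typically needed). The function name and the smoothing constant are our own illustrative choices, not taken from the cited work.</p>

```python
def update_focus(prev_focus, gaze_hit_dist, smoothing=0.8):
    """Exponentially smooth the focus distance toward the distance of the
    surface hit by a ray cast through the gaze point. A higher
    `smoothing` value (in [0, 1)) gives more inertia and less jitter."""
    return smoothing * prev_focus + (1.0 - smoothing) * gaze_hit_dist

# Starting focused at 1 m, the focus converges to a surface seen at 5 m.
focus = 1.0
for _ in range(50):
    focus = update_focus(focus, 5.0)
assert abs(focus - 5.0) < 0.01
```

<p>In a game engine this update would run once per frame, with the smoothed distance fed into the DoF post-processing effect.</p>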
      <p>In this paper, we attempt to verify this previous result on dynamic DoF and extend it by
exploring whether such an effect can also be observed in situations where 3D objects are used and
where the user is experiencing complex 3D scenes, such as high fidelity first person video
games. Within this context, we want to find out: (1) does eye tracking controlled DoF improve depth
perception accuracy, and (2) is eye tracking controlled DoF preferred and does it offer higher immersion
when compared to fixed and no DoF systems? To answer these questions, we designed and ran a
user study with 5 participants comprising two different tasks: a Tunnel Test and a 3D game called
Spaceship Demo. The method, results and discussion are provided in the sections hereafter.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Method</title>
      <p>In this section we explain the method followed in the user study, covering the apparatus, task
description, study design, study procedure, and data collection techniques.</p>
      <p>Apparatus: An application integrated with the Tobii 4C Eye Tracker was developed
using Unity. We used a display with a resolution of 1920 px × 1080 px, a size of 53 cm × 30 cm,
and a refresh rate of 60 Hz. The eye tracker scans and estimates the user’s
head position and gaze at a frequency of 90 Hz. Throughout the experiment, the user was sitting
at a desk where a mouse and keyboard were provided for interaction (see Figure 1).</p>
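<p>For reference, consumer eye trackers typically report gaze in normalized screen coordinates, which must be mapped to pixels (and, for analysis, to visual angle) using the display geometry above. The sketch below assumes a [0, 1] coordinate convention and a 60 cm viewing distance; neither is stated in the paper, and the actual Tobii API convention may differ.</p>

```python
import math

SCREEN_PX = (1920, 1080)   # display resolution (from the paper)
SCREEN_CM = (53.0, 30.0)   # physical display size (from the paper)

def gaze_to_px(norm_x, norm_y):
    """Map normalized gaze coordinates in [0, 1] to pixel coordinates."""
    return norm_x * SCREEN_PX[0], norm_y * SCREEN_PX[1]

def px_to_visual_angle(px, viewing_dist_cm=60.0):
    """Horizontal extent of `px` pixels in degrees of visual angle,
    at an assumed viewing distance (not stated in the paper)."""
    cm = px * SCREEN_CM[0] / SCREEN_PX[0]
    return math.degrees(2 * math.atan(cm / (2 * viewing_dist_cm)))

assert gaze_to_px(0.5, 0.5) == (960.0, 540.0)   # screen centre
# The full 53 cm screen width at 60 cm spans roughly 48 degrees.
assert 47 < px_to_visual_angle(1920) < 49
```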
      <p>Task and Study Design: We chose a within-subject design with two independent
variables: DoF mode and feedback. We compared three DoF modes: no DoF, mouse controlled
DoF where the focus point moved with the mouse pointer, and eye tracking controlled DoF where the focus
point moved with the gaze. With respect to feedback, we compared conditions with and without
feedback. The feedback was shown as a text popup indicating whether the user correctly completed
the task. Feedback was included in the study design in order to explore the learning effect:
we were interested in finding out whether users are capable of improving their performance when
feedback is provided.</p>
      <p>The dependent variables were score, which indicates how many times the user successfully
completed the task; total duration, which indicates the total amount of time the user spent on
the task; and questionnaire responses.</p>
      <p>
        We ran two tasks: the Tunnel Test and the Spaceship Demo. In the Tunnel Test, the goal was to
measure depth perception where the only depth cue is DoF. A 3D scene was generated showing
two spheres placed at different depths. The size of the spheres is scaled so that they appear to be of equal
size, forming symmetry inside a tunnel (see Figure 1 left). The user was then asked to indicate
which sphere was closer. This method has been previously used by [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]; however, within their
experimentation they did not use untextured 3D objects (e.g. spheres), but instead used 2D
surfaces with relatively complex textures.
      </p>
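<p>The depth-dependent scaling used to equalize the apparent size of the two spheres follows directly from perspective projection: apparent size is proportional to radius divided by depth, so the radius must grow linearly with depth. A minimal sketch (function name and reference values are ours):</p>

```python
def scale_for_equal_apparent_size(base_radius, ref_depth, depth):
    """Radius a sphere at `depth` needs in order to subtend the same
    visual angle as a sphere of base_radius at ref_depth
    (apparent size is proportional to radius / depth)."""
    return base_radius * depth / ref_depth

# A sphere twice as far away needs twice the radius ...
assert scale_for_equal_apparent_size(1.0, ref_depth=2.0, depth=4.0) == 2.0
# ... so the radius/depth ratio (apparent size) stays constant.
assert scale_for_equal_apparent_size(1.0, 2.0, 5.0) / 5.0 == 1.0 / 2.0
```

<p>With this scaling in place, DoF blur is the only cue left for judging which sphere is closer.</p>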
      <p>
        The Spaceship Demo was built upon an open source game [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. We modified the game to
enable all DoF modes. The game is a first person game controlled with a mouse and keyboard.
The players are tasked to navigate through a 3D environment, which helps them progress through a
fixed story line. The story lasts approximately 5 minutes, during which the player can navigate
and explore the virtual environment freely. In this task we only collected qualitative data. We
composed questionnaires based on methods used in the works of [
        <xref ref-type="bibr" rid="ref1 ref3 ref5">1, 3, 5</xref>
        ].
      </p>
      <p>Participants and Study Procedure: We recruited n = 5 university students as test subjects
via convenience sampling. The study started with a brief explanation of the study goals
and consent form approval. Participants were then seated in front of the computer. We then
conducted a 5 point eye tracking calibration, after which the first test (the Tunnel Test) started.
The participants were shown how to interact with the system, after which the data capture
started. In each DoF mode, the order of which was randomized and counterbalanced, the
user repeated the task 20 times. We varied the difficulty of the task, creating 4 levels: the higher
the level, the closer together the two objects. This in theory makes it more difficult to
figure out which object is closer. After completing the task, the user answered a short
questionnaire. Afterwards, the same process was repeated with feedback enabled. This meant
that the users were informed about the correctness of their answer after each task repetition.</p>
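<p>The trial generation described above (20 repetitions per mode across 4 difficulty levels, with the depth separation shrinking at higher levels and the near/far assignment randomized) could be sketched as follows. The concrete separations and depths are illustrative, since the paper does not report them.</p>

```python
import random

def make_trials(n_per_level=5, base_depth=10.0, max_sep=4.0, levels=4):
    """Generate (depth_a, depth_b) pairs for the Tunnel Test. Higher
    levels halve the depth separation, making the two spheres harder
    to tell apart; which side holds the nearer sphere is randomized."""
    trials = []
    for level in range(levels):
        sep = max_sep / (2 ** level)  # level 0: 4.0 m ... level 3: 0.5 m
        for _ in range(n_per_level):
            pair = (base_depth, base_depth + sep)
            trials.append(pair if random.random() < 0.5 else pair[::-1])
    random.shuffle(trials)
    return trials

trials = make_trials()
assert len(trials) == 20                      # 20 repetitions per DoF mode
# All four difficulty levels are represented by their separations.
assert sorted({abs(a - b) for a, b in trials}) == [0.5, 1.0, 2.0, 4.0]
```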
      <p>The final test was the Spaceship Demo test. The users played the game in each DoF mode, the
order of which was randomized and counterbalanced. After completing the test, the users filled
in a questionnaire.</p>
      <p>
        Data Collection: Throughout the Tunnel Test we collected task time and task completion score.
At the end of each condition we also collected questionnaire responses. We inquired about the
following topics: level of comfort, difficulty of estimating the distance of objects, level of immersion,
and difficulty of navigating the scene. We followed the metrics and scales used in the study of
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>The results of the Tunnel Test show that there is no significant learning effect in any of the conditions
(see Figure 2, top left and top middle graphs). This is true for both the no-feedback and feedback
conditions. When observing the results for total duration (see Figure 2, top right), we see that the users
performed the task faster in the no DoF condition compared to the mouse and eye tracking controlled
DoF conditions. The results for task performance (see Figure 2, bottom row) show that none
of the modes managed to consistently outperform random selection. Furthermore, there is
no clear distinction in quantitative performance between the three modes we compared. The
qualitative results, collected in the form of responses to the questionnaires, showed that users
found the most compelling depth in the eye tracking controlled DoF condition; however,
the difference is very small compared to the no DoF condition. Furthermore, the no DoF condition
was chosen as the most popular mode.</p>
      <p>In the Spaceship Demo, the mean rating for navigation of the 3D environment is highest
for mouse controlled DoF, followed closely by the no DoF condition. Eye tracking controlled DoF
has the highest mean ratings for questions 2 and 4, regarding viewing comfort and level
of immersion, respectively. The rankings of the conditions in the Spaceship Demo from best to
worst, according to participants, show that eye tracking controlled DoF was voted highest.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion and Conclusion</title>
      <p>In this research we explored the effects of eye tracking controlled DoF in 3D environments,
compared to manually (mouse) controlled DoF. By using eye tracking controlled DoF, we keep
the gaze point in focus, which in turn imitates real life vision. We designed an experiment that
measured both depth perception accuracy and subjective preference for different aspects of 3D
environments. We failed to find evidence that DoF improves depth perception. This was true for
both mouse controlled DoF and eye tracking controlled DoF. However, when considering user
preferences, our research shows that DoF can increase immersion. Furthermore, we show this is
also true in complex 3D scenes, such as high fidelity first person video games. However, it
is important to note that this study is limited by the number of participants, which prevented
us from running statistical tests. Therefore, these findings are preliminary in nature and should
be corroborated by extending the user base.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hillaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lécuyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cozot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Casiez</surname>
          </string-name>
          ,
          <article-title>Depth-of-field blur effects for first-person navigation in virtual environments</article-title>
          ,
          <source>in: Proc. of ACM VRST</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>203</fpage>
          -
          <lpage>206</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Naceri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chellali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Dionnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Toma</surname>
          </string-name>
          ,
          <article-title>Depth perception within virtual environments: Comparison between two display technologies</article-title>
          ,
          <source>International Journal On Advances in Intelligent Systems</source>
          <volume>3</volume>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>I. K.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Peek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Wünsche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutteroth</surname>
          </string-name>
          ,
          <article-title>Enhancing 3D applications using stereoscopic 3D and motion parallax</article-title>
          ,
          <source>in: Proc. of the AUIC</source>
          ,
          <year>2012</year>
          , pp.
          <fpage>59</fpage>
          -
          <lpage>68</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vinnikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Allison</surname>
          </string-name>
          ,
          <article-title>Gaze-contingent depth of field in realistic scenes: The user experience</article-title>
          ,
          <source>in: Proc. of ETRA</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>119</fpage>
          -
          <lpage>126</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hillaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lécuyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cozot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Casiez</surname>
          </string-name>
          ,
          <article-title>Using an eye-tracking system to improve camera motions and depth-of-field blur effects in virtual environments</article-title>
          ,
          <source>in: Proc. of IEEE VR, IEEE</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>47</fpage>
          -
          <lpage>50</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mauderer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Conte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Nacenta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vishwanath</surname>
          </string-name>
          ,
          <article-title>Depth perception with gaze-contingent depth of field</article-title>
          ,
          <source>in: Proc. of ACM CHI</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>217</fpage>
          -
          <lpage>226</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Iche</surname>
          </string-name>
          ,
          <article-title>The Spaceship Demo project using VFX Graph and High Definition Render Pipeline</article-title>
          ,
          <year>2022</year>
          . URL: https://blog.unity.com/technology/now-available-the-spaceship-demo-project-using-vfx-graph-and-high-definition-render.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>