<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Utilizing Gaze Data in Learning:</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Robert Moro</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria Bielikova</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Slovak University of Technology in Bratislava, Faculty of Informatics and Information Technologies</institution>
          ,
          <addr-line>Ilkovičova 2, 842 16 Bratislava</addr-line>
          ,
          <country country="SK">Slovakia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Although a lot of attention has been dedicated to improving the modeling of learners' knowledge within learning systems, recommendation, or personalization, less attention has been paid to improving the learning content itself and to supporting learning content creators. In addition, the complexity of learning systems requires the utilization of novel sources of implicit feedback, such as gaze data, in order to model learners' interactions in their entirety. In this poster paper, we present a framework for the collection of gaze data and its utilization in the learning systems environment. We focus on the analysis of reading patterns for the detection of problematic parts of the text and present the results of a preliminary evaluation in the web-based learning system ALEF.</p>
      </abstract>
      <kwd-group>
        <kwd>learning system</kwd>
        <kwd>personalization</kwd>
        <kwd>implicit feedback</kwd>
        <kwd>eye tracking</kwd>
        <kwd>reading patterns</kwd>
        <kwd>learning content comprehension</kwd>
        <kwd>ALEF</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Challenges of Using Gaze Data in Learning</title>
      <p>
        and read, which parts are easy to understand and which need clarification or rewriting.
This can also be achieved by analyzing gaze data; the existing body of work has
found it useful to distinguish between at least two forms of interaction with the text: reading and
scanning (or skimming) [
        <xref ref-type="bibr" rid="ref3 ref6">3, 6</xref>
        ]. Less attention has been given to detecting which parts of the
text are hard for the learners to comprehend [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Such detection would be especially useful to
learning content creators, because it has the potential to help them improve the
learning texts and, in turn, the learners' learning experience.
      </p>
      <p>Although eye tracking hardware is becoming more and more affordable, the
challenges associated with utilizing gaze data remain, namely:
─ Limited support for dynamic web content, which is now prevalent in modern
web-based learning systems, but complicates the analysis of learners’ fixations on
the areas of interest. Existing studies often limit their analysis to
static content that does not require scrolling and does not change over time.
─ Limited support for reading pattern detection, which focuses mainly on reading
vs. scanning, but does not identify reading comprehension problems;
identifying these problems is crucial for improving the learning content.</p>
    </sec>
    <sec id="sec-2">
      <title>Framework for Collection and Utilization of Gaze Data</title>
      <p>
        In order to address these challenges, we have developed a framework for the
collection of gaze data, extending the infrastructure described in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. It consists of
Gaze Monitor, a desktop client that communicates with the eye tracker and sends the
gaze data to the server as well as to the web browser, where the gaze data are
processed in real time by a reusable JavaScript component. The component provides:
─ Enrichment of the gaze data with the DOM element of the web page that corresponds
to the gaze coordinates; the element is identified by a unique XPath string, which
makes it possible to unambiguously determine whether the gaze coordinates fall within any
of the defined areas of interest (AOIs). The fact that the user looks at a certain web
page element triggers a gaze event that can be processed in real time.
─ Communication with the Gaze API, which enables retrieving the AOIs defined for a
web page (or application), as well as pre-defined gaze events that should be
triggered when their preconditions are met, such as when the user does not fixate on the
specified area of interest over a given period of time. It also enables the users to
retrieve gaze-based statistics for the areas of interest, e.g., the number of fixations,
dwell time, and most importantly, the identified reading patterns for the AOIs.
The main advantage of our framework is that, by providing a reusable JavaScript
component, it can easily extend any web-based application (learning system) so that it
can benefit from eye tracking. For example, based on a more precise estimation of
a student’s active learning time by gaze analysis, we can improve the estimation of
the knowledge level related to the particular concepts presented in the learning object.
In addition, since the areas of interest are defined as elements of the web page, they
are robust to changes of their position or size; we can easily aggregate the gaze data
from different web pages of the same web-based system (e.g., a navigation menu or
widgets that are present on all pages of the system).
      </p>
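The enrichment step described above can be sketched as follows. This is a minimal Python sketch of the idea, not the actual JavaScript component; the AOI XPaths and rectangle coordinates are illustrative assumptions.

```python
# Minimal sketch of the gaze-data enrichment step: map raw gaze
# coordinates to the area of interest (AOI) they fall into.
# AOI names and rectangles below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AOI:
    xpath: str   # unique XPath of the DOM element backing this AOI
    x: float     # top-left corner x (page coordinates)
    y: float     # top-left corner y
    w: float     # width
    h: float     # height

def enrich(gx, gy, aois):
    """Return the XPath of the AOI containing the gaze point, if any."""
    for aoi in aois:
        if aoi.x <= gx < aoi.x + aoi.w and aoi.y <= gy < aoi.y + aoi.h:
            return aoi.xpath
    return None  # gaze fell outside all defined AOIs

aois = [AOI("/html/body/div[1]/p[1]", 0, 0, 600, 120),
        AOI("/html/body/div[1]/p[2]", 0, 120, 600, 150)]
print(enrich(300, 60, aois))   # falls inside the first paragraph's AOI
print(enrich(300, 200, aois))  # falls inside the second paragraph's AOI
```

In the real component the rectangle lookup would be replaced by DOM hit-testing in the browser; the sketch only shows how a hit resolves to a unique XPath identifier.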
    </sec>
    <sec id="sec-3">
      <title>Learning Content Comprehension Detection</title>
      <p>
        Using our framework, we have extended our web-based learning system ALEF [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
with gaze data collection. We propose a method of reading pattern detection
based on [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Our modification of the original method in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] makes it possible to detect not only
reading vs. scanning patterns, but also reading comprehension problems.
      </p>
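The kind of reading-vs-scanning detector that the original method builds on can be illustrated with a much-simplified pool-of-evidence sketch; the saccade categories, scores, and thresholds below are illustrative assumptions, not the published parameters of [6].

```python
# Much-simplified sketch of a pool-of-evidence reading detector in the
# spirit of the method our approach is based on. Score values and
# thresholds are illustrative assumptions, not published parameters.

def classify_sequence(fixations, threshold=3):
    """fixations: list of (x, y) fixation positions in temporal order.
    Returns 'reading' or 'scanning' for the whole sequence."""
    score = 0
    for (x0, y0), (x1, y1) in zip(fixations, fixations[1:]):
        dx, dy = x1 - x0, y1 - y0
        if 0 < dx <= 120 and abs(dy) <= 20:
            score += 1   # short rightward saccade: evidence of reading
        elif dx < -200 and 0 < dy <= 40:
            score += 1   # long leftward + slightly down: return sweep to next line
        else:
            score -= 1   # large or erratic jump: evidence of scanning
    return "reading" if score >= threshold else "scanning"

line = [(50, 100), (140, 102), (230, 101), (320, 103), (60, 130), (150, 131)]
print(classify_sequence(line))  # steady left-to-right passes classify as reading
```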
      <p>
        Comprehension problems form a subset of the reading pattern identified by
the original method. We label a sequence of fixations (classified by the Tobii I-VT
fixation filter [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]) as problematic to comprehend if the number of revisits (recurrent
saccades) to the given area is above a threshold value. We also consider the average
fixation length, which is higher in the case of a comprehensibility problem.
      </p>
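The labeling rule above can be sketched in a few lines of Python; the threshold values and the input format are illustrative assumptions.

```python
# Sketch of the per-AOI comprehension-problem labeling described above.
# Threshold values below are illustrative assumptions.

def is_problematic(fixations, revisit_threshold=2, duration_threshold=250.0):
    """fixations: list of (aoi_id, duration_ms) in temporal order.

    An AOI is 'revisited' each time the gaze returns to it after having
    fixated elsewhere. An AOI is labeled problematic when its revisit
    count exceeds `revisit_threshold` AND its average fixation duration
    exceeds `duration_threshold` milliseconds.
    """
    visits, durations = {}, {}
    last_aoi = None
    for aoi, dur in fixations:
        if aoi != last_aoi:                       # new visit to this AOI
            visits[aoi] = visits.get(aoi, 0) + 1
        durations.setdefault(aoi, []).append(dur)
        last_aoi = aoi
    return {
        aoi: visits[aoi] - 1 > revisit_threshold
             and sum(durations[aoi]) / len(durations[aoi]) > duration_threshold
        for aoi in visits
    }

# The learner keeps returning to paragraph p1 with long fixations:
seq = [("p1", 300), ("p2", 180), ("p1", 320), ("p2", 190),
       ("p1", 280), ("p2", 170), ("p1", 350)]
print(is_problematic(seq))
```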
      <p>We are interested not only in identifying the individual patterns on a per-user
basis, which can be used to improve user modeling within the learning
system, but also in providing an aggregate value of area comprehensibility. This serves as
feedback to the learning content creator, suggesting which parts of the text need rewriting
or rephrasing, thus improving the quality of the learning materials.
The proposed visualization of text comprehensibility can be seen in Fig. 1.</p>
      <p>Each paragraph (area of interest) has its font colored with a shade of gray based on
the normalized comprehensibility value from the &lt;0; 1&gt; interval, where 0 means no
comprehensibility problems and 1 means very problematic to comprehend. Thus,
the most problematic areas of the text immediately stand out from the remaining parts.</p>
      <p>In order to determine the optimal parameters of the proposed algorithm, we started
collecting a reading dataset using a Tobii TX300 eye tracker. So far, we have
gathered data from four participants. Each of them read four texts under different
tasks, which included reading the text thoroughly, scanning and skimming the text to
find the required information, omitting fragments of the text, or simulating
comprehensibility problems by re-reading parts of the text. Each of the four texts was divided
into multiple areas of interest, where each paragraph of text corresponded to one area.
Additionally, during post-processing, the recordings were segmented and
annotated with one of the following labels: reading, skimming, scanning, and re-reading. As
a result, each fixation is assigned one of these labels, together with the information
whether it falls into any of the areas of interest. We continue to gather a larger dataset;
for this purpose, we plan to utilize our UX Lab, which is equipped with 20 Tobii X2-60
eye trackers that allow parallel collection of gaze data.</p>
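The post-processing step of propagating segment annotations to individual fixations can be sketched as follows; the segment boundaries, labels, and data shapes are illustrative assumptions.

```python
# Sketch of the annotation post-processing described above: each
# fixation receives the label of the annotated segment its timestamp
# falls into. Segment boundaries and labels are illustrative.

def label_fixations(fixations, segments):
    """fixations: list of (timestamp_ms, aoi_id or None);
    segments: list of (start_ms, end_ms, label), non-overlapping.
    Returns (timestamp, aoi_id, label) triples; label is None when the
    fixation falls outside every annotated segment."""
    labeled = []
    for ts, aoi in fixations:
        label = next((lab for s, e, lab in segments if s <= ts < e), None)
        labeled.append((ts, aoi, label))
    return labeled

segments = [(0, 5000, "reading"), (5000, 8000, "scanning"),
            (8000, 12000, "re-reading")]
fixes = [(1200, "p1"), (6100, "p3"), (9000, "p1"), (13000, None)]
print(label_fixations(fixes, segments))
```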
      <p>Acknowledgement. This work was partially supported by grants No. VG 1/0646/15,
No. KEGA 009STU-4/2014, and it was created with the support of the Research and
Development Operational Programme for the project “University Science Park of
STU Bratislava”, ITMS 26240220084, co-funded by the European Regional
Development Fund.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Olsen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The Tobii I-VT Fixation Filter: Algorithm Description</article-title>
          . (
          <year>2012</year>
          ). Available online at http://www.tobii.com/eye-tracking-research/global/library/white-papers/the-tobii-i-vt-fixation-filter/. Cited 6th April
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Biedert</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dengel</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elshamy</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buscher</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Towards Robust Gaze-Based Objective Quality Measures for Text</article-title>
          .
          <source>In: ETRA '12: Proc. of the Symposium on Eye Tracking Research and Applications</source>
          , pp.
          <fpage>201</fpage>
          -
          <lpage>204</lpage>
          . ACM, New York (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Biedert</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hees</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dengel</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buscher</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>A Robust Realtime Reading-Skimming Classifier</article-title>
          .
          <source>In: ETRA '12: Proc. of the Symposium on Eye Tracking Research and Applications</source>
          , pp.
          <fpage>123</fpage>
          -
          <lpage>130</lpage>
          . ACM, New York (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bieliková</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          et al.
          <article-title>ALEF: From Application to Platform for Adaptive Collaborative Learning</article-title>
          . In:
          <string-name>
            <surname>Manouselis</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          et al. (eds.):
          <source>Recommender Systems for Technology Enhanced Learning</source>
          , pp.
          <fpage>195</fpage>
          -
          <lpage>225</lpage>
          , Springer (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Busjahn</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          et al.:
          <article-title>Eye Tracking in Computing Education</article-title>
          .
          <source>In: Proc. of the 10th Annual Conf. on Int. Computing Education Research</source>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>10</lpage>
          . ACM, New York (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Campbell</surname>
            ,
            <given-names>C.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maglio</surname>
            ,
            <given-names>P.P.</given-names>
          </string-name>
          :
          <article-title>A Robust Algorithm for Reading Detection</article-title>
          .
          <source>In: PUI '01: Proc. of the Workshop on Perceptive User Interfaces</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          . ACM, New York (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Kardan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conati</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Comparing and Combining Eye Gaze and Interface Actions for Determining User Learning with an Interactive Simulation</article-title>
          .
          <source>In: UMAP '13: Proc. of the 21st Int. Conf. on User Modeling, Adaptation, and Personalization</source>
          , LNCS
          <volume>7899</volume>
          , pp.
          <fpage>215</fpage>
          -
          <lpage>227</lpage>
          . Springer (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Móro</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Daráž</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bieliková</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Visualization of Gaze Tracking Data for UX Testing on the Web</article-title>
          .
          <source>In: Hypertext '14 Extended Proc. of the 25th ACM Hypertext and Social Media Conference</source>
          , vol.
          <volume>1210</volume>
          .
          CEUR-WS
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Navrat</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tvarozek</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Online Programming Exercises for Summative Assessment in University Courses</article-title>
          .
          <source>In: CompSysTech '14: Proc. of the 15th Int. Conf. on Computer Systems and Technologies</source>
          , pp.
          <fpage>341</fpage>
          -
          <lpage>348</lpage>
          . ACM, New York (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>