<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>K. Sakamoto);</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Estimation of Dificulty When Reading VR-based Educational Comics Using Gaze, Facial Movement, Heart Rate, and Electroencephalography⋆</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hiroyuki Ishizuka</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kenya Sakamoto</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shizuka Shirai</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yutaro Hirao</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Monica Perusquia-Hernandez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hideaki Uchiyama</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kiyoshi Kiyokawa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Nara Institute of Science and Technology</institution>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Osaka University</institution>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0003</lpage>
      <abstract>
        <p>Using the metaverse in education can improve motivation, presence, engagement, immersion, interest, and performance. However, automated recognition of progress or learning dificulties is still a challenge. In this study, we search for relevant biometric features that can be used to estimate the user's self-reported dificulty level of educational comic books (manga) materials in Virtual Reality (VR). Educational manga has been an efective learning tool for a wide range of fields. It has unique spatial features that are unavailable in traditional approaches such as browsing history or log data analysis. Our approach uses facial expressions, electrocardiogram (ECG), and electroencephalogram (EEG) data, in addition to eye gaze, which has been used previously, to estimate dificulty when reading VR-based educational manga. We measured learners' data while reading manga materials in a VR space and estimated the learner's self-reported dificulty at two levels (challenging and accessible) using Support Vector Machine (SVM) and Random Forest (RF) algorithms. As a result, the RF approach using facial expressions and ECG features achieved an accuracy of 0.97 and an F1 score of 0.94 on the two-level dificulty estimation task.</p>
      </abstract>
      <kwd-group>
        <kwd>eol&gt;Dificulty Estimation</kwd>
        <kwd>Biosignals</kwd>
        <kwd>Virtual Reality</kwd>
        <kwd>Afective Computing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>3. Data set</title>
      <p>
        vtiiosnuahleelpxsplaosrsaetsisolneaarnnderrse’acdoimngpr[e1h0e]n.sEioyne ignazeeduincafotiromnaal- $/01 '()*+" %2!"#'$()%&amp;*'+#()#*+%2 '()* , '()*+" ,&amp;&amp;-'+(#)+*%-+#&amp;()#*+ '()* ,
smeatttiinognsto[1e1s]t.imSaaktea mthoetodificeutlatyl. luevseedloefyree-atrdainckgimnga ningfaor- -
materials in a VR environment [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. .
      </p>
      <p>
        Facial expression information has been used to eval- ! "&amp; " &amp; $ " &amp; $ # # # %
uate learner engagement using machine learning
techniques [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Nakamura et al. estimated the dificulty of an Figure 1: The experiment consists of three phases: baseline,
English word test using a combination of facial features reading, and annotation. Note that the manga sample in this
and system operation logs [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. figure is a dummy; it was not used in the actual experiment.
      </p>
      <p>
        Heart rate information obtained from ECG has been
used as a physiological method for internal boundary
conditions [
        <xref ref-type="bibr" rid="ref14 ref15 ref9">9, 14, 15</xref>
        ]. Heart rate increases with task perceived dificulty level of each panel in the annotation
demands and memory load [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. HRV, the variability of the phase.
      </p>
      <p>
        RR interval of the heartbeat, is used to assess stress [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        RMSSD of HRV, unafected by respiration [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], correlates 3.3. Procedure
with parasympathetic activity and stress. The 20th and
80th percentiles of the HRV data are used to estimate Participants were briefed on the experiment’s purpose,
the state of stress and relaxation. pNN20 and pNN50 the VR environment, and the data collection process.
reflect vagus nerve [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. These indices are often adequate They were informed that eye gaze, facial expression,
for estimating a learner’s self-reported dificulty when heart rate, and EEG information would be recorded
durreading manga materials in a VR environment. ing the learning process. A questionnaire was
adminis
      </p>
      <p>
        Previous studies on EEG-based cognitive load as- tered to determine the participants’ prior knowledge of
sessment have focused on  and  frequency bands. immunology and VR experience. Afterward, participants
 wave activity is linked to frontal lobe activity and put on the HMD and practiced operating the interface
working memory capacity, while  wave activity is re- in the VR environment using manga materials diferent
lated to cognitive attention, arousal, and memory capac- from those used in the experiment.
ity [
        <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
        ]. Studies measuring EEG in VR environments Next, the participants wore the ECG, followed by the
have reported increased cognitive load in such environ- EEG. Gel was used to adjust the impedance to 30 kΩ
ments [
        <xref ref-type="bibr" rid="ref19 ref20">20, 19</xref>
        ]. or lower. Afterward, the participants were asked to put
on the HMD again and confirm that eye gaze and face
tracking were working correctly.
3.4. Apparatus
      </p>
      <sec id="sec-2-1">
        <title>We conducted a data collection experiment in a VR space using educational manga about immunology to develop a model for dificulty estimation.</title>
      </sec>
      <sec id="sec-2-2">
        <title>We used an HTC VIVE Pro Eye and an HP VR Backpack</title>
        <p>G2 with an i7-8850H processor and NVIDIA GeForce 2080
GDDR6 8GB in the experiment (Fig. 2). The software
3.1. Participants consisted of Unity version 2019.2.3.f1. The HMD’s
eyeNine students (eight males, one female, 22-24 years old, tracker and SRanipal 1.3.2.0 SDK were used to collect
eyemean 23 years old, standard deviation 0.3) with no prior tracking information. The VIVE Facial Tracker collected
knowledge of immunology were recruited. Two partici- facial expression information as 38 diferent blend shapes,
pants with missing data were excluded from the analysis. including lips, chin, and cheeks, at 60 Hz with a response
This study was approved by our institution’s IRB. time of 10 ms. Shimmer31 was used to collect ECG. Two
Shimmer terminals were used for the measurement and
synchronization with Unity. An EEGo sports2, a portable
3.2. Task mobile EEG machine, was used for the EEG measurement.
The 32 wet electrodes of the EEG cap were configured in
the 10-20 system. EEG data were recorded via the EEGo
sports software.</p>
        <p>There were three phases: baseline, reading, and
annotation (see Fig 1). Initially, a dark screen was displayed
for three minutes to establish a baseline, during which
the participants were asked to relax. Then, they learned
the contents by reading the immunology manga
material in a VR space during the reading phase. After that,
they went through the same material and labeled the</p>
      </sec>
      <sec id="sec-2-3">
        <title>1https://shimmersensing.com/product/shimmer3-ecg-unit-2/ 2https://www.ant-neuro.com/products/eego_sports</title>
        <p>VIVE Facial Tracker
were taken with electrodes attached following the
standard five-lead placement method. The electrodes used
for the EEG measurements were Fz, F3, F4, Cz, C3, C4,
POz, Pz, P3, P4, M1, and M2, arranged according to the
international 10-20 method. After the caps were attached,
the gel was inserted, and the impedance was confirmed
to be less than 30 kΩ before measurement began.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Analysis</title>
      <p>Shimmer</p>
      <sec id="sec-3-1">
        <title>Features were extracted from sliding windows separately</title>
        <p>
          Figure 2: Image showing an experiment setup. Participants per modality [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. The window slid in 0.5-second steps
wore the VIVE Pro Eye, the VIVE Facial Tracker, the EEG Cap, for all modalities.
and Shimmer3. Facial Expression Features. BlendShapes associated
with face region movements were used. The resting state
was set to zero. We calculated the minimum, average,
3.5. Stimuli and maximum values for each five-second window by
summing the values of 38 BlendShapes. We also
calcuWe used a manga entitled “Understanding Immunol- lated the change in the total value and used the minimum,
ogy through Manga”, published by Ohmsha [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], which average, and maximum values as the feature values. The
granted permission for us to use the manga material. reason for using this method is that learners are likely to
Ohmsha’s Manga “de Wakaru series” is widely recog- make lip thrusting or mouthing movements when
readnized for its easy-to-understand explanations and manga ing a dificult passage, even if they do not have a visibly
introducing various fields such as science, electricity/- sad or frustrated expression.
electronics, machinery, and architecture. Chapters 1-3 ECG Features. NeuroKit2 [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] was used to read the
(76 pages) were used in this experiment. ECG data, clean the signal, and perform peak detection.
The window size was 20 seconds. Features were output
3.6. Measurements for each window. Based on previous work, the features
used were mean heart rate, HRV minimum, maximum,
3.6.1. Dificulty Annotations mean, standard deviation, RMSSD, 20th and 80th
perThe time participants spent per panel within one page centiles, pNN20, and pNN50.
was not constant due to the diverse dificulty levels of the EEG Features. The EEGo sports has an output of
panels. Thus, we asked them to annotate their perceived 0.5∼ 30Hz band-pass filter. The output data included
dificulty of each panel at three levels: “easy,” “dificult but Trigger information sent from Unity. Then, the noise
understandable,” and “dificult and not understandable.” was removed using Independent Component Analysis
They could record the dificulty levels using a ray-casting (ICA) in MNE-Python [25], and Morlet wavelets were
interaction to mark dificult vignettes while reading. To used to perform time-frequency analysis at 3∼ 13Hz. The
avoid diferences in the definitions of each label among average frequency power of the  wave (4∼ 8Hz) and
the participants, before the experiment, participants were the  wave (8∼ 12Hz) were calculated. The window size
asked to rate a panel as “dificult but understandable” if was then set to two seconds, and the power of the 
they understood the manga after reading it; and “dificult and  waves within the window was calculated. We
but not understandable” if they did not understand the hypothesize that  activity is higher and  activity is
manga contents. If they were unsure, the participants lower when the learner perceives dificulty.
were asked to rate it as “dificult but understandable.” Classification. To validate the proposed method,
This paper reports the results of estimating the user’s we constructed a model with two classes, “easy” and
self-reported dificulty based on the “easy” and “other” “dificult,” employing Support Vector Machine (SVM)
classifications. After each chapter, the NASA TLX Ques- and Random Forest (RF) algorithms. We fitted
subjecttionnaire was used to assess general workload. independent models using leave-one-subject-out
crossvalidation. Sixteen diferent combinations were
com3.6.2. Physiological Data pared in the subject-independent model, each consisting
of either the presence or absence of eye gaze, facial
exThirty-eight diferent BlendShapes corresponding to the pression, electrocardiogram, and EEG features. Accuracy,
lips, chin, cheeks, and other body parts were collected for Precision, Recall, and F1 scores were used to evaluate
facial expression information [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. ECG measurements performance.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Results</title>
      <p>Self-reported workload. Table 1 shows the results of
each questionnaire, including the dificulty reported in
each chapter. For simplicity, the Likert scale is
considered an interval scale. It was observed that the dificulty
level of Chapter 1 was the lowest. The dificulty level
of Chapters 2 and 3 increased because the chapters
contained more content and specialized vocabulary as they
progress. The level of fatigue was also rated higher in
the last chapters.</p>
      <p>Number of Labels for Learner’s self-reported
Evaluation. Table 2 shows the percentage of the
learner’s self-reported evaluation labels per panel and per
window of the collected manga teaching materials. While
only 8.12% of the panels were labeled as “dificult,” this
proportion increased to 30.0% when the time unit was
taken into account. This suggests that participants spent
more time on panels that were perceived as “dificult.”</p>
      <p>Accuracy of Learner’s Self-reported Dificulty
Estimation. When all features were used for each
participant, the results were as follows: accuracy of 77%, F1
score of 77% for SVM, accuracy of 94%, and F1 score of
94% for RF. All combinations of biometric information
were analyzed to identify the most relevant features for
dificulty estimation. Accuracy, Precision, Recall, and
F1 scores are shown in Table 3. Accuracy, F1 score, and
Recall were 96.5%, 94.0%, and 91.2%, respectively, when
facial expression and electrocardiograms were combined,
and precision was 97.8%, the highest accuracy when
gaze, facial expression, and electrocardiograms were
combined.</p>
    </sec>
    <sec id="sec-5">
      <title>6. Discussion and Conclusion</title>
      <sec id="sec-5-1">
        <title>We investigated eye gaze, facial expression, electrocar</title>
        <p>diogram, and electroencephalogram (EEG) features to
estimate the learner’s self-reported dificulty level in
reading educational manga. We collected data while reading
educational manga in a VR environment and evaluated
the estimation accuracy using time-based SVM and RF.
Accuracy and F1 scores reached 0.965 and 0.940,
respectively, using the RF approach with facial expressions and
electrocardiogram features. The estimation accuracy was
improved when the proposed method was used for facial
expressions and electrocardiograms and decreased when
EEG features were used.</p>
        <p>Compared to the results obtained by estimating gaze
features alone, the combination of gaze and facial
expression and gaze and electrocardiogram improved the
accuracy of dificulty estimation and F1 score. However,
both estimation accuracy and F1 score were lower for the
combination of gaze and EEG. Estimation accuracy and
F1 score were higher when all features were used except
EEG features. This is probably because the EEG
signalto-noise ratio was low when simultaneously wearing
an HMD. Future work should improve
head-movementrelated noise reduction on the EEG. Combining facial
expression and ECG features produced the highest
estimation accuracy and F1 score. This indicates that the ECG
Numbers in parentheses denote standard deviation (SD).
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓
✓</p>
        <p>Acc.
features are important for estimating dificulty. These
two features can be collected more quickly than others
and are independent of learning materials. Therefore,
they can be easily applied to adaptive learning systems
in VR environments, not limited to educational manga
materials.</p>
      </sec>
      <sec id="sec-5-2">
        <title>NeuroKit2: A python toolbox for neurophysiologi</title>
        <p>cal signal processing, Behavior Research Methods
53 (2021) 1689–1696.
[25] A. Gramfort, M. Luessi, E. Larson, D. A. Engemann,
D. Strohmeier, C. Brodbeck, R. Goj, M. Jas, T. Brooks,
L. Parkkonen, M. S. Hämäläinen, MEG and EEG
data analysis with MNE-Python, Frontiers in
Neuroscience 7 (2013) 1–13.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. L.</given-names>
            <surname>Wang</surname>
          </string-name>
          , Metaverse in education: Contributors, cooperations, and research themes,
          <source>IEEE Transactions on Learning Technologies</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Bell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <article-title>Constructing an edu-metaverse ecosystem: A new and innovative framework</article-title>
          ,
          <source>IEEE Transactions on Learning Technologies</source>
          <volume>15</volume>
          (
          <year>2022</year>
          )
          <fpage>685</fpage>
          -
          <lpage>696</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Heradio</surname>
          </string-name>
          , L.
          <string-name>
            <surname>De La Torre</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Galan</surname>
            ,
            <given-names>F. J.</given-names>
          </string-name>
          <string-name>
            <surname>Cabrerizo</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Herrera-Viedma</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Dormido</surname>
          </string-name>
          ,
          <article-title>Virtual and remote labs in education: A bibliometric analysis</article-title>
          ,
          <source>Computers &amp; Education</source>
          <volume>98</volume>
          (
          <year>2016</year>
          )
          <fpage>14</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Heradio</surname>
          </string-name>
          , L. de la Torre, S. Dormido,
          <article-title>Virtual and remote labs in control education: A survey</article-title>
          ,
          <source>Annual Reviews in Control</source>
          <volume>42</volume>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Mamolo</surname>
          </string-name>
          ,
          <article-title>Development of digital interactive math comics (dimac) for senior high school students in general mathematics, Cogent Education 6 (</article-title>
          <year>2019</year>
          )
          <fpage>1689639</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>I.</given-names>
            <surname>Fuse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Okabe</surname>
          </string-name>
          ,
          <article-title>Computer ethics education using video and manga teaching materials: learning efects and the order of using teaching materials</article-title>
          ,
          <source>Transactions of Japanese Society for Information and Systems in Education 27</source>
          (
          <year>2010</year>
          )
          <fpage>327</fpage>
          -
          <lpage>336</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Amandi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schiafino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Campo</surname>
          </string-name>
          ,
          <article-title>Evaluating bayesian networks' precision for detecting students' learning styles</article-title>
          ,
          <source>Computers &amp; Education</source>
          <volume>49</volume>
          (
          <year>2007</year>
          )
          <fpage>794</fpage>
          -
          <lpage>808</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K.</given-names>
            <surname>Sakamoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shirai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Takemura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Orlosky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Nagataki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ueda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Uranishi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Takemura</surname>
          </string-name>
          ,
          <article-title>Subjective dificulty estimation of educational comics using gaze features</article-title>
          ,
          <source>IEICE Transactions on Information and Systems</source>
          E106.D (
          <year>2023</year>
          )
          <fpage>1038</fpage>
          -
          <lpage>1048</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Charles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nixon</surname>
          </string-name>
          ,
          <article-title>Measuring mental workload using physiological measures: A systematic review</article-title>
          ,
          <source>Applied ergonomics 74</source>
          (
          <year>2019</year>
          )
          <fpage>221</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. P.</given-names>
            <surname>Marshall</surname>
          </string-name>
          ,
          <article-title>The index of cognitive activity: Measuring cognitive workload</article-title>
          ,
          <source>in: Proceedings of the IEEE 7th conference on Human Factors and Power Plants</source>
          , IEEE,
          <year>2002</year>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Lima Sanches</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Augereau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kise</surname>
          </string-name>
          ,
          <article-title>Estimation of reading subjective understanding based on eye gaze analysis</article-title>
          ,
          <source>PLOS ONE 13</source>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Whitehill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Serpell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. C.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Foster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Movellan</surname>
          </string-name>
          ,
          <article-title>The faces of engagement: Automatic recognition of student engagement from facial expressions</article-title>
          ,
          <source>IEEE Transactions on Afective Computing</source>
          <volume>5</volume>
          (
          <year>2014</year>
          )
          <fpage>86</fpage>
          -
          <lpage>98</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Nakamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kakusho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Murakami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Minoh</surname>
          </string-name>
          ,
          <article-title>Estimating learners' subjective impressions of the dificulty of course materials by observing their faces in e-learning,</article-title>
          <source>The IEICE transactions on information and systems (D) 93</source>
          (
          <year>2010</year>
          )
          <fpage>568</fpage>
          -
          <lpage>578</lpage>
          (in Japanese).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>H.-G. Kim</surname>
            ,
            <given-names>E.-J.</given-names>
          </string-name>
          <string-name>
            <surname>Cheon</surname>
            , D.-S. Bai,
            <given-names>Y. H.</given-names>
          </string-name>
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>B.- H.</given-names>
          </string-name>
          <string-name>
            <surname>Koo</surname>
          </string-name>
          ,
          <article-title>Stress and heart rate variability: a metaanalysis and review of the literature</article-title>
          ,
          <source>Psychiatry investigation 15</source>
          (
          <year>2018</year>
          )
          <fpage>235</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>N.</given-names>
            <surname>Meshkati</surname>
          </string-name>
          ,
          <article-title>Heart rate variability and mental workload assessment</article-title>
          , in: Advances in psychology, volume
          <volume>52</volume>
          ,
          <string-name>
            <surname>Elsevier</surname>
          </string-name>
          ,
          <year>1988</year>
          , pp.
          <fpage>101</fpage>
          -
          <lpage>115</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Delliaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Delaforge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-C.</given-names>
            <surname>Deharo</surname>
          </string-name>
          , G. Chaumet,
          <article-title>Mental workload alters heart rate variability, lowering non-linear dynamics</article-title>
          ,
          <source>Frontiers in physiology 10</source>
          (
          <year>2019</year>
          )
          <fpage>565</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Laborde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mosley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Thayer</surname>
          </string-name>
          ,
          <article-title>Heart rate variability and cardiac vagal tone in psychophysiological research-recommendations for experiment planning, data analysis, and data reporting</article-title>
          ,
          <source>Frontiers in psychology 8</source>
          (
          <year>2017</year>
          )
          <fpage>213</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Castro-Meneses</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-L.</given-names>
            <surname>Kruger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Doherty</surname>
          </string-name>
          ,
          <article-title>Validating theta power as an objective measure of cognitive load in educational video</article-title>
          ,
          <source>Educational Technology Research and Development</source>
          <volume>68</volume>
          (
          <year>2020</year>
          )
          <fpage>181</fpage>
          -
          <lpage>202</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Baceviciute</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mottelson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Terkildsen</surname>
          </string-name>
          , G. Makransky,
          <article-title>Investigating representation of text and audio in educational vr using learning outcomes and eeg</article-title>
          ,
          <source>in: Proceedings of the 2020 CHI conference on human factors in computing systems</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>G.</given-names>
            <surname>Makransky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Terkildsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Mayer</surname>
          </string-name>
          ,
          <article-title>Adding immersive virtual reality to a science lab simulation causes more presence but less learning</article-title>
          ,
          <source>Learning and instruction 60</source>
          (
          <year>2019</year>
          )
          <fpage>225</fpage>
          -
          <lpage>236</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kawamoto</surname>
          </string-name>
          , The Manga Guide to Immunology, Ohmsha,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>P.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. C.</given-names>
            <surname>Tien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Desbrun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pighin</surname>
          </string-name>
          ,
          <article-title>Learning controls for blend shape based realistic facial animation</article-title>
          ,
          <source>in: ACM Siggraph 2006 Courses</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>es</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>F.</given-names>
            <surname>Dollack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kiyokawa</surname>
          </string-name>
          , H. Liu, M. PerusquiaHernandez, C. Raman,
          <string-name>
            <given-names>H.</given-names>
            <surname>Uchiyama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <article-title>Ensemble Learning to Assess Dynamics of Afective Experience Ratings and Physiological Change</article-title>
          ,
          <source>in: 2023 11th International Conference on Afective Computing and Intelligent Interaction Workshops and Demos (ACIIW)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>D.</given-names>
            <surname>Makowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. J.</given-names>
            <surname>Lau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Brammer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lespinasse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schölzel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H. A.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>