<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Preliminary steps towards detection of proactive and reactive control states during learning with fNIRS brain signals</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alicia Howell-Munson</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Deniz Sonmez Unal</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Erin W</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Arrington</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Erin Solovey</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lehigh University</institution>
          ,
          <addr-line>Bethlehem PA 18015</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Pittsburgh</institution>
          ,
          <addr-line>Pittsburgh PA 15260</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Worcester Polytechnic Institute</institution>
          ,
          <addr-line>Worcester MA 01609</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper describes a two-pronged approach to creating a multimodal intelligent tutoring system (ITS) that leverages neural data to inform the system about the student's cognitive state. The ultimate goal is to use fNIRS brain imaging to distinguish between proactive and reactive control states during the use of a real-world learning environment. These states have direct relevance to learning and have been difficult to identify through typical data streams in ITSs. As a first step towards identifying these states in the brain and understanding their effects on learning, we describe two preliminary studies: (1) we distinguished proactive and reactive control using fNIRS brain imaging in a controlled continuous performance task and (2) we prompted students to engage in either proactive or reactive control while using an ITS to understand how the two modes affect learning progress. We propose integrating the fNIRS data stream with the ITS to create a multimodal system for detecting the user's cognitive state and adapting the environment to promote better learning strategies.</p>
      </abstract>
      <kwd-group>
        <kwd>Intelligent tutoring systems</kwd>
        <kwd>functional near-infrared spectroscopy</kwd>
        <kwd>brain-computer interfaces</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Intelligent tutoring systems (ITS) allow students to get personalized assistance
by collecting valuable information from their actions while learning. This
information includes how the students navigate through the system, the correctness
of their responses and the materials they struggle with [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. This data is retrieved
from the logs that are generated when students interact with the system;
however, these systems are not able to receive any input when students pause and do
not produce log events. The states that students experience at these times may
be indicative of beneficial behaviors such as self-monitoring and self-reflection or
harmful behaviors such as mind wandering or going off-task [
        <xref ref-type="bibr" rid="ref19 ref6">19, 6</xref>
        ]. We propose
to introduce neural data as an additional input source for an ITS to fill in the
data gaps during pauses.
      </p>
      <p>
        Long pauses in ITS log data contain a rich and complex set of possible
cognitive activities that may or may not support the learning objectives of the
ITS task. Defining these cognitive states is important for building a clear model
of the students' behavior. A key underlying process is the nature of cognitive
control during and immediately after these pauses. Cognitive control describes
the set of processes that coordinate thoughts and guide actions in support of
goal-directed behavior. Prior work investigated this kind of behavior in various
ways within ITS research from promoting self-regulated learning strategies [
        <xref ref-type="bibr" rid="ref2 ref5">2,
5</xref>
        ] to detecting and intervening when students zone out [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Even though these
behaviors have been heavily studied, their underlying mechanism of cognitive
control has been less explored in this line of research.
      </p>
      <p>
        Our research is built on the dual mechanism of control (DMC) framework
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], which includes proactive and reactive control. Proactive control is the
maintenance of task-relevant information for sustained periods and enables early
selection for the goal. An example of proactive control is setting the goal to run
errands after work and scheduling the rest of the day to accomplish this goal.
Reactive control is the late correction for a goal when stimuli in the environment
trigger a just-in-time response. In reactive control, one would set the goal
to run errands after work and then, by the end of the day, realize they still need
to run errands and are perhaps out of time. The balance between proactive
and reactive control can shift based on multiple factors, and we hypothesize that
it will influence the efficiency and accuracy of goal-directed behavior in learning
environments. We explore this hypothesis in Section 3.
      </p>
      <p>In this research, we bring a two-pronged approach to studying proactive and
reactive control modes in ITS use. First, we examined behavioral and neural
data from a simple continuous performance task that allows for identification of
periods of proactive and reactive control (Section 2). This line of work is aimed at
defining ground truth states of proactive and reactive control based on behavioral
data and capturing neural signatures associated with each control mode. Second,
we explored the manipulation of proactive and reactive control states during ITS
use by instructing participants in a strategy, and we measured performance
on the task in different control states (Section 3). While the results presented
in this paper are preliminary, they provide a foundation for future multimodal
intelligent tutoring systems using brain data. Our long-term goal is to measure
cognitive control states in real-world ITS use through behavioral and neural data.</p>
    </sec>
    <sec id="sec-2">
      <title>1.1 Multimodal approach to an intelligent tutoring system</title>
      <p>
        Several techniques have been adopted to understand student behavior and needs
in addition to analyzing log data from ITSs. Many use additional modalities
such as audio, video, and/or sensor data. Eye tracking has been one of the most
popular technologies within this line of research. It has been used to detect
lapses in students' attention and reorient attention to the learning activity [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], and to
predict both positive behaviors such as self-explanation [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] or negative behaviors
such as mind-wandering [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
        ]. Other physiological sensors have been used to
identify student affective states such as boredom, frustration, excitement, and
concentration [
        <xref ref-type="bibr" rid="ref28 ref4">28, 4</xref>
        ]. Researchers have also investigated collaborative processes
during learning using audio, video, and physiological measures [
        <xref ref-type="bibr" rid="ref12 ref21">21, 12</xref>
        ].
      </p>
      <p>
        Within this work, we are interested in bringing in brain data to disambiguate
learners' cognitive states in the pauses between logged events during ITS use.
Previous research has shown that functional magnetic resonance imaging (fMRI)
can be used to detect deep processing while problem solving [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Other studies
have shown that electroencephalography (EEG) can be used to predict student
performance [
        <xref ref-type="bibr" rid="ref15 ref9">9, 15</xref>
        ], and also to understand student emotions and engagement [
        <xref ref-type="bibr" rid="ref16 ref25">16, 25</xref>
        ].
Prior work also shows that brain sensing with fMRI can detect
more complex processing, such as cognitive control states [
        <xref ref-type="bibr" rid="ref7 ref8">8, 7</xref>
        ]. Functional
near-infrared spectroscopy (fNIRS) is a brain sensing technique that works similarly
to fMRI and has been proven to work well in typical human-computer
interaction environments [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. We argue that data collected through fNIRS combined
with the contextual information that log data provide will allow us to have a
deeper understanding of students' cognitive states.
      </p>
      <p>
        <bold>1.2 Functional near-infrared spectroscopy (fNIRS).</bold>
fNIRS is a brain-imaging tool that is safe, portable, easy to use, and quick to set
up. These characteristics have led to an increased adoption in human-computer
interaction research. It detects hemodynamic changes associated with neural
activity in the brain while performing tasks [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. Because fNIRS enables brain
activity to be measured continuously during interactive computing tasks, it has
promise for understanding user experience with systems such as an ITS. The
fNIRS sensors are arranged on a mesh cap worn on the head and use light to
detect the hemodynamic response (changes in blood oxygenation over time resulting
from neural activity) from 1-3 cm deep in the cortex [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. The fNIRS signal reaches
its peak between 4 and 7 seconds after a stimulus. fNIRS has been shown to be
robust in typical human-computer interaction scenarios, including during typing,
mouse clicking [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], and verbalization [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. Real-time fNIRS brain data has been
used to modulate interruptions [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and enable attention-aware systems [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>2 Preliminary study 1: cognitive control task</title>
        <p>The purpose of this study was to explore the use of fNIRS to identify neural
patterns of activation associated with proactive and reactive control in a simple
continuous performance task. fMRI work shows that proactive control is
associated with larger responses to cues that establish goals and reactive control is associated
with larger responses to target stimuli. In the controlled task we used (Section
2.1), both modes of control may drive participant behavior, and shifts
between proactive and reactive control states can be identified by the types of
errors that occur on particular trials. Our approach was to use these behavioral
markers as a ground truth for proactive and reactive control. To assess whether
fNIRS could identify patterns associated with each control state, we examined
neural activity during the time leading up to a relevant behavioral marker.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2.1 Experimental task</title>
      <p>
        The AX-continuous performance task (AX-CPT) presents a series of letters,
as shown in Fig. 1. Participants make a key press response for every letter.
The letters appear in cue-probe pairs; the first letter serves as a cue for the
second letter. The AX pair represents the target trial, where a unique response
is made to the probe X when it is preceded by the cue A. All other cue-probe pairs
represent non-target trials and are responded to with a different key press. Thus,
for an appropriate response to the probe, participants must maintain the cue
context (A or not A) throughout the inter-stimulus interval (ISI) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Responses made on non-target trials are particularly important for identifying
reactive and proactive control. Both AY (an A cue is followed by a probe letter
that is anything other than X) and BX (the cue is not an A, but the probe is
an X) trials represent non-target trials where probe response errors represent
false alarms [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. AY errors suggest proactive control because the participant was
holding the A cue in mind and anticipating a target response, resulting in a
false alarm when the non-target probe appeared. BX errors suggest reactive
control because the participant reacts to the X probe with the most common
response to that probe even though it is not correct following the B cue [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. We will use
these behavioral markers to examine fNIRS signals in preceding time windows
to identify brain activity that is indicative of proactive and reactive control.
Fig. 1: Example trial timeline for the AX-CPT paradigm. A single cross appeared for
500 ms before the cue, which appeared for 1000 ms. This was followed by three
crosses during the inter-stimulus interval of 2000 or 6000 ms before the probe,
which appeared for 500 ms. Participants had 1000 ms to respond to the cue and
probe. An AY non-target, a BX non-target, and an AX target trial are shown.
      </p>
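      <p>The trial-type and error-marker logic above can be sketched in a few lines of code. This is a hypothetical illustration of the behavioral markers, not the study's software; the function names are ours.</p>

```python
def trial_type(cue, probe):
    """Label a cue-probe pair: AX (target), or AY/BX/BY (non-targets)."""
    c = "A" if cue == "A" else "B"    # any non-A cue is treated as B
    p = "X" if probe == "X" else "Y"  # any non-X probe is treated as Y
    return c + p

def control_mode_on_error(cue, probe, responded_target):
    """Infer the control mode suggested by a false alarm on a non-target trial.

    A false alarm on AY suggests proactive control (the A cue primed a
    target response); a false alarm on BX suggests reactive control (the
    X probe triggered its most common response).
    """
    t = trial_type(cue, probe)
    if not responded_target:
        return None                   # correct non-target response: no marker
    if t == "AY":
        return "proactive"
    if t == "BX":
        return "reactive"
    return None                       # AX is correct; BY errors are ambiguous

print(trial_type("A", "K"))                   # prints "AY"
print(control_mode_on_error("A", "K", True))  # prints "proactive"
print(control_mode_on_error("B", "X", True))  # prints "reactive"
```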
    </sec>
    <sec id="sec-4">
      <title>2.2 Procedure</title>
      <p>
        We recruited 23 participants (11 male) between 18 and 23 years old (M = 19.5,
SD = 1.4). Two participants' data were removed due to large amounts of noise
across more than half of the fNIRS sensors. Participants were compensated with
either coursework credit or a payment of $15.00. The experiment was performed
in a controlled laboratory environment with minimal distractions. The fNIRS
signals were recorded using a NIRx NIRSport2 fNIRS device with a sampling
rate of 8.7 Hz [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. The device was configured with a 21-channel layout over the
prefrontal cortex using eight sources and eight detectors (Fig. 2). Each source
and detector were approximately 3 cm apart, which allows measurement 2-3
cm deep into an adult brain cortex [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
      </p>
      <p>After the participant reviewed and signed the informed consent form, the fNIRS cap was placed
on their head. Participants gave verbal confirmation that the cap
was comfortable before proceeding. Participants began the full task once they
got each trial type correct two times in a row in a practice AX-CPT block.
Participants verbally told the researcher the AX-CPT rules to ensure that each
participant understood the task. Each participant saw a total of 320
cue-probe paired trials in the following amounts: 96 AX trials, 64 each of AY, BX, and
BY trials, and 32 "catch" trials. Catch trials did not require a response to the
probe and encouraged proactive control. Between the cue and probe, half of the
trials had an ISI of 2000 ms and half had 6000 ms. ISI timing was randomized and
the order was different for each participant.</p>
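      <p>The 320-trial design above (96 AX; 64 each of AY, BX, and BY; 32 catch; a half/half randomized split of 2000 ms and 6000 ms ISIs) can be reproduced as a short sketch. This mirrors the stated design only; it is not the study's stimulus code, and the seeding scheme is our assumption.</p>

```python
import random

def build_trial_list(seed=0):
    """Return a shuffled list of (trial_type, isi_ms) pairs for one participant."""
    rng = random.Random(seed)
    trials = (["AX"] * 96) + (["AY"] * 64) + (["BX"] * 64) + (["BY"] * 64) + (["catch"] * 32)
    isis = ([2000] * 160) + ([6000] * 160)  # half of the 320 trials per ISI
    rng.shuffle(trials)
    rng.shuffle(isis)                       # ISI order differs per seed/participant
    return list(zip(trials, isis))

trials = build_trial_list(seed=42)
print(len(trials))  # prints 320
```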
    </sec>
    <sec id="sec-5">
      <title>2.3 Brain data results</title>
      <p>The brain signals were divided into windows beginning one second before cue
onset and lasting 18 seconds to allow the 4-7 second hemodynamic response
from both cue and probe to peak. All analyses were conducted on the change in
oxygenated hemoglobin levels from the average of the first 8 frames of the window.
To identify sensor locations in the brain that are relevant to the AX-CPT task, we
took the average signal across all trials for each sensor and selected those sensors
that showed a clear peak in the time period following both cue and probe. To
further consider the differences between proactive and reactive control, we used
the key behavioral markers of error responses on AY and BX trials, respectively.
Our data set contained 48 AY errors and 102 BX errors across all participants.
We selected trials within a five-trial window leading up to each error response
that also had a 6 second ISI, where the temporal separation between cue and
probe allowed ample time for the hemodynamic response function to peak. We
averaged the signal for trials leading up to AY and BX errors and analyzed the
difference between the trial types.</p>
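      <p>The epoching and baseline step described above can be sketched as follows, assuming the 8.7 Hz sampling rate and the one-second pre-cue, 18-second windows; all variable names are ours, and this is an illustration rather than the study's analysis pipeline.</p>

```python
import numpy as np

FS = 8.7  # NIRSport2 sampling rate in Hz

def epoch(signal, cue_onset_s, pre_s=1.0, length_s=18.0, baseline_frames=8):
    """Cut one baseline-corrected window from a 1-D oxy-Hb time series.

    The window starts pre_s seconds before cue onset and is expressed as
    change from the mean of its first baseline_frames frames.
    """
    start = int(round((cue_onset_s - pre_s) * FS))
    n = int(round(length_s * FS))
    window = np.asarray(signal[start:start + n], dtype=float)
    return window - window[:baseline_frames].mean()

# toy example: a flat signal epochs to all zeros after baseline correction
sig = np.ones(600)
w = epoch(sig, cue_onset_s=10.0)
print(w.shape[0], float(np.abs(w).max()))  # prints 157 0.0
```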
      <p>
        Trials leading up to an AY error should show a peak following the cue but
not the probe, whereas trials leading up to a BX error should show a peak to
the probe but not the cue. Our initial results show that one sensor located
in the right dorsolateral prefrontal cortex (rDLPFC) had this pattern,
distinguishing between the two trial types (Figure 2). These patterns align with
prior block-wise fMRI data patterns [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] associated with conditions conducive to
proactive and reactive control. Here, we demonstrate that these states can also
be distinguished in neural data from a string of trials within the same block,
where fluctuations between proactive and reactive control states occur naturally.
By successfully identifying this dissociation between the two states with fNIRS
data, future work can use these states to inform decisions made by an ITS.
      </p>
      <p>By reducing and expanding the number of trials leading up to an AY or
BX error response, we can define how long before an error response
the participant is in a particular cognitive control mode. Additionally, after
ascertaining the period in which a cognitive control state is most evident, we
will confirm that the dissociation between the two cognitive states seen in the
rDLPFC is significant through mixed-effects linear modeling.</p>
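      <p>The expected cue-peak versus probe-peak contrast can be sketched on trial-averaged epochs. The window bounds below are our assumptions, chosen to bracket 4-7 second hemodynamic peaks after a cue at t = 1 s and, with a 6 s ISI, a probe near t = 7 s in the window; the data are synthetic.</p>

```python
import numpy as np

FS = 8.7  # sampling rate in Hz

def peak_profile(epochs, cue_win_s=(4.0, 8.0), probe_win_s=(11.0, 15.0)):
    """Mean cue-window and probe-window amplitude of trial-averaged epochs."""
    avg = np.mean(epochs, axis=0)
    def mean_in(lo_s, hi_s):
        return float(avg[int(lo_s * FS):int(hi_s * FS)].mean())
    return mean_in(*cue_win_s), mean_in(*probe_win_s)

# toy data: pre-AY epochs peak after the cue, pre-BX epochs after the probe
t = np.arange(157) / FS
pre_ay = np.tile(np.exp(-0.5 * ((t - 6.0) / 1.5) ** 2), (10, 1))
pre_bx = np.tile(np.exp(-0.5 * ((t - 13.0) / 1.5) ** 2), (10, 1))
cue_ay, probe_ay = peak_profile(pre_ay)
cue_bx, probe_bx = peak_profile(pre_bx)
print(cue_ay > probe_ay, probe_bx > cue_bx)  # prints: True True
```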
      <sec id="sec-5-1">
        <title>3 Preliminary study 2: intelligent tutoring system</title>
        <p>
          In Section 2, we showed that we could identify proactive and reactive control
using fNIRS in an abstract task. In this section, we aim to understand whether
proactive and reactive control can be induced during problem solving and
whether using one of these modes of cognitive control affects learning. To
achieve this goal, we designed an experiment using ASSISTments [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] without
integrating the neural data at this time. ASSISTments is an online tutoring tool
that allows students to get help and feedback while also providing assessment
data to teachers. ASSISTments provides a flexible environment to create problem
sets with predefined hints and feedback for students.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>3.1 Task Design</title>
      <p>
        We created a problem set that consists of nine probability problems. The
problems covered three topics (basic probability, the addition rule for probability of
non-mutually exclusive events, and the multiplication rule for probability of
dependent events). All of the problems were divided into three to four substeps in order
to create a goal maintenance scenario similar to [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. When the participant was
presented with a problem for the first time, they saw the full problem; however,
they were not expected to solve it yet. The participant was only asked to rate
their confidence level in solving the particular problem. After rating their
confidence, the participant saw the first step to solve the problem. After completing
each step, participants were given the next step until they reached the last one,
which would lead them to the solution of the problem. Within this scenario, one
can think of the initial step as the time when the participant extracts the goal of
the problem. How participants solved the substeps allows us to observe whether they
could maintain that goal in the presence of other cognitively demanding events.
      </p>
      <p>
        Participants were randomly assigned to one of the two cognitive control
modes. With these conditions, we manipulated how participants approached the
substeps of the problems. In the proactive condition, participants were prompted
to think about how the substep they were solving was related to the goal of the
full problem that they saw. In the reactive condition, the prompt in the substep
instructed the participant to focus on the current substep. With these prompts,
we hypothesized that proactive or reactive control [
        <xref ref-type="bibr" rid="ref13">13</xref>
          ] can be induced within
a learning task, as participants in the proactive condition would be practicing
active goal maintenance while participants in the reactive condition would
only focus on the current step. This exact experiment design was used in an
earlier think-aloud study [
        <xref ref-type="bibr" rid="ref26">26</xref>
          ]. We replicated this experiment design, excluding the
think-aloud protocol, to get more accurate behavioral data.
      </p>
    </sec>
    <sec id="sec-7">
      <title>3.2 Procedure</title>
      <p>We recruited 29 participants (5 male) between 18 and 22 years old (M = 19.8, SD
= 1.13). The participants were recruited through emails sent to student
mailing lists and announcements on online student bulletin boards at a
university in the Northeastern US. The inclusion criterion was not having completed
more than two university-level math courses. After providing informed consent,
participants were introduced to ASSISTments and given time to practice
answering problems. Then, they took a pre-test of six probability problems
on ASSISTments. After the pre-test, they solved another practice problem to
get used to step-by-step problem solving and to using proactive or reactive control
based on the condition they were assigned to. After the practice, they engaged
in the problem solving activity while using proactive or reactive control in the
way they were shown during the practice. Participants took a post-test that was
isomorphic to the pre-test after solving problems on ASSISTments. Participants
answered a demographic questionnaire at the end of the study. This study was
run completely online due to the COVID-19 outbreak. We communicated with
the participants using Zoom. We asked participants to turn their cameras off to
protect their identities. We asked them to share their screens so that we could watch
and record their actions during the study. One participant was excluded as they
scored 100% on the pre-test.</p>
    </sec>
    <sec id="sec-8">
      <title>3.3 Results</title>
      <p>
        Prior work [
        <xref ref-type="bibr" rid="ref26">26</xref>
          ] has shown success in inducing proactive and reactive control
during problem solving when using think-alouds. They found participants in the
proactive condition made significantly more statements that include the goal of
the problems. Without the think-aloud protocol, one indicator that the
participants behaved as expected is the average time spent on the problem steps. We
hypothesized that the participants in the proactive condition should spend more
time on the problem steps because they need to think about how the current
problem step they are solving helps them reach the goal of the main problem.
Figure 3 shows the average time spent on the problem steps by condition for
both studies. We conducted a two-sample t-test to determine if participants in
the proactive condition spent more time on the problem steps because of their
reflection. Results supported our hypothesis. We found that the participants
in the proactive condition spent significantly more time on the problem steps
(t(24.05) = 2.36, p &lt; .05). We confirmed that this manipulation of cognitive
control was still effective without the think-aloud protocol. In order to see if
using one mode of cognitive control had any effect on learning, we conducted a
repeated-measures ANOVA with condition as the between-group variable and
test as the within-group variable. We found the main effect of test was
significant (F(1, 26) = 51.33, p &lt; .001), meaning that participants improved from
pre- to post-test. However, results showed no significant interaction between
condition and test (F(1, 26) = 1.45, p &gt; .05), indicating no significant difference
in learning gain (post - pre) between the proactive (M = 0.24, SD = 0.23) and
reactive (M = 0.33, SD = 0.19) modes.
      </p>
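      <p>The step-time comparison above uses a Welch two-sample t-test, whose fractional degrees of freedom match the t(24.05) reporting style. A minimal sketch with scipy follows; the numbers are synthetic stand-ins, not the study's data.</p>

```python
import numpy as np
from scipy import stats

# synthetic per-participant mean step times (seconds); means and sizes are ours
rng = np.random.default_rng(0)
proactive = rng.normal(loc=55.0, scale=12.0, size=14)
reactive = rng.normal(loc=45.0, scale=8.0, size=14)

# equal_var=False selects Welch's t-test, which does not assume equal
# group variances and yields fractional degrees of freedom
t, p = stats.ttest_ind(proactive, reactive, equal_var=False)
print(round(float(t), 2), float(p))
```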
      <sec id="sec-8-1">
        <title>4 Discussion</title>
        <p>
          We have presented preliminary results on a two-pronged approach that
attempts to provide ground truth data on different modes of cognitive control
based on the DMC framework [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and to induce these modes in a realistic learning
environment for potential benefits during learning. We first showed that we can
identify proactive and reactive modes of cognitive control using fNIRS with a
similar methodology to [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], who identified these states with fMRI. We identified
the rDLPFC as a region of interest for detecting these cognitive states. Further
analysis is needed to understand whether there are additional regions of interest that
can detect only one of the cognitive control modes. In theory, if there is a region
that is highly active for only proactive control, then it can be used
as confirmation when activation for proactive control is seen in the rDLPFC.
        </p>
        <p>
          Next, we showed that these two modes can be induced in students during
more realistic ITS use, replicating the experiment design described in [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ]. We
showed that the cognitive control manipulation was still successful without
having participants think out loud. We detected no significant differences in the
pre- and post-tests between students who engaged in proactive and reactive control.
This could be due to the order and difficulty levels of the presented problems.
Participants may have improved because they had a better understanding of
the problem types after seeing each three times, and that may be why we did
not see any difference between proactive and reactive control. Using proactive
or reactive control may still be beneficial at different levels of challenge. We will
modify the experimental design to test this once more.
        </p>
        <p>We propose integrating the fNIRS data stream with the ITS to create a
multimodal system for detecting the user's cognitive state and adapting the
environment to promote better learning strategies. Our approach can impact a broad
range of ITSs, which are collectively used by thousands of students. Interventions
within tutoring systems can better meet an individual student's needs through
modeling cognitive states and their underlying mechanisms. To do so, we will
integrate fNIRS into the ITS task that promotes proactive and reactive control
in participants. This would confirm that the neural regions of interest identified
in the AX-CPT transfer to using an ITS.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Afergan</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hincks</surname>
            ,
            <given-names>S.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shibata</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jacob</surname>
          </string-name>
          , R.J.:
          <article-title>Phylter: a system for modulating notifications in wearables using physiological sensing</article-title>
          .
          <source>In: International conference on augmented cognition</source>
          . pp.
          <volume>167</volume>
          -
          <fpage>177</fpage>
          . Springer (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Aleven</surname>
            ,
            <given-names>V.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koedinger</surname>
            ,
            <given-names>K.R.:</given-names>
          </string-name>
          <article-title>An effective metacognitive strategy: Learning by doing and explaining with a computer-based cognitive tutor</article-title>
          .
          <source>Cognitive science 26</source>
          (
          <issue>2</issue>
          ),
          <volume>147</volume>
          -
          <fpage>179</fpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Betts</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferris</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fincham</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          :
          <article-title>Cognitive and metacognitive activity in mathematical problem solving: prefrontal and parietal patterns</article-title>
          .
          <source>Cognitive, Affective, &amp; Behavioral Neuroscience</source>
          <volume>11</volume>
          (
          <issue>1</issue>
          ),
          <volume>52</volume>
          -
          <fpage>67</fpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Arroyo</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cooper</surname>
            ,
            <given-names>D.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burleson</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woolf</surname>
            ,
            <given-names>B.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muldner</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Christopherson</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Emotion sensors go to school</article-title>
          . In: AIED. vol.
          <volume>200</volume>
          , pp.
          <fpage>17</fpage>
          –
          <lpage>24</lpage>
          . Citeseer
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Azevedo</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hadwin</surname>
            ,
            <given-names>A.F.</given-names>
          </string-name>
          :
          <article-title>Scaffolding self-regulated learning and metacognition – implications for the design of computer-based scaffolds</article-title>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Bixler</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>D'Mello</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Toward fully automated person-independent detection of mind wandering</article-title>
          .
          <source>In: User Modeling, Adaptation, and Personalization</source>
          . pp.
          <fpage>37</fpage>
          –
          <lpage>48</lpage>
          . Springer, Cham (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Braver</surname>
            ,
            <given-names>T.S.</given-names>
          </string-name>
          :
          <article-title>The variable nature of cognitive control: a dual mechanisms framework</article-title>
          .
          <source>Trends in Cognitive Sciences</source>
          <volume>16</volume>
          (
          <issue>2</issue>
          ),
          <fpage>106</fpage>
          –
          <lpage>113</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Braver</surname>
            ,
            <given-names>T.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paxton</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Locke</surname>
            ,
            <given-names>H.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barch</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          :
          <article-title>Flexible neural mechanisms of cognitive control within human prefrontal cortex</article-title>
          .
          <source>PNAS</source>
          <volume>106</volume>
          (
          <issue>18</issue>
          ),
          <fpage>7351</fpage>
          –
          <lpage>7356</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Chaouachi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heraz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jraidi</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frasson</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Influence of dominant electrical brainwaves on learning performance</article-title>
          . In: E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education. pp.
          <fpage>2448</fpage>
          –
          <lpage>2454</lpage>
          . Association for the Advancement of Computing in Education (AACE) (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Conati</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Merten</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Eye-tracking for user modeling in exploratory learning environments: An empirical evaluation</article-title>
          .
          <source>Knowledge-Based Systems</source>
          <volume>20</volume>
          (
          <issue>6</issue>
          ),
          <fpage>557</fpage>
          –
          <lpage>574</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>D'Mello</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Olney</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hays</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Gaze tutor: A gaze-reactive intelligent tutoring system</article-title>
          .
          <source>IJHCS</source>
          <volume>70</volume>
          (
          <issue>5</issue>
          ),
          <fpage>377</fpage>
          –
          <lpage>398</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Ezen-Can</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grafsgaard</surname>
            ,
            <given-names>J.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lester</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boyer</surname>
            ,
            <given-names>K.E.</given-names>
          </string-name>
          :
          <article-title>Classifying student dialogue acts with multimodal learning analytics</article-title>
          .
          <source>In: Proceedings of the Fifth International Conference on Learning Analytics and Knowledge</source>
          . pp.
          <fpage>280</fpage>
          –
          <lpage>289</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Gonthier</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Macnamara</surname>
            ,
            <given-names>B.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chow</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conway</surname>
            ,
            <given-names>A.R.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Braver</surname>
            ,
            <given-names>T.S.</given-names>
          </string-name>
          :
          <article-title>Inducing proactive control shifts in the AX-CPT</article-title>
          .
          <source>Frontiers in Psychology</source>
          <volume>7</volume>
          ,
          <elocation-id>1822</elocation-id>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Heffernan</surname>
            ,
            <given-names>N.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heffernan</surname>
            ,
            <given-names>C.L.</given-names>
          </string-name>
          :
          <article-title>The assistments ecosystem: Building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching</article-title>
          .
          <source>International Journal of Artificial Intelligence in Education</source>
          <volume>24</volume>
          (
          <issue>4</issue>
          ),
          <fpage>470</fpage>
          –
          <lpage>497</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Heraz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frasson</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Predicting learner answers correctness through brainwaves assessment and emotional dimensions</article-title>
          .
          <source>In: AIED</source>
          . pp.
          <fpage>49</fpage>
          –
          <lpage>56</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Heraz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frasson</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Towards a brain-sensitive intelligent tutoring system: detecting emotions from brainwaves</article-title>
          .
          <source>Advances in Artificial Intelligence</source>
          <volume>2011</volume>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Hutt</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krasich</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brockmole</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>D'Mello</surname>
            ,
            <given-names>S.K.</given-names>
          </string-name>
          :
          <article-title>Breaking out of the lab: Mitigating mind wandering with gaze-based attention-aware technology in classrooms</article-title>
          .
          <source>In: Proc. of ACM CHI'21 Conference</source>
          . ACM (
          <year>2021</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Hutt</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mills</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bosch</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krasich</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brockmole</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>D'Mello</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>"Out of the fr-eye-ing pan": towards gaze-based models of attention during learning with technology in the classroom</article-title>
          .
          <source>In: Proc. User Modeling, Adaptation and Personalization</source>
          . pp.
          <fpage>94</fpage>
          –
          <lpage>103</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Kalyuga</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Enhancing instructional efficiency of interactive e-learning environments: a cognitive load perspective</article-title>
          .
          <source>Educational Psychology Review</source>
          <volume>19</volume>
          ,
          <fpage>387</fpage>
          –
          <lpage>399</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <collab>NIRx Medical Technologies, LLC</collab>
          :
          <article-title>NIRSport user manual</article-title>
          (Sept
          <year>2015</year>
          ), https://support.nirx.de/wp-content/uploads/2016/06/NIRSport88-UserManual-2015-09-R30-inkl.-Baseplate.pdf
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Malmberg</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Järvelä</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holappa</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haataja</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Siipo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Going beyond what is visible: What multichannel data can reveal about interaction in the context of collaborative learning?</article-title>
          <source>Computers in Human Behavior</source>
          <volume>96</volume>
          ,
          <fpage>235</fpage>
          –
          <lpage>245</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Peck</surname>
            ,
            <given-names>E.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carlin</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jacob</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Designing brain-computer interfaces for attention-aware systems</article-title>
          .
          <source>Computer</source>
          <volume>48</volume>
          (
          <issue>10</issue>
          ),
          <fpage>34</fpage>
          –
          <lpage>42</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Pike</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maior</surname>
            ,
            <given-names>H.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Porcheron</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharples</surname>
            ,
            <given-names>S.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wilson</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          :
          <article-title>Measuring the effect of think aloud protocols on workload using fNIRS</article-title>
          .
          <source>In: Proceedings of the SIGCHI conference on human factors in computing systems</source>
          . pp.
          <fpage>3807</fpage>
          –
          <lpage>3816</lpage>
          . CHI '14, ACM, New York, NY, USA (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Solovey</surname>
            ,
            <given-names>E.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Girouard</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chauncey</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hirshfield</surname>
            ,
            <given-names>L.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sassaroli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zheng</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jacob</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          :
          <article-title>Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines</article-title>
          .
          <source>In: Proc. ACM symposium on User interface software and technology</source>
          . pp.
          <fpage>157</fpage>
          –
          <lpage>166</lpage>
          . ACM
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Stevens</surname>
            ,
            <given-names>R.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Galloway</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berka</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>EEG-related changes in cognitive workload, engagement and distraction as students acquire problem solving skills</article-title>
          .
          <source>In: International conference on user modeling</source>
          . pp.
          <fpage>187</fpage>
          –
          <lpage>196</lpage>
          . Springer (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Unal</surname>
            ,
            <given-names>D.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arrington</surname>
            ,
            <given-names>C.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Solovey</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Walker</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Using thinkalouds to understand rule learning and cognitive control mechanisms within an intelligent tutoring system</article-title>
          .
          <source>In: AIED</source>
          . pp.
          <fpage>500</fpage>
          –
          <lpage>511</lpage>
          . Springer (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>VanLehn</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>The behavior of tutoring systems</article-title>
          .
          <source>International Journal of Artificial Intelligence in Education</source>
          <volume>16</volume>
          (
          <issue>3</issue>
          ),
          <fpage>227</fpage>
          –
          <lpage>265</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Woolf</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burleson</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arroyo</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dragon</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cooper</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Picard</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Affect-aware tutors: recognising and responding to student affect</article-title>
          .
          <source>International Journal of Learning Technology</source>
          <volume>4</volume>
          (
          <issue>3-4</issue>
          ),
          <fpage>129</fpage>
          –
          <lpage>164</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>