<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Analysing program source code reading skills with eye tracking technology</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Vilius Turenko, Simonas Baltulionis, Mindaugas Vasiljevas, Robertas Damaševičius Department of Software Engineering Kaunas University of Technology</institution>
          ,
          <addr-line>Kaunas</addr-line>
          ,
          <country country="LT">Lithuania</country>
        </aff>
      </contrib-group>
      <fpage>33</fpage>
      <lpage>37</lpage>
      <abstract>
        <p>Many areas of software engineering require good program code reading skills. We analyse the process of program reading using gaze tracking technology. We performed a study with six subjects, who performed four code reading tasks. The errors embedded into the program source code and the corresponding lines of code were analysed as Areas of Interest (AoI). We formulated a research hypothesis and tested it using a one-way analysis of variance (ANOVA) test. The results of the study confirmed our research hypothesis that the number of fixations on AoIs is larger than the number of fixations on other areas.</p>
      </abstract>
      <kwd-group>
        <kwd>program comprehension</kwd>
        <kwd>code reading</kwd>
        <kwd>eye tracking</kwd>
        <kwd>gaze tracking</kwd>
        <kwd>human-centered computing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        Program code reading skills are important in many areas
of software engineering, especially in adopting good code
writing practices and techniques, understanding how
programs work, identifying cases of poor programming style
and bad design, and delivering effective software
maintenance. Examples include program tracing and
searching for bugs, code smells and design anti-patterns [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
As automatic methods for finding bugs and poor coding
practices are still not very effective [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], source code reading
and analysis by human experts remain as relevant as ever.
Program comprehension is a crucial part of computer science
education, contributing to an understanding of the complexity of information technology (IT) systems [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Interest in applying gaze tracking in the context of multimedia-supported learning is on the rise [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Gaze data have been successfully applied to analyze changes in cognitive load during the assimilation of learning materials and are starting to be incorporated into adaptive e-Learning systems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. However, there are currently no effective strategies for evaluating code reading skills and assessing program comprehension. Recently, eye tracking was proposed as a viable research instrument for evaluating source code reading [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The outcomes of gaze tracking studies are especially relevant in the context of Evidence-based Software Engineering (EBSE), as they provide detailed insights regarding different practices in software engineering [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Eye movements are directly related to cognitive and information-processing activity; through these processes, visual information is used to stimulate the brain and to understand the given task. There are two assumptions relating cognitive processes to fixations: 1) if a person is looking at an object (such as a word), he/she tries to understand it; 2) a person fixates his/her gaze on an object until he/she understands it. A fixation is an aggregation of gaze points based on a specified area and time span. An Area of Interest (AoI) is a part of a visual stimulus that is of special importance. Other important characteristics are a scan path, which is a series of fixations that indicates the path and tendency of eye movements, and a heat map, which identifies the focus of
visual attention [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. For example, Uwano et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] studied
graduate students conducting code reviews and discovered
that their gaze patterns followed a common scanpath, first
reading code top to bottom, and then rereading a few parts in
more depth. Chandrika et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] confirmed that eye tracking traits over source code lines and comments are positively related to code comprehension. Melo et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] analysed
how programmers debug code with embedded pre-processor
directives. Jbara and Feitelson [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] analysed how code
repeatability impacts the number of fixations in a predefined
area of interest (AOI), and the total fixation time. Beelders and
du Plessis [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] analysed how the number and durations of
fixations are influenced by syntax highlighting. Yenigalla et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] also used fixation counts and duration to analyse how
programming novices understood program code.
      </p>
      <p>In this paper, we describe the results of a gaze tracking study
on evaluating and analysing the code reading skills of software
programmers, specifically focusing on the ability to find errors
in program code.</p>
    </sec>
    <sec id="sec-2">
      <title>II. METHODOLOGY</title>
      <sec id="sec-2-1">
        <title>A. Program reading tasks</title>
        <p>The study consisted of four tasks:
a. In Task 1, the aim was to read the program source code and determine the result it returns (prints) (Fig. 1).</p>
        <p>b. In Task 2, the aim was to identify the purpose of the
algorithm and discover the hidden error associated with the
incompatibility of the variable types (Fig. 2).</p>
        <p>c. In Task 3, the aim was to find three syntactic errors
related to the incorrect use of variable names, types and basic
methods (Fig. 3).</p>
        <p>d. In Task 4, the aim was to determine whether the
algorithm would perform the specified function, and to find a
hidden semantic error (Fig. 4).</p>
        <p>During gaze tracking, we collect the number and locations of fixations, which are gaze points directed towards a certain part of an image; a part of special importance is labelled as an Area of Interest (AoI). Fixations are indications of visual attention. Here we analyse the distribution of the number of fixations within and outside the AoIs. The eye movements between fixations are known as saccades; however, we do not use the saccade data in this study. A scan path is a directed path created by saccades between eye fixations.</p>
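        <p>As an illustration only (not the authors' implementation), the following minimal Python sketch shows how fixation points can be split into AoI and non-AoI counts, assuming each AoI is approximated by a screen-coordinate rectangle; the coordinates in the example are hypothetical.</p>
        <preformat>
# Minimal sketch: count fixations inside vs. outside rectangular AoIs.
# AoIs are (x_min, y_min, x_max, y_max) rectangles in screen coordinates.

def inside(fixation, aoi):
    """Return True if the fixation point (x, y) lies within the AoI rectangle."""
    x, y = fixation
    x_min, y_min, x_max, y_max = aoi
    return x_max >= x >= x_min and y_max >= y >= y_min

def count_fixations(fixations, aois):
    """Split the fixation count into AoI hits and non-AoI fixations."""
    on_aoi = sum(1 for f in fixations if any(inside(f, a) for a in aois))
    return on_aoi, len(fixations) - on_aoi

# Hypothetical example: one AoI covering the code line with the embedded error.
aois = [(100, 400, 900, 430)]
fixations = [(150, 410), (500, 415), (700, 90)]
print(count_fixations(fixations, aois))   # -> (2, 1)
        </preformat>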
      </sec>
      <sec id="sec-2-2">
        <title>C. Research hypotheses</title>
        <p>We assume that subjects are thinking about the object of
interest when they are looking directly at it. Based on this
assumption, we formulate the following research hypothesis:</p>
        <p>H1: The number of fixations on Areas of Interest is larger
than the number of fixations on other areas.</p>
      </sec>
      <sec id="sec-2-3">
        <title>D. Testing of hypotheses</title>
        <p>For testing the hypothesis, we employ a statistical one-way analysis of variance (ANOVA) test. The test, which is a standard statistical method, confirms or rejects the equality of the means of two or more samples by examining the variances of the samples. ANOVA compares the variance between the data samples to the variance within each particular sample. If the between-sample variance is much larger than the within-sample variance, the means of the different samples cannot be considered equal.</p>
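        <p>The following minimal sketch shows how such a test can be run in Python with scipy.stats.f_oneway; the fixation counts used here are placeholders, not the data collected in the study.</p>
        <preformat>
# Minimal sketch of the one-way ANOVA test on fixation counts.
from scipy.stats import f_oneway

aoi_fixations     = [34, 41, 29, 38, 45, 31]   # fixations on AoIs, one value per subject (placeholder data)
non_aoi_fixations = [18, 22, 15, 20, 27, 16]   # fixations outside AoIs (placeholder data)

f_stat, p_value = f_oneway(aoi_fixations, non_aoi_fixations)
# H1 is supported for a task when the two groups differ significantly,
# e.g. when p_value falls below 0.05.
print(f_stat, p_value)
        </preformat>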
      </sec>
    </sec>
    <sec id="sec-3">
      <title>III. EXPERIMENTAL SETTING AND RESULTS</title>
      <sec id="sec-3-1">
        <title>A. Experimental settings</title>
        <p>Six participants (1 female and 5 male) were recruited for this study, aged between 20 and 25 years with an average of 22.8 years. All participants had normal or corrected-to-normal vision. The participants were familiar with computers, had previous experience in using the internet, and all of them were studying or working in the field of programming. Informed consent was obtained from the subjects before the study.</p>
        <p>All subjects used the same Dell laptop, which had an additional monitor used for the experiment and a Tobii Eye Tracker 4C eye-tracking device used to record eye movements and gaze fixations. The eye tracker uses infrared corneal reflection to measure the point of gaze at a data rate of 90 Hz. A 24-inch screen was used to show the slides containing the program source code. Following the setup instructions, the eye tracker was mounted just below the visible screen area. The operating distance between the eye tracker and the subjects’ eyes was 70-75 cm. Efforts were made to ensure good lighting, and the device was calibrated before the test. For each subject, the eye tracker was re-calibrated using an integrated 5-point calibration to achieve the most accurate results.</p>
        <p>Before the start of the experiment, the subjects were asked to fill in a Google Form questionnaire on their demographic characteristics (gender, education, age, level of programming skills). All responses were anonymized. After providing their personal characteristics, the subjects could read some general information about the tasks they would face in the experiment. In this way, the subjects were informed about several important rules, for example, that no additional libraries or other extensions were used, and that some tasks were bug-free while others had hidden bugs; the idea was to keep the subjects focused by not telling them which tasks had bugs and which were bug-free. After this general introduction, the presentation with the slides containing the source code of the tasks was opened. The observation session started at the beginning of each task and was stopped after the task was completed; each task had a separate observation session. Tasks 3 and 4 included some brief information about the given algorithms, for example, the definitions of a palindrome and an Armstrong number, with examples of each. To complete each task, 90 seconds were given. After the completion of each task, the participants were asked to provide answers in a Google Form on the result of the program execution (Task 1), the purpose of the algorithm (Task 2), and whether the program was correct (Tasks 3 and 4).</p>
      </sec>
      <sec id="sec-3-2">
        <title>C. Results</title>
        <p>The results of participants (number of fixations) are
summarized according to tasks and subjects in Fig. 6.</p>
      </sec>
      <sec id="sec-3-3">
        <title>B. Experimental system</title>
        <p>Gaze monitoring system was used to measure the number
and duration of fixations in the Areas of Interest (AOIs). The
system consists of components listed below (see Fig. 5).




</p>
        <p>The Data Gathering Module reads the raw gaze data from
the eye tracker device via USB.</p>
        <p>The Data Preprocessing Module filters noise and calculates additional metrics and characteristics such as saccades.</p>
        <p>The Data Persistence Module saves the acquired gaze data to CSV, XML or a database.</p>
        <p>The Data Post-processing Module maps persisted gaze data to AoIs and calculates additional data features such as the total and average number and duration of fixations.</p>
        <p>The Configuration Module configures how data is gathered and persisted in the system.</p>
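        <p>As a rough illustration of the post-processing step (again a sketch, not the system's actual code or schema), persisted fixation data could be aggregated per AoI as follows; the file name, column names and AoI rectangle are hypothetical.</p>
        <preformat>
# Minimal sketch: map persisted fixations to AoIs and compute
# total and average number and duration of fixations per AoI.
import pandas as pd

fixations = pd.read_csv("session_fixations.csv")   # hypothetical columns: x, y, duration_ms

aois = {"aoi_line_42": (100, 400, 900, 430)}       # hypothetical AoI rectangle

def to_aoi(row):
    for name, (x0, y0, x1, y1) in aois.items():
        if x1 >= row.x >= x0 and y1 >= row.y >= y0:
            return name
    return "non_AoI"

fixations["aoi"] = fixations.apply(to_aoi, axis=1)

# count = number of fixations, sum/mean = total and average duration per AoI
summary = fixations.groupby("aoi")["duration_ms"].agg(["count", "sum", "mean"])
print(summary)
        </preformat>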
        <p>The system offers four types of data stream, which are used to gather fixations and saccades directly from the gaze tracking device:</p>
        <list list-type="bullet">
          <list-item><p>Unfiltered gaze</p></list-item>
          <list-item><p>Lightly filtered gaze</p></list-item>
          <list-item><p>Sensitive fixation</p></list-item>
          <list-item><p>Slow fixation</p></list-item>
        </list>
        <p>For this experiment, the sensitive fixation type was chosen because of its accuracy and reduction of unnecessary noise.</p>
        <p>In addition, the system runs in the background and has no effect on the stimulus; thus, the subject's attention is concentrated only on the source code.</p>
        <p>Besides the type of data stream, before starting a gaze tracking session the user has an option to record the screen; for now this is only a prototype feature, which needs to be improved for better accuracy. A session can also store additional information about the subject, for example name, age and other descriptions; if this is not necessary, the user can select an anonymous session. In the near future, the system will offer an option to choose the screen resolution manually, which will allow selecting concrete zones of interest.</p>
      </sec>
      <sec id="sec-3-3">
        <title>C. Results</title>
        <p>The results of the participants (number of fixations) are summarized by task and subject in Fig. 6.</p>
        <p>An example of the gaze path generated from gaze tracking
data is presented in Fig. 7. The gaze path shows how and in
what sequence the subject has read the code. Note the order of
reading is clearly not linear.</p>
        <p>An example of the heatmap generated from gaze tracking
data is presented in Fig. 8. Note that most of the attention was
focused on and around the Area of Interest centred on code
line 42 (see also Fig. 1).</p>
        <p>In Fig. 9, the average gaze fixation numbers for AoI and
non-AoI areas are presented. We can see that for all tasks, the
number of fixations on AoIs was larger, although the
difference was not statistically significant for Task 2 (also see
the results of statistical testing using ANOVA in Table I).</p>
        <p>The results of statistical testing using ANOVA are
presented in Table I. We found statistically significant
differences in the number of fixations on the Areas of Interest
(AoI) vs non-AoI for Tasks 1, 3 and 4. However, we did not
find such differences for Task 2.</p>
      </sec>
      <sec id="sec-3-4">
        <title>D. Limitations and threats to validity</title>
        <p>
          The study is based on the assumption that humans think about objects when they look at them; however, we cannot be sure that this assumption is correct. Our eye-tracking experiment only explores the cognitive response to a visual stimulus without considering the quality of the responses. Moreover, due to the small sample of subjects and its gender imbalance, we could not analyse the gender and affective differences, which have been noted as significant in other gaze tracking studies [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. To minimize threats to validity, the participants did not know about the hypothesis formulated for the research. They only knew that they would be helping us to understand how program code is read and understood.
        </p>
        <p>In three of the four tasks performed, we were able to confirm our research hypothesis. In one task, the hypothesis could not be confirmed. We think that the reason was the poor design of the task, which we hope to improve in our further research.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>IV. CONCLUSION</title>
      <p>We have presented a study aimed at understanding how
programmers read and debug program code. Our results
indicate that gaze tracking can be used successfully to follow
and assess the cognitive behaviour of programmers as they
correctly identify the errors embedded into the source code.
The number of gaze fixations is a significant parameter when
assessing the level of attention attributed to a particular Area
of Interest.</p>
      <p>Future work will focus on the methodological
improvement of our research study and collection of a larger
dataset from more subjects.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Obaidellah</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          , Al Haek,
          <string-name>
            <surname>M.</surname>
          </string-name>
          , &amp; Cheng,
          <string-name>
            <surname>P. C.</surname>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>A survey on the usage of eye-tracking in computer programming</article-title>
          .
          <source>ACM Computing Surveys</source>
          ,
          <volume>51</volume>
          (
          <issue>1</issue>
          ) doi:10.1145/3145904
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suri</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kumar</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Misra</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blažauskas</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Damaševičius</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies</article-title>
          .
          <source>Entropy</source>
          ,
          <volume>20</volume>
          (
          <issue>5</issue>
          ), 372. doi:10.3390/e20050372
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Damaševičius</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>On The Human, Organizational, and Technical Aspects of Software Development and Analysis</article-title>
          .
          <source>In Information Systems Development</source>
          (pp.
          <fpage>11</fpage>
          -
          <lpage>19</lpage>
          ). Springer US. doi:10.1007/b137171_2
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Alemdag</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Cagiltay</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>A systematic review of eye tracking research on multimedia learning</article-title>
          .
          <source>Computers and Education</source>
          ,
          <volume>125</volume>
          ,
          <fpage>413</fpage>
          -
          <lpage>428</lpage>
          . doi:10.1016/j.compedu.2018.06.023
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Rosch</surname>
            ,
            <given-names>J. L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Vogel-Walcutt</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>A review of eye-tracking applications as tools for training</article-title>
          .
          <source>Cognition, Technology and Work</source>
          ,
          <volume>15</volume>
          (
          <issue>3</issue>
          ),
          <fpage>313</fpage>
          -
          <lpage>327</lpage>
          . doi:10.1007/s10111-012-0234-7
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Busjahn</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schulte</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Busjahn</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Analysis of code reading to gain more insight in program comprehension</article-title>
          .
          <source>In Proceedings of the 11th Koli Calling International Conference on Computing Education Research - Koli Calling '11</source>
          . ACM Press. doi:10.1145/2094131.2094133
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Sharafi</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soh</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Guéhéneuc</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>A systematic literature review on the usage of eye-tracking in software engineering</article-title>
          .
          <source>Information and Software Technology</source>
          ,
          <volume>67</volume>
          ,
          <fpage>79</fpage>
          -
          <lpage>107</lpage>
          . doi:10.1016/j.infsof.2015.06.008
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Blascheck</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kurzhals</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raschke</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burch</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weiskopf</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ertl</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Visualization of eye tracking data: A taxonomy and survey</article-title>
          .
          <source>Computer Graphics Forum</source>
          ,
          <volume>36</volume>
          (
          <issue>8</issue>
          ),
          <fpage>260</fpage>
          -
          <lpage>284</lpage>
          . doi:10.1111/cgf.13079
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Uwano</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nakamura</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Monden</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Matsumoto</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Analyzing individual performance of source code review using reviewers' eye movement</article-title>
          .
          <source>In Proceedings of the 2006 symposium on Eye tracking research &amp; applications - ETRA '06</source>
          . ACM Press. doi:10.1145/1117309.1117357
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Chandrika</surname>
            ,
            <given-names>K. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amudha</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sudarsan</surname>
            ,
            <given-names>S. D.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Recognizing eye tracking traits for source code review</article-title>
          .
          <source>In 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)</source>
          .
          IEEE. doi:10.1109/etfa.2017.8247637
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Melo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Narcizo</surname>
            ,
            <given-names>F. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hansen</surname>
            ,
            <given-names>D. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brabrand</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wasowski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Variability through the Eyes of the Programmer</article-title>
          .
          <source>In 2017 IEEE/ACM 25th International Conference on Program Comprehension (ICPC)</source>
          . IEEE. https://doi.org/10.1109/icpc.2017.34
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Ahmad</given-names>
            <surname>Jbara</surname>
          </string-name>
          and
          <string-name>
            <given-names>Dror G.</given-names>
            <surname>Feitelson</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>How programmers read regular code: a controlled experiment using eye tracking</article-title>
          .
          <source>In Proceedings of the 2015 IEEE 23rd International Conference on Program Comprehension (ICPC '15)</source>
          . IEEE Press, Piscataway, NJ, USA,
          <fpage>244</fpage>
          -
          <lpage>254</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Beelders</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp; du
          <string-name>
            <surname>Plessis</surname>
            ,
            <given-names>J.-P.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>The Influence of Syntax Highlighting on Scanning and Reading Behaviour for Source Code</article-title>
          .
          <source>In Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists on - SAICSIT '16</source>
          . ACM Press. https://doi.org/10.1145/2987491.2987536
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Yenigalla</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sinha</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharif</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Crosby</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>How Novices Read Source Code in Introductory Courses on Programming: An EyeTracking Experiment</article-title>
          .
          <source>In Lecture Notes in Computer Science</source>
          (pp.
          <fpage>120</fpage>
          -
          <lpage>131</lpage>
          ). Springer International Publishing. https://doi.org/10.1007/978-3-319-39952-2_13
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Ksiazeh</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marszalek</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Capizzi</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Napoli</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Polap</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wozniak</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Faster image filtering via parallel programming</article-title>
          .
          <source>International Journal of Computer Science &amp; Applications</source>
          ,
          <volume>16</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>55</fpage>
          -
          <lpage>67</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Liaudanskaitė</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saulytė</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jakutavičius</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vaičiukynaitė</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zailskaitė-Jakštė</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Damaševičius</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Analysis of affective and gender factors in image comprehension of visual advertisement</article-title>
          .
          <source>Artificial Intelligence and Algorithms in Intelligent Systems. CSOC2018. Advances in Intelligent Systems and Computing</source>
          , vol
          <volume>764</volume>
          . Springer, Cham,
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          . doi:10.1007/978-3-319-91189-2_1
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>