<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Estimation and Visualization of Webcam Eye Tracking for Text Reading</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Anastasiia</forename><surname>Grynenko</surname></persName>
							<email>anastasiia.hrynenko@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky Ave. 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olena</forename><surname>Turuta</surname></persName>
							<email>olena.turuta@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky Ave. 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ruslan</forename><surname>Kasheparov</surname></persName>
							<email>ruslan.kasheparov@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky Ave. 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olga</forename><surname>Kalynychenko</surname></persName>
							<email>olga.kalynychenko@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky Ave. 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleksii</forename><surname>Turuta</surname></persName>
							<email>oleksii.turuta@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky Ave. 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Estimation and Visualization of Webcam Eye Tracking for Text Reading</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C06C8BEB1DC207B1E3B9B5D635C8E59D</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Eye-tracking</term>
					<term>gaze detection</term>
					<term>NLP</term>
					<term>Corpus</term>
					<term>Artificial Intelligence Ethics</term>
					<term>Diagnostic Accuracy</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>With eye-tracking research becoming increasingly popular, we asked whether special equipment (eye trackers) is really necessary for such experiments. Webcams are inexpensive and nearly everyone has one at home. But are they precise enough for text reading experiments? We tested whether an ordinary webcam can determine where a reader's eyes are directed while reading text from a screen. We found that a webcam can be used to detect the line being read at certain text sizes and line spacings. However, a webcam is not suitable for capturing individual words or letters; this requires more accurate eye trackers.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Eye-tracking is the measurement of eye movements and the determination of gaze, i.e. the location where a person is looking. Eye movement has interested scientists since the 19th century. One of the first important milestones was Louis Émile Javal's discovery that reading is not a smooth sweep over a text but a series of short stops (fixations) and quick jumps (saccades). Since the 1970s, research on gaze tracking in reading has gained popularity, especially through the work of Rayner <ref type="bibr" target="#b0">[1]</ref>. An important idea is the strong eye-mind hypothesis put forward in 1980 by Just and Carpenter, according to which whatever the gaze is fixed on is being processed <ref type="bibr" target="#b1">[2]</ref>. There are other hypotheses as well, in particular the immediacy hypothesis, according to which the eyes do not move on until all processing of the fixated material is completed. There are also opposing views holding that eye movements do not really reflect moment-by-moment cognitive processing demands during reading. In any case, scientists agree that gaze tracking can provide important data about reading.</p><p>Initially, eye movements were studied to determine their role in reading and language comprehension. Later, psycholinguists began to use this technology to organize language learning for different social groups, to improve cross-cultural communication in business, and to address many theoretical questions about how people perceive language. Eye-tracking is used not only in reading text, but also in listening to music, typing, and visual search. There are many applications of this technology <ref type="bibr" target="#b2">[3]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Relevance</head><p>The technology is widely used in education, both as student eye tracking (determining student engagement, academic performance, and skills) and as teacher eye tracking (professional gaze analysis). In addition, there is the idea of studying how tasks are performed under eye tracking, with the possibility of turning the recordings into educational material. Eye movement research has also helped in training for some professions, such as pilots, by supporting methods for modeling complex real-world events, detecting errors, and assessing consequences.</p><p>Marketing, human-computer interaction, and neuroergonomics are also well-known applications of eye tracking, used for market research, audience research, as an input device, and for workplace adaptation, respectively <ref type="bibr" target="#b3">[4]</ref>. There are ideas for using tracking data to improve NLP models, to evaluate them, and to explore applying the models to other languages and problems <ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref>. There are also many other applications of gaze tracking systems, such as research on perception and cognitive processes, lie detection <ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref>, and detection of deepfakes <ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref>.</p><p>Another interesting use of such systems is diagnosing a person's mental state, which is especially relevant in times of full-scale war. An example of this idea in practice is the Ukrainian startup Anima <ref type="bibr" target="#b12">[13]</ref>. Indeed, many Ukrainians, especially the military, suffer from constant stress that strains their psychological state. It is important to monitor mental health continuously and prevent its possible deterioration in order to understand the readiness of the military for combat. Gaze tracking systems can help here, as a shifted distribution of attention to stimuli of different emotional valence may indicate certain deviations. This idea can be extended to diagnosing various other mental illnesses, as publications already describe ways to diagnose them by studying a person's gaze.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Eye-tracking features</head><p>Calibration is necessary to adapt the system's algorithm to the person sitting in front of the eye tracker. Some systems do not need calibration, but most still require it. Calibration is possible when the point the person is looking at is known. Calibration procedures for mobile devices vary and are described in their manuals. For screen-based calibration, the participant focuses their gaze on targets shown on the screen. The more targets in different parts of the screen are analyzed, the more accurate the system will be, but this can be time-consuming and inconvenient for some experiments.</p><p>In our experiment, we want to collect data that can be useful to professionals interested in applying eye tracking in their fields, and to evaluate the quality and accuracy of the resulting system.</p><p>Since our experiment will not use special high-precision equipment that might not require calibration, and the format will be screen-based, we will calibrate on points from different parts of the screen: a 3 × 3 grid of nine points, with one point in the center of the screen and the remaining eight along its perimeter, four of them in the corners.</p></div>
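The 3 × 3 calibration grid described in this section can be generated programmatically. A minimal sketch in Python (the 5% margin ratio and the function name are our assumptions, not parameters from the paper):

```python
def calibration_points(width, height, margin=0.05):
    """Return the 9 targets of a 3x3 calibration grid: one point in the
    screen centre, four in the corners, and four at the edge midpoints.
    `margin` keeps perimeter targets slightly inside the screen edge."""
    xs = [margin * width, width / 2, (1 - margin) * width]
    ys = [margin * height, height / 2, (1 - margin) * height]
    return [(x, y) for y in ys for x in xs]

# e.g. for a FullHD screen
points = calibration_points(1920, 1080)
```

Iterating over such a list, showing each target, and collecting several clicks per point mirrors the calibration procedure used in the experiment.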
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Datasets</head><p>Eye-tracking reading experiments often use fixations (location, duration) and saccades (length, duration, start and end point) as the captured primitives. More detailed experiments also record the number of fixations per 100 words, whether fixations are progressive or regressive, and the frequency of regressions.</p><p>Eye-tracking datasets come with two different types of input data: the original video together with the specified characteristics of the experiment, or an already processed format in which the raw eye parameters are mapped to the resulting point on the screen <ref type="bibr" target="#b13">[14]</ref>.</p><p>Two methods are used for text experiments: rapid serial visual presentation (RSVP), in which words are presented at a set rate in the same location, and self-paced reading, in which only a few words are presented on the screen at a time and readers advance the text by pressing a button. There are variations of the self-paced method. Some experiments allow looking back in the text. Some display only one word, while others display several at once. Sometimes the words appear in the same place, and sometimes they are laid out spatially, as in normal text. There are also studies that treat a text as a time-series process <ref type="bibr" target="#b14">[15]</ref> or build on an attention mechanism <ref type="bibr" target="#b15">[16]</ref>.</p><p>As for textual datasets, they often consist of single sentences and are monolingual, although longer texts and multilingual resources are also available, such as the GECO <ref type="bibr" target="#b16">[17]</ref> and MECO <ref type="bibr" target="#b17">[18]</ref> datasets <ref type="bibr" target="#b18">[19]</ref>. It is also important to note that most existing datasets are in English. Ukrainian is considered a low-resource language with very few existing datasets <ref type="bibr" target="#b19">[20]</ref>. Therefore, our experiment will use text in Ukrainian.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Hardware</head><p>Hardware differs by specialization, human interface, scope of tracking, and technical characteristics.</p><p>"Special" trackers are any dedicated gaze-detection hardware, while "non-special" means a standard webcam. The interface can be head-stabilized, remote, or mobile (also called "head-mounted"). In terms of tracking area, devices are differentiated into those that use a computer screen as the stimulus area; those that can operate in more complex geometries, such as a multi-screen booth; and those that can operate in the real world. Technical characteristics include accuracy, sampling rate, resolution, etc.</p><p>We divided trackers by human interface, additionally adding non-special devices (webcams). This gives the following groups: head-stabilized, remote, mobile, and non-special.</p><p>Head-stabilized eye tracking is usually much more accurate than other types, so it is used in neurophysiological experiments, where participants' comfort matters less than system accuracy and precision, as well as in experiments with animals. However, such trackers are uncomfortable and not designed for experiments where natural responses with head movements are important.</p><p>Remote systems consist of a camera and an infrared source. Most often such cameras are mounted under the screen, as the pupil is more visible from that position due to the shape of the eye and eyelids. Remote eye tracking is useful in experiments with infants and in natural interactions. Limitations of such trackers include a fixed working area beyond which gaze tracking is difficult, sensitivity of the results to head movements, sunlight reflected in participants' eyes, and the difficulty of capturing more than one participant.</p><p>Mobile gaze tracking often takes the form of goggles that include a scene-recording camera, gaze-capture cameras, and illuminators. Such devices are a great option for real-world research. Sunlight is a problem, but a solvable one. In addition, tracking gaze at the periphery is difficult, and the relative coordinate system differs greatly from participant to participant.</p><p>Non-special equipment has neither infrared illuminators nor a head stabilizer, so its accuracy is orders of magnitude lower. However, a significant advantage is the ubiquity of webcams.</p><p>Comparing the different hardware, we concluded that special devices, especially head-stabilized ones, give greater accuracy. But since webcams are much more common and require no additional cost, we wanted to investigate webcam accuracy. The camera used in the experiment has a frame rate of 30 FPS and a resolution of 1280x720.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Software</head><p>These days, various programs enable eye-tracking, both with special eye trackers and with general webcams <ref type="bibr" target="#b20">[21]</ref>. Our focus was on those that can be applied to general webcams. A comparison of different eye-tracking software is shown in Table <ref type="table" target="#tab_0">1</ref>. Although some software can in principle detect, for example, saccades, this is only achievable with special equipment, because the sampling rate of webcams is not sufficient.</p><p>Having compared and worked with the above-mentioned programs, we decided to use WebGazer.js for the experiment. Although this software works relatively coarsely and cannot capture saccades and fixations, its well-developed community and available documentation make it well suited for conducting small experiments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Data preparation</head><p>Preparation for the experiment requires careful preparation of the data. To this end, we developed a service that generates custom images with specific parameters: the picture resolution (matched to the user's screen), the text itself, the font style, font size, spacing between words, spacing between sentences, distance to the screen border, and the position of the text on the screen.</p><p>The service produces a PNG image that precisely adheres to the specified parameters, together with a corresponding JSON file whose metadata describes the image, including the placement coordinates of each word within it.</p><p>This output makes the necessary data readily available for the experiment, enabling a smooth and accurate analysis of the acquired information.</p><p>The final generated image contains the text itself and additional borders. The word boundaries calculated by the program from the text metadata are displayed in green.</p><p>Figure <ref type="figure" target="#fig_0">1</ref> shows the generated final image, built from the metadata. </p></div>
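As an illustration of the metadata side of such a service, the sketch below computes approximate word placement coordinates and serializes them to JSON. The fixed-width character model and all names here are our assumptions; the real service renders actual font glyphs to produce the PNG.

```python
import json

def layout_words(text, font_size=32, line_spacing=2.5,
                 margin=50, page_width=1920, char_width_ratio=0.55):
    """Assign an approximate bounding box to every word, wrapping lines
    at the page width, and return the metadata as a JSON string."""
    char_w = font_size * char_width_ratio      # crude fixed-width model
    line_h = font_size * line_spacing
    x, y, words = margin, margin, []
    for word in text.split():
        w = len(word) * char_w
        if x + w > page_width - margin:        # wrap to the next line
            x, y = margin, y + line_h
        words.append({"word": word, "x": x, "y": y,
                      "width": w, "height": font_size})
        x += w + char_w                        # advance past a space
    return json.dumps({"font_size": font_size,
                       "line_spacing": line_spacing,
                       "words": words}, ensure_ascii=False)

meta = json.loads(layout_words("Приклад українського тексту для експерименту"))
```

The word-level bounding boxes in the JSON are what later allow gaze samples to be matched against individual words.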
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Experiment description</head><p>The primary objective of this paper is not to conduct a comprehensive study of eye movement during text-related tasks. Instead, its main focus is to introduce a system that can facilitate such experiments. The paper includes an experiment that serves two purposes: first, to showcase the viability of the eye-tracking system, and second, to examine the feasibility of substituting a simpler webcam solution for costly eye trackers.</p><p>The goal of the experiment is to determine how well webcams with WebGazer.js can determine the position in the text where the participant is focusing. For this purpose, a Ukrainian text was projected on a 14'' FullHD screen with different font sizes and line spacings. The outcome of the experiment was the selection of the optimal combination of text parameters.</p><p>In preparation for the experiment, 16 variants of the text were formed, in the Calibri font, with font sizes from 24 pt to 32 pt in steps of 4 pt and line spacings from 1.5 to 3 in steps of 0.5.</p><p>The experiment involved people with different eye colors and without special features (e.g., glasses). After starting the app, each participant waited for the camera to capture their face and then underwent calibration on 9 points. During calibration, each point had to be clicked 5 times while looking at it. After a successful calibration, the system's accuracy when looking at the center point was calculated and displayed on the screen. The gaze point determined by the system was displayed as a blue dot. The text parameters could be changed from the top menu bar. During the experiment, we assessed how accurately the place where the gaze was directed could be determined. For this purpose, hits within a word were examined.</p><p>The percentage of hits on a word was calculated using the following equation</p><formula xml:id="formula_0">𝑃 = 𝑁₁ / 𝑁₂,<label>(1)</label></formula><p>where 𝑁₁ is the number of hits on the container that contains the word, and 𝑁₂ is the total number of gaze samples collected over 5 seconds of gaze. Hit or miss is evaluated at regular intervals of 10 milliseconds.</p><p>To activate the accuracy calculation function, we right-clicked at the place where the gaze was actually directed.</p></div>
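Equation (1) can be computed directly from the sampled gaze points. A sketch of this check (function and parameter names are ours, not from the paper):

```python
def hit_percentage(samples, box):
    """samples: (x, y) gaze estimates taken every 10 ms over a 5-second
    window; box: (left, top, width, height) of the container holding the
    target word. Returns the share of samples inside the container, in %."""
    left, top, w, h = box
    hits = sum(1 for x, y in samples
               if left <= x <= left + w and top <= y <= top + h)
    return 100.0 * hits / len(samples)

# toy check: 3 of 4 samples land inside a 100x40 container at (50, 50)
p = hit_percentage([(60, 60), (120, 70), (140, 85), (500, 500)],
                   (50, 50, 100, 40))  # 75.0
```

A full 5-second window at 10 ms intervals would supply roughly 500 samples per measurement.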
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9.">Heat maps</head><p>Heat maps in eye tracking play a crucial role in visualizing and analyzing gaze focus data. By utilizing color intensity, heat maps provide a graphical representation that indicates the level of eye attention in specific areas of an image or screen. This enables researchers and practitioners to gain insights into the patterns of visual exploration and focus during various tasks.</p><p>Real-time display of eye movements is particularly valuable in tasks that require immediate feedback or interaction. The inclusion of real-time heat maps allows for the visualization of the current gaze location on the screen, providing instant visual feedback to both the user and the experimenter.</p><p>One of the advantages of heat maps is their ability to accumulate information over time. By highlighting areas on the screen where the gaze was directed for longer durations with stronger and more noticeable colors, heat maps effectively capture the salient regions of interest. This feature aids in reducing noise or transient eye movements, as the emphasis is placed on areas that received sustained attention.</p><p>Heat maps can be employed in various domains, including usability testing, user experience research, website optimization, and advertising analysis. They provide a valuable tool for understanding visual attention patterns, optimizing user interfaces, and enhancing the overall user experience.</p><p>Figure <ref type="figure" target="#fig_1">2</ref> shows the heat map display. The image clearly demonstrates the informative nature of heat maps, as they vividly depict the visual information regarding the focus of attention.</p></div>
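The accumulation behind a heat map can be sketched as a coarse counting grid: each gaze sample increments the cell it falls into, so regions that receive sustained attention build up higher values. Cell size and names here are our assumptions; production tools typically add Gaussian smoothing and color mapping on top.

```python
def accumulate_heatmap(samples, width, height, cell=40):
    """Bin (x, y) gaze samples into a grid of `cell`-pixel squares;
    each hit increments its cell, so dwell time accumulates over time."""
    cols, rows = width // cell, height // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in samples:
        cx = min(int(x) // cell, cols - 1)   # clamp to the grid edge
        cy = min(int(y) // cell, rows - 1)
        grid[cy][cx] += 1
    return grid

grid = accumulate_heatmap([(10, 10), (15, 12), (300, 200)], 400, 240)
```

Because transient samples contribute only one count each, sustained fixations dominate the final map, which is exactly the noise-reduction property described above.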
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="10.">Post-processing of received data</head><p>During the experiment, the program collects data about the user, in particular the x and y coordinates of their gaze and the corresponding timestamps. These data are organized and stored in a structured CSV (Comma-Separated Values) file and then transmitted to a server for further processing and analysis.</p></div>
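A minimal sketch of serializing such records with Python's standard csv module (the column names follow the description above; the function name and sample values are our assumptions):

```python
import csv
import io

def gaze_to_csv(samples):
    """Serialize (timestamp, x, y) gaze samples into CSV text with
    x, y, timestamp columns, ready to be sent to the server."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["x", "y", "timestamp"])
    for ts, x, y in samples:
        writer.writerow([x, y, ts])
    return buf.getvalue()

csv_text = gaze_to_csv([(0.0, 412.3, 207.9), (0.01, 415.1, 209.2)])
```

The resulting text can be uploaded as-is or written to disk with `open(..., newline="")` per the csv module's conventions.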
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="11.">Results</head><p>Figure <ref type="figure" target="#fig_2">3</ref> shows the application during the accuracy assessment at a particular location. The measured word-hit percentages are given in Table <ref type="table" target="#tab_1">2</ref>. Although we expected the results to improve steadily with larger font sizes and line spacings, we obtained a different pattern. We attribute this to the inaccuracy of the software, the limited number of trials, and internal brain processes that cause involuntary eye movements.</p><p>According to the results of the experiment, a word can be captured with a webcam at a font size of at least 32 pt and a line spacing of at least 2.5, with an accuracy above 84%; however, a webcam is not sufficient to determine which letter the participant is looking at.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="12.">Conclusion</head><p>The results of the experiment showed that a webcam can be used to recognize the line of text being read. However, compared with the results obtainable using special equipment, webcams are not accurate. We plan to experiment with more accurate special trackers in order to identify the letter at which the gaze is directed with greater precision.</p><p>The experiment demonstrated the feasibility of using webcams for eye tracking when reading texts on general-purpose devices. This is a useful advance, as it shows that commonly available webcams can be adapted to capture and analyze eye movement data. Using webcams for eye tracking removes the need for specialized and expensive equipment, which not only expands the accessibility of the technology but also creates opportunities for conducting eye-tracking studies on a larger scale and in diverse settings.</p><p>The implications of this finding extend beyond this experiment and may shape future research methodologies and applications in eye tracking.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1:</head><label>1</label><figDesc>Figure 1: The generated final image, built from the metadata. The user perceives the image without the boundaries of individual words.</figDesc><graphic coords="5,92.00,-8.79,419.05,397.14" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Screenshot of the heat map</figDesc><graphic coords="6,77.75,395.78,450.97,163.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Screenshot of the application during the experiment</figDesc><graphic coords="7,83.98,37.05,435.06,231.37" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 Comparative characteristics of the software</head><label>1</label><figDesc></figDesc><table><row><cell>Name</cell><cell>Format</cell><cell>Programming language</cell><cell>Detection objects</cell><cell>Accessibility</cell></row><row><cell>PyGaze</cell><cell>Toolbox for eye-tracking</cell><cell>Python</cell><cell>Coordinates on the screen, position and duration of fixations, saccades</cell><cell>Open source</cell></row><row><cell>WebGazer.js</cell><cell>Web application</cell><cell>JavaScript</cell><cell>Coordinates on the screen</cell><cell>Open source</cell></row><row><cell>PyGazeAnalyser</cell><cell>Toolbox for analysis and plotting of eye-tracking data</cell><cell>Python</cell><cell>Position and duration of fixations, saccades, as well as fixation map, scanpath and heatmap</cell><cell>Open source</cell></row><row><cell>xLabs</cell><cell>Browser extension for Google Chrome</cell><cell>JavaScript, C++</cell><cell>Coordinates on the screen</cell><cell>No longer supported</cell></row><row><cell>OGAMA</cell><cell>Windows desktop application</cell><cell>C#.NET</cell><cell>Coordinates on the screen, position and duration of fixations, saccades</cell><cell>Open source, no longer supported</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc></figDesc><table><row><cell>Experiment results (word hit percentage, %)</cell><cell></cell><cell></cell><cell></cell></row><row><cell>Line spacing \ Font size</cell><cell>24</cell><cell>28</cell><cell>32</cell></row><row><cell>1.5</cell><cell>53</cell><cell>60</cell><cell>16</cell></row><row><cell>2.0</cell><cell>64</cell><cell>71</cell><cell>31</cell></row><row><cell>2.5</cell><cell>69</cell><cell>64</cell><cell>87</cell></row><row><cell>3.0</cell><cell>41</cell><cell>67</cell><cell>84</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This publication is based upon work from COST Action CA21131, supported by COST (European Cooperation in Science and Technology).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Eye movements in reading and information processing: 20 years of research</title>
		<author>
			<persName><forename type="first">K</forename><surname>Rayner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Bulletin</title>
		<imprint>
			<biblScope unit="volume">124</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="372" to="422" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A theory of reading: From eye fixations to comprehension</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Just</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Carpenter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Review</title>
		<imprint>
			<biblScope unit="volume">85</biblScope>
			<biblScope unit="page" from="109" to="130" />
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Pymovements: A python package for eye movement data processing</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Krakowczyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Reich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chwastek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">N</forename><surname>Jakobi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Prasse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Süss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Jäger</surname></persName>
		</author>
		<idno type="DOI">10.1145/3588015.3590134</idno>
	</analytic>
	<monogr>
		<title level="m">the Eye Tracking Research and Applications Symposium (ETRA)</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">The Neuroergonomics of Aircraft Cockpits: The Four Stages of Eye-Tracking Integration to Enhance Flight Safety</title>
		<author>
			<persName><forename type="first">Vsevolod</forename><surname>Peysakhovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Olivier</forename><surname>Lefrançois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Frédéric</forename><surname>Dehais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mickaël</forename><surname>Causse</surname></persName>
		</author>
		<idno type="DOI">10.3390/safety4010008</idno>
		<ptr target="https://www.researchgate.net/publication/323441671_The_Neuroergonomics_of_Aircraft_Cockpits_The_Four_Stages_of_Eye-Tracking_Integration_to_Enhance_Flight_Safety" />
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">Safety</note>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Towards Best Practices for Leveraging Human Language Processing Signals for Natural Language Processing</title>
		<author>
			<persName><forename type="first">N</forename><surname>Hollenstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Barrett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Beinborn</surname></persName>
		</author>
		<ptr target="https://drive.google.com/file/d/1FxZso4wgjz2PFrKsZC7Elb-L5PXYZdEJ/view" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">LINCR</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Neural Natural Language Generation: A Survey on Multilinguality, Multimodality, Controllability and Learning</title>
		<author>
			<persName><forename type="first">Erkut</forename><surname>Erdem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Menekşe</forename><surname>Kuyu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Semih</forename><surname>Yagcioglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anette</forename><surname>Frank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Letitia</forename><surname>Parcalabescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Barbara</forename><surname>Plank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrii</forename><surname>Babii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oleksii</forename><surname>Turuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aykut</forename><surname>Erdem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Iacer</forename><surname>Calixto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elena</forename><surname>Lloret</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elena-Simona</forename><surname>Apostol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ciprian-Octavian</forename><surname>Truică</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Branislava</forename><surname>Šandrih Todorović</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sanda</forename><surname>Martinčić-Ipšić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gábor</forename><surname>Berend</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Albert</forename><surname>Gatt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gražina</forename><surname>Korvel</surname></persName>
		</author>
		<idno type="DOI">10.1613/jair.1.12918</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Artificial Intelligence Research</title>
		<imprint>
			<biblScope unit="volume">73</biblScope>
			<biblScope unit="page" from="1131" to="1207" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Modeling Human Reading with Neural Attention</title>
		<author>
			<persName><forename type="first">M</forename><surname>Hahn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Keller</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/D16-1009.pdf" />
	</analytic>
	<monogr>
		<title level="m">Conference on Empirical Methods in Natural Language Processing</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Information object storage model with accelerated text processing methods</title>
		<author>
			<persName><forename type="first">O</forename><surname>Barkovska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pyvovarova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kholiev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ivashchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Rosinskyi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Paper presented at the CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="286" to="299" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Assessing Deception in Questionnaire Surveys With Eye-Tracking</title>
		<author>
			<persName><forename type="first">X</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.3389/fpsyg.2021.774961</idno>
		<ptr target="https://www.frontiersin.org/articles/10.3389/fpsyg.2021.774961/full" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Psychology</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Application of Eye Tracker in Lie Detection</title>
		<author>
			<persName><forename type="first">F</forename><surname>Ge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">Q</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">L</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">C</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Hu</surname></persName>
		</author>
		<ptr target="https://pubmed.ncbi.nlm.nih.gov/32530172/" />
	</analytic>
	<monogr>
		<title level="j">Fa yi xue za zhi</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="229" to="232" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">The eyes know it: FakeET-An Eyetracking Database to Understand Deepfake Perception</title>
		<author>
			<persName><forename type="first">P</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chugh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dhall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Subramanian</surname></persName>
		</author>
		<ptr target="https://arxiv.org/pdf/2006.06961.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 International Conference on Multimodal Interaction</title>
				<meeting>the 2020 International Conference on Multimodal Interaction</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Where Do Deep Fakes Look? Synthetic Face Detection via Gaze Tracking</title>
		<author>
			<persName><forename type="first">I</forename><surname>Demir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><forename type="middle">A</forename><surname>Ciftci</surname></persName>
		</author>
		<ptr target="https://arxiv.org/pdf/2101.01165.pdf" />
	</analytic>
	<monogr>
		<title level="m">ACM Symposium on Eye Tracking Research and Applications</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Neurobiological test of mental state Anima</title>
		<ptr target="https://ua.anima.help/" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Eye Tracking for Everyone</title>
		<author>
			<persName><forename type="first">K</forename><surname>Krafka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Khosla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kellnhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kannan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bhandarkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Matusik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1606.05814</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Usage of phase space diagram to finding significant features of rhinomanometric signals</title>
		<author>
			<persName><forename type="first">A</forename><surname>Yerokhin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Turuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Babii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nechyporenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Mahdalina</surname></persName>
		</author>
		<idno type="DOI">10.1109/STC-CSIT.2016.7589871</idno>
	</analytic>
	<monogr>
		<title level="m">XIth International Scientific and Technical Conference Computer Sciences and Information Technologies (CSIT)</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="70" to="72" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Every word counts: A multilingual analysis of individual human alignment with model attention</title>
		<author>
			<persName><forename type="first">S</forename><surname>Brandl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Hollenstein</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2210.04963</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Presenting GECO: An eyetracking corpus of monolingual and bilingual sentence reading</title>
		<author>
			<persName><forename type="first">U</forename><surname>Cop</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Dirix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Drieghe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Duyck</surname></persName>
		</author>
		<idno type="DOI">10.3758/s13428-016-0734-0</idno>
		<ptr target="https://doi.org/10.3758/s13428-016-0734-0" />
	</analytic>
	<monogr>
		<title level="j">Behav Res</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="page" from="602" to="615" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><surname>Kuperman</surname></persName>
		</author>
		<ptr target="https://meco-read.com/category/data-news/" />
		<title level="m">The Multilingual Eye-tracking Corpus (MECO)</title>
				<editor>
			<persName><surname>Siegelman</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Methods of Multilanguage Question Answering</title>
		<author>
			<persName><forename type="first">D</forename><surname>Dashenkov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Smelyakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Turuta</surname></persName>
		</author>
		<idno type="DOI">10.1109/PICST54195.2021.9772145</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 8th International Conference on Problems of Infocommunications, Science and Technology (PIC S&amp;T)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="251" to="255" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Ukrainian News Corpus as Text Classification Benchmark</title>
		<author>
			<persName><forename type="first">D</forename><surname>Panchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Maksymenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Turuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Luzan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tytarenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Turuta</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-14841-5_37</idno>
		<ptr target="https://doi.org/10.1007/978-3-031-14841-5_37" />
	</analytic>
	<monogr>
		<title level="m">ICTERI 2021 Workshops</title>
		<title level="s">Communications in Computer and Information Science</title>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">1635</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A study of eye tracking technology and its applications</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Punde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Jadhav</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Manza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)</title>
				<meeting>the 2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)<address><addrLine>Aurangabad, India; Piscataway, NJ, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017-10-06">5-6 October 2017</date>
			<biblScope unit="page" from="86" to="90" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
