<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Exploiting Micro Facial Expressions for More Inclusive User Interfaces</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alessio</forename><surname>Ferrato</surname></persName>
							<email>ale.ferrato@stud.uniroma3.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Engineering</orgName>
								<orgName type="institution">Roma Tre University</orgName>
								<address>
									<addrLine>Via della Vasca Navale 79</addrLine>
									<postCode>00146</postCode>
									<settlement>Rome</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Carla</forename><surname>Limongelli</surname></persName>
							<email>limongel@dia.uniroma3.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Engineering</orgName>
								<orgName type="institution">Roma Tre University</orgName>
								<address>
									<addrLine>Via della Vasca Navale 79</addrLine>
									<postCode>00146</postCode>
									<settlement>Rome</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mauro</forename><surname>Mezzini</surname></persName>
							<email>mauro.mezzini@uniroma3.it</email>
							<affiliation key="aff1">
								<orgName type="department">Department of Education</orgName>
								<orgName type="institution">Roma Tre University</orgName>
								<address>
									<addrLine>Viale del Castro Pretorio 20</addrLine>
									<postCode>00185</postCode>
									<settlement>Rome</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giuseppe</forename><surname>Sansonetti</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Engineering</orgName>
								<orgName type="institution">Roma Tre University</orgName>
								<address>
									<addrLine>Via della Vasca Navale 79</addrLine>
									<postCode>00146</postCode>
									<settlement>Rome</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Exploiting Micro Facial Expressions for More Inclusive User Interfaces</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">EF75583C920E785344502A3832D5A43B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T08:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>User interfaces</term>
					<term>User modeling</term>
					<term>Emotion recognition</term>
					<term>Computer vision</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Current image/video acquisition and analysis techniques allow for not only the identification and classification of objects in a scene but also more sophisticated processing. For example, there are video cameras today able to capture micro facial expressions, namely, facial expressions that occur in a fraction of a second. Such micro expressions can provide useful information to define a person's emotional state. In this article, we propose to use these features to collect useful information for designing and implementing increasingly effective interactive technologies. In particular, facial micro expressions could be used to develop interfaces capable of fostering the social and cultural inclusion of users belonging to different realities and categories. The preliminary experimental results obtained by recording the reactions of individuals while observing artworks demonstrate the existence of correlations between the action units (i.e., single components of the muscular movement in which it is possible to break down facial expressions) and the emotional reactions of a sample of users, as well as correlations within some homogeneous groups of testers.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction and Background</head><p>Systems capable of identifying a user's emotional state starting from her behavior are becoming more and more popular <ref type="bibr" target="#b0">[1]</ref>. Among these, Automatic Facial Expression Analysis (AFEA) <ref type="bibr" target="#b1">[2]</ref> systems are of particular importance. Facial expressions can be defined as facial changes in response to a person's internal emotional states, intentions, or social communications <ref type="bibr" target="#b2">[3]</ref>. This research topic is certainly not new if we consider that Darwin in 1872 had already addressed the subject in <ref type="bibr" target="#b3">[4]</ref>. Since then, there have been several attempts by behavioral scientists to conceive methods and models for the automatic analysis of facial expressions on image sequences <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. These studies have laid the foundations for the realization of computer systems able to help us understand this natural form of communication among human beings (e.g., see <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10]</ref>). Such systems, although very efficient, are inevitably affected by context, culture, genre and so on <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref>. In this article, we propose the analysis of facial micro expressions as a possible solution to these problems. Micro facial expressions are facial expressions that occur in a fraction of a second. They can provide accurate information about a person's actual emotional state, regardless of culture, language, and personal background. This information can, therefore, be exploited to create intelligent user interfaces, capable of capturing the real emotions of large communities of individuals, thus promoting cultural and social inclusion among individuals coming from different realities and belonging to different categories, including disadvantaged and at-risk groups, as well as vulnerable people. There are various applications and scenarios in which such intelligent interfaces could provide significant benefits, including recommender systems <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b15">16]</ref>, intelligent tutoring systems <ref type="bibr" target="#b16">[17]</ref>, and, more generally, smart cities <ref type="bibr" target="#b17">[18]</ref>. To demonstrate the feasibility of our idea, we report the preliminary results of a user study conducted by recording the micro facial expressions of some testers in response to certain perceptual stimuli. Although this study was carried out in a specific domain (i.e., cultural heritage <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20]</ref>) and on a very limited and skewed sample of users, the results obtained show the existence of correlations between some action units (i.e., single components of the muscular movement in which facial expressions can be broken down) and emotional reactions. They also show that it is possible to identify common correlations within different categories of individuals. 
This somehow confirms our initial idea and encourages us to continue our experimental analysis, extending it to a more significant and heterogeneous sample of users.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Kinesics</head><p>Kinesics is the science that studies body language. According to the anthropologist Ray Birdwhistell, who coined this term in 1952, this science allows us to interpret a person's thoughts, feelings, and emotions by analyzing her facial expressions, gestures, posture, gaze, and movements of the legs and arms <ref type="bibr" target="#b20">[21]</ref>. Birdwhistell's theories were highly regarded over the years and it is well known that mere verbal communication represents only a small part of the message that allows two individuals to convey information to each other. According to the 7-38-55 Rule developed by Albert Mehrabian in the 1970s <ref type="bibr" target="#b21">[22]</ref>, communication takes place in three ways: the content (what is communicated), tone (how it is communicated), and body language (posture, expressions, etc). The digits that appear in the rule name indicate the percentage of the relevance of these ways: 7% the content of the message, 38% the tone of the voice, 55% the body language.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Facial expressions (FACS)</head><p>The kinesic system of signification and signaling includes the movements of the body, face, and eyes <ref type="bibr" target="#b22">[23]</ref>. Facial expressions manifest the intentions of the subject based on the context and depending on this there are facial expressions that differ substantially, also giving the listener the possibility to understand the state of mind of her interlocutor. In 1979 Paul Ekman and Wallace V. Friesen, based on the previously developed study by Swedish anatomist Carl-Herman Hjortsjö <ref type="bibr" target="#b23">[24]</ref>, proposed the Facial Action Coding System (FACS) <ref type="bibr" target="#b22">[23]</ref>, an anatomically accurate system to describe all visually distinguishable facial movements.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.1.">Action Units (AUs)</head><p>The FACS decoding system explores facial expressions by breaking them down into the smallest fundamental units, the action units (AUs), giving each one a meaning. Ekman and Friesen cataloged 44 AUs describing changes in facial expressions and 14 AUs mapping changes in the eye gaze direction and the head orientation. The AUs play a fundamental role in the recognition of emotions, movements, and attitudes, not only of the face but also of the body, allowing us to analyze the state of mind of the subject. The combination of the AUs enables us to map the four main emotions, namely, happiness, sadness, anger, and fear <ref type="bibr" target="#b24">[25]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Data Collection</head><p>The research questions underlying the experimental analysis we performed are the following: is there a correlation between the micro facial expressions of an observer and her degree of appreciation (i.e., rating) of an artwork? Is it possible to identify correlations shared by specific categories of users? To answer these questions, it was necessary to collect the data that could allow us to verify our initial assumptions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">The development of a data collection system</head><p>At the beginning of our research activity, we had planned real experimentation in a suitable place to verify our hypotheses, for example, a museum. Unfortunately, the limitations imposed by the COVID-19 pandemic did not allow us to follow this road. Consequently, to collect data it was necessary to develop an online application. First of all, we developed a website<ref type="foot" target="#foot_0">1</ref> that had mainly two functions. The first function was to simulate a visit sharing the same characteristics as a visit to a real museum. For this purpose, we selected some artworks from those exhibited at the National Gallery of Modern and Contemporary Art<ref type="foot" target="#foot_1">2</ref> in Rome, Italy. The selection was made in such a way as to be able to show the user works as different as possible. The second function was to collect information about the visitor. In particular, we were interested in acquiring data relating to her demographic profile, degree of appreciation of the work displayed at that time, and resulting micro facial expressions. Specifically, participants were shown eight artworks and asked to rate each of them on a five-point Likert scale. Meanwhile, the participants were recorded through the webcam of their device while viewing each artwork. Demographic information was collected through a final questionnaire. Specifically, the demographic data relating to the users who participated in the experimental trials are shown in Table <ref type="table" target="#tab_0">1</ref>. The participants were 73, almost equally distributed between females and males, and aged mostly between 21 and 29. Most participants had a high school diploma and were mainly university students. Once the dataset was collected, it was necessary to process the recorded videos using facial recognition software. We employed two different software tools for this purpose: OpenFace<ref type="foot" target="#foot_2">3</ref> , an opensource toolkit capable of performing action unit analysis, and iMotions<ref type="foot" target="#foot_3">4</ref> , a proprietary software.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Data Analysis</head><p>Let us now analyze the results returned by the two analysis software. Table <ref type="table" target="#tab_1">2</ref> shows the average values, standard deviations, as well as the minimum and maximum values, calculated on the whole dataset. First of all, we can observe that the iMotions software returns more information than OpenFace and that the two software tools sometimes analyze the same micro expressions. The mean of the individual action units is often less than the standard deviation. At the same time, the minimum values differ highly from the maximum values. These results, therefore, indicate the tendency of visitors to assume a neutral expression for most of the time except in rare moments. The attention score, namely, the attention showed by the visitor while observing the artwork, is noteworthy. The average value is very close to the maximum value, and the deviation is very low. We can, hence, conclude that most testers kept high their level of attention during the virtual visit. Table <ref type="table" target="#tab_2">3</ref> shows the value of Spearman's correlation coefficient of the ratings assigned by the testers to the individual works and the average score obtained by the features for each video. We can immediately notice a high correlation value between ratings and eye closure. The same thing happens for perceived sadness. The negative value of these correlations indicates that a high value of the feature corresponds to a low rating attributed to the work. We then verified if there were any correlations shared by some categories of testers. More specifically, we grouped the data based on gender, the rating attributed to the artwork, and the number of recognized artworks. Table <ref type="table" target="#tab_3">4</ref> reports the values returned by OpenFace. We note a positive correlation value between the rating and the cheek raise action unit related to women. The same thing happens for the distension of the eyelids, both for women and for those who have recognized few works. Finally, for those who assigned a low rating, we found a negative correlation for the lifting of the lips and their sinking. In Table <ref type="table" target="#tab_4">5</ref>, we can instead observe the correlation values calculated on the results of iMotions. We can observe how eye closure is negatively correlated in the two groups. Also, sadness is negatively correlated in the two groups. Joy is also negatively correlated. And, finally, fear and contempt are negatively correlated for the same group of people who recognized a few works. These results, therefore, show how all features are negatively correlated, thus expressing that a high score of a given variable corresponds to a lower score. Further analyzes that we cannot report for reasons of space show how visitors who did not like the works expressed their low ratings more clearly.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions and Future Works</head><p>The ultimate goal of our research activities was to verify whether facial micro expressions can be exploited to create interfaces that can adapt differently depending on the characteristics of the active user. If so, it would be possible to foster cultural and social inclusion between individuals from different backgrounds and belonging to different categories, including disadvantaged and at-risk categories as well as vulnerable people. In particular, from the experimental results, it emerged how it is pos-sible to identify some correlation between facial micro expressions and the degree of appreciation of an object, specifically an artwork. It is also possible to identify correlations within some homogeneous groups of testers.</p><p>Our experimental analysis is very simplified and also suffers from numerous limitations. Among others, it is evident as follows:</p><p>• it was performed in a specific domain, namely that of cultural heritage; • the micro facial expressions were collected in response to a specific stimulus, that is, the vision of an artwork; • the data was collected through a virtual and not live experimentation; • the sample of users was very limited; • the sample of users was mostly made up of university students, so it was anything but heterogeneous.</p><p>A much more extensive and rigorous experimental analysis is therefore needed, including further categories of users, scenarios (e.g., <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b26">27,</ref><ref type="bibr" target="#b27">28]</ref>), and information (e.g., <ref type="bibr" target="#b28">[29]</ref>). Only in this way we could indeed draw definitive conclusions on the existence of correlations between micro facial expressions and categories of testers.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Demographics of the 73 users involved in the experimental analysis</figDesc><table><row><cell></cell><cell>Item</cell><cell>Frequency</cell></row><row><cell>Gender</cell><cell>Female Male</cell><cell>37 36</cell></row><row><cell></cell><cell>Under 18</cell><cell>3</cell></row><row><cell></cell><cell>18-20</cell><cell>5</cell></row><row><cell></cell><cell>21-29</cell><cell>40</cell></row><row><cell>Age</cell><cell>30-39</cell><cell>3</cell></row><row><cell></cell><cell>40-49</cell><cell>3</cell></row><row><cell></cell><cell>50-59</cell><cell>12</cell></row><row><cell></cell><cell>Over 60</cell><cell>7</cell></row><row><cell></cell><cell>Primary school</cell><cell>1</cell></row><row><cell></cell><cell>8th grade diploma</cell><cell>9</cell></row><row><cell>Education</cell><cell>High school diploma</cell><cell>41</cell></row><row><cell></cell><cell>University degree</cell><cell>19</cell></row><row><cell></cell><cell>PhD</cell><cell>3</cell></row><row><cell></cell><cell>Unemployed</cell><cell>3</cell></row><row><cell></cell><cell>Student</cell><cell>39</cell></row><row><cell>Profession</cell><cell>Public employee Private employee</cell><cell>7 14</cell></row><row><cell></cell><cell>Self-employed</cell><cell>7</cell></row><row><cell></cell><cell>Retired</cell><cell>3</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Summary table of the output from the two software tools</figDesc><table><row><cell></cell><cell></cell><cell cols="2">iMotions</cell><cell></cell><cell></cell><cell cols="2">OpenFace</cell><cell></cell></row><row><cell>AU &amp; Emotions</cell><cell>Mean</cell><cell>Std</cell><cell>Min</cell><cell>Max</cell><cell>Mean</cell><cell>Std</cell><cell>Min</cell><cell>Max</cell></row><row><cell>Inner Brow Raise</cell><cell cols="3">5,099658 12,94021 0</cell><cell>80,29622</cell><cell cols="4">0,168434 0,141843 0,039858 1,462658</cell></row><row><cell>Brow Raise</cell><cell cols="3">3,565345 8,847247 0</cell><cell>55,49171</cell><cell cols="4">0,085252 0,061114 0,021103 0,478671</cell></row><row><cell>Brow Lower</cell><cell cols="3">5,334099 12,40427 0</cell><cell>76,77342</cell><cell cols="3">0,765825 0,739402 0</cell><cell>3,596304</cell></row><row><cell>Upper Lid Raiser</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="4">0,055565 0,031256 0,014244 0,245095</cell></row><row><cell>Cheek Raise</cell><cell cols="3">3,659209 10,67665 0</cell><cell>69,96562</cell><cell cols="3">0,390288 0,466435 0</cell><cell>2,387549</cell></row><row><cell>Lid Tighten</cell><cell>0,93787</cell><cell cols="2">2,604525 0</cell><cell>23,44269</cell><cell cols="3">0,616453 0,719307 0</cell><cell>3,199208</cell></row><row><cell>Nose Wrinkle</cell><cell cols="2">0,973915 3,62885</cell><cell>0</cell><cell>44,94059</cell><cell cols="4">0,063658 0,054983 0,013989 0,350426</cell></row><row><cell>Upper Lip Raise</cell><cell cols="3">1,135299 4,613869 0</cell><cell>44,57584</cell><cell cols="3">0,555492 0,527603 0</cell><cell>3,205763</cell></row><row><cell>Lip Corner Puller</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="3">0,397487 0,473547 0</cell><cell>2,572438</cell></row><row><cell>Dimpler</cell><cell cols="3">3,837253 7,598816 0</cell><cell>54,32411</cell><cell cols="3">0,570261 0,564813 0</cell><cell>2,724876</cell></row><row><cell>Lip Corner Depressor</cell><cell cols="3">1,766322 4,586096 0</cell><cell>41,22998</cell><cell cols="4">0,189903 0,220943 0,036511 1,946785</cell></row><row><cell>Chin Raise</cell><cell cols="3">2,785176 5,867499 0</cell><cell>37,18328</cell><cell cols="2">0,407547 0,2544</cell><cell cols="2">0,080133 1,586465</cell></row><row><cell>Lip Stretch</cell><cell cols="3">2,535029 7,484421 0</cell><cell>61,21821</cell><cell cols="2">0,117077 0,11238</cell><cell cols="2">0,030426 1,131618</cell></row><row><cell>Lip Tighten</cell><cell>0,93787</cell><cell cols="2">2,604525 0</cell><cell>23,44269</cell><cell cols="4">0,121904 0,123215 0,018549 0,929964</cell></row><row><cell>Mouth Open</cell><cell cols="3">6,867683 11,51858 0</cell><cell>66,08074</cell><cell cols="4">0,365305 0,331243 0,064533 2,580889</cell></row><row><cell>Jaw Drop</cell><cell cols="3">3,772275 6,797671 0</cell><cell>42,74697</cell><cell>0,36226</cell><cell>0,30048</cell><cell>0,0674</cell><cell>2,31789</cell></row><row><cell>Blink</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="4">0,169887 0,066811 0,041817 0,383651</cell></row><row><cell>Lip Suck</cell><cell cols="3">5,259716 9,547491 0</cell><cell>58,75693</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Lip Press</cell><cell cols="3">2,926959 5,577136 0</cell><cell>31,28165</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Lip Pucker</cell><cell 
cols="3">2,870787 7,183146 0</cell><cell>46,96508</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Eye Closure</cell><cell cols="2">1,966987 3,09202</cell><cell>0</cell><cell>30,51927</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Eye Widen</cell><cell cols="3">3,038526 7,873084 0</cell><cell>62,36883</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Smile</cell><cell cols="3">7,651248 16,54695 0</cell><cell>82,14044</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Smirk</cell><cell cols="3">2,030771 5,974433 0</cell><cell>62,60415</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Engagement</cell><cell cols="3">15,29063 20,46839 0</cell><cell>88,82519</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Attention</cell><cell cols="4">93,17853 11,72724 15,89159 98,63756</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Anger</cell><cell cols="3">0,473087 1,830745 0</cell><cell>21,59573</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Sadness</cell><cell cols="3">0,869082 2,900364 0</cell><cell>28,76604</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Disgust</cell><cell cols="3">1,297257 4,502729 0</cell><cell>61,42045</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Joy</cell><cell cols="3">5,829057 15,26311 0</cell><cell>83,61379</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Surprise</cell><cell cols="3">1,364944 3,271783 0</cell><cell>31,10703</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Fear</cell><cell cols="3">0,468503 1,842737 0</cell><cell>16,90147</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Contempt</cell><cell cols="3">1,431101 5,146328 0</cell><cell>64,36057</cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc></figDesc><table><row><cell cols="2">Spearman's correlation coefficient</cell><cell></cell></row><row><cell></cell><cell>iMotions</cell><cell>OpenFace</cell></row><row><cell>AU &amp; Emotions</cell><cell cols="2">Spearman's Index</cell></row><row><cell>Inner Brow Raise</cell><cell>-0.07</cell><cell>-0.06</cell></row><row><cell>Outer Brow Raise</cell><cell>-0.01</cell><cell>-0.05</cell></row><row><cell>Brow Lower</cell><cell>-0.05</cell><cell>-0.06</cell></row><row><cell>Upper Lid Raise</cell><cell></cell><cell>-0.05</cell></row><row><cell>Cheek Raise</cell><cell>0.00</cell><cell>-0.05</cell></row><row><cell>Lid Tighten</cell><cell>-0.05</cell><cell>0.06</cell></row><row><cell>Nose Wrinkle</cell><cell>-0.04</cell><cell>-0,04</cell></row><row><cell>Upper Lip Raise</cell><cell>-0.03</cell><cell>-0.04</cell></row><row><cell>Lip Corner Puller</cell><cell></cell><cell>0.00</cell></row><row><cell>Dimpler</cell><cell>-0.02</cell><cell>-0.03</cell></row><row><cell>Lip Corner Depressor</cell><cell>-0.04</cell><cell>-0.06</cell></row><row><cell>Chin Raise</cell><cell>0.01</cell><cell>-0.07</cell></row><row><cell>Lip Stretch</cell><cell>-0.09</cell><cell>-0.04</cell></row><row><cell>Lid Tighten</cell><cell></cell><cell>-0.08</cell></row><row><cell>Mouth Open</cell><cell>-0.01</cell><cell>0.00</cell></row><row><cell>Jaw Drop</cell><cell>-0.05</cell><cell>-0.02</cell></row><row><cell>Blink</cell><cell></cell><cell>-0.08</cell></row><row><cell>Lip Suck</cell><cell>-0.03</cell><cell></cell></row><row><cell>Lip Press</cell><cell>-0.05</cell><cell></cell></row><row><cell>Lip Pucker</cell><cell>-0.06</cell><cell></cell></row><row><cell>Eye Closure</cell><cell>-0.17**</cell><cell></cell></row><row><cell>Eye Widen</cell><cell>0.03</cell><cell></cell></row><row><cell>Smile</cell><cell>-0.01</cell><cell></cell></row><row><cell>Smirk</cell><cell>0.04</cell><cell></cell></row><row><cell>Engagement</cell><cell>-0.04</cell><cell></cell></row><row><cell>Attention</cell><cell>-0.05</cell><cell></cell></row><row><cell>Anger</cell><cell>-0.05</cell><cell></cell></row><row><cell>Sadness</cell><cell>-0.13*</cell><cell></cell></row><row><cell>Disgust</cell><cell>-0.02</cell><cell></cell></row><row><cell>Joy</cell><cell>-0.09</cell><cell></cell></row><row><cell>Surprise</cell><cell>-0.07</cell><cell></cell></row><row><cell>Fear</cell><cell>-0.05</cell><cell></cell></row><row><cell>Contempt</cell><cell>-0.07</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4</head><label>4</label><figDesc>Correlations between homogeneous groups in OpenFace</figDesc><table><row><cell>Groups</cell><cell cols="2">Male Female</cell><cell>Low ratings</cell><cell>High ratings</cell><cell>Low frequency</cell><cell>High frequency</cell><cell>Few recognized</cell><cell>Many recognized</cell><cell>Low interest</cell><cell>High interest</cell></row><row><cell># Measurements</cell><cell>24</cell><cell>22</cell><cell>125</cell><cell>165</cell><cell>19</cell><cell>5</cell><cell>41</cell><cell>1</cell><cell>0</cell><cell>15</cell></row><row><cell>Inner Brow Raise</cell><cell>-0.11</cell><cell>-0.01</cell><cell>-0.08</cell><cell>0.01</cell><cell>-0.01</cell><cell>0.07</cell><cell>-0.06</cell><cell>0.39</cell><cell>0</cell><cell>-0.07</cell></row><row><cell>Brow Raise</cell><cell>-0.05</cell><cell>-0.04</cell><cell>-0.08</cell><cell>0.06</cell><cell>0.05</cell><cell>0.03</cell><cell>-0.05</cell><cell>0.39</cell><cell>0</cell><cell>-0.04</cell></row><row><cell>Brow Lower</cell><cell>-0.02</cell><cell>-0.09</cell><cell>-0.03</cell><cell>0.01</cell><cell>-0.06</cell><cell>0.00</cell><cell>-0.06</cell><cell>-0.39</cell><cell>0</cell><cell>-0.15</cell></row><row><cell>Upper Lid Raiser</cell><cell>-0.11</cell><cell>0.02</cell><cell>-0.11</cell><cell>-0.07</cell><cell>0.05</cell><cell>-0.06</cell><cell>-0.06</cell><cell>0.23</cell><cell>0</cell><cell>-0.04</cell></row><row><cell>Cheek Raise</cell><cell>-0.04</cell><cell>0.15*</cell><cell>-0.06</cell><cell>-0.04</cell><cell>0.05</cell><cell>0.06</cell><cell>0.06</cell><cell>-0.49</cell><cell>0</cell><cell>-0.02</cell></row><row><cell>Lid Tighten</cell><cell>-0.05</cell><cell>0.17*</cell><cell>-0.05</cell><cell>0.01</cell><cell>0.04</cell><cell>0.19</cell><cell>0.12*</cell><cell>0.71*</cell><cell>0</cell><cell>0.00</cell></row><row><cell>Nose Wrinkle</cell><cell>-0.06</cell><cell>-0.02</cell><cell>-0.15</cell><cell>-0.06</cell><cell>-0.08</cell><cell>0.38*</cell><cell>-0.05</cell><cell>0.00</cell><cell>0</cell><cell>0.02</cell></row><row><cell>Upper Lip Raise</cell><cell>-0.02</cell><cell>-0.09</cell><cell>-0.19*</cell><cell>-0.07</cell><cell>0.06</cell><cell>-0.16</cell><cell>-0.06</cell><cell>-0.05</cell><cell>0</cell><cell>-0.03</cell></row><row><cell>Lip Corner Puller</cell><cell>-0.05</cell><cell>0.05</cell><cell>-0.19*</cell><cell>-0.01</cell><cell>0.06</cell><cell>-0.03</cell><cell>0.00</cell><cell>0.10</cell><cell>0</cell><cell>-0.01</cell></row><row><cell>Dimpler</cell><cell>-0.02</cell><cell>-0.03</cell><cell>-0.17</cell><cell>-0.07</cell><cell>0.10</cell><cell>-0.06</cell><cell>-0.04</cell><cell>0.15</cell><cell>0</cell><cell>-0.09</cell></row><row><cell>Lip Corner Depressor</cell><cell>-0.10</cell><cell>0.00</cell><cell>-0.05</cell><cell>0.04</cell><cell>0.01</cell><cell>0.18</cell><cell>-0.06</cell><cell>-0.28</cell><cell>0</cell><cell>-0.10</cell></row><row><cell>Chin Raise</cell><cell>-0.10</cell><cell>-0.03</cell><cell>-0.13</cell><cell>-0.05</cell><cell>-0.08</cell><cell>0.20</cell><cell>-0.06</cell><cell>-0.23</cell><cell>0</cell><cell>-0.09</cell></row><row><cell>Lip Stretch</cell><cell>-0.09</cell><cell>0.03</cell><cell>0.00</cell><cell>0.00</cell><cell>0.06</cell><cell>0.10</cell><cell>-0.05</cell><cell>-0.15</cell><cell>0</cell><cell>-0.08</cell></row><row><cell>Lip 
Tighten</cell><cell>-0.08</cell><cell>-0.05</cell><cell>-0.09</cell><cell>-0.01</cell><cell>-0.07</cell><cell>0.04</cell><cell>-0.10</cell><cell>-0.54</cell><cell>0</cell><cell>-0.04</cell></row><row><cell>Mouth Open</cell><cell>-0.04</cell><cell>0.04</cell><cell>-0.14</cell><cell>0.07</cell><cell>0.03</cell><cell>0.18</cell><cell>0.00</cell><cell>0.13</cell><cell>0</cell><cell>-0.05</cell></row><row><cell>Jaw Drop</cell><cell>-0.06</cell><cell>0.04</cell><cell>-0.03</cell><cell>0.01</cell><cell>0.03</cell><cell>0.16</cell><cell>-0.02</cell><cell>-0.31</cell><cell>0</cell><cell>-0.05</cell></row><row><cell>Blink</cell><cell>-0.08</cell><cell>-0.08</cell><cell>-0.06</cell><cell>-0.09</cell><cell>-0.05</cell><cell>0.33*</cell><cell>-0.08</cell><cell>-0.28</cell><cell>0</cell><cell>-0.10</cell></row><row><cell cols="4">p &lt; .0001 '****'; p &lt; .001 '***', p &lt; .01 '**', p &lt; .05 '*'</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 5</head><label>5</label><figDesc>Correlations between homogeneous groups in iMotions</figDesc><table><row><cell>Groups</cell><cell>Male</cell><cell>Female</cell><cell>Low ratings</cell><cell>High ratings</cell><cell>Low frequency</cell><cell>High frequency</cell><cell>Few recognized</cell><cell>Many recognized</cell><cell>Low interest</cell><cell>High interest</cell></row><row><cell># Measurements</cell><cell>24</cell><cell>22</cell><cell>125</cell><cell>165</cell><cell>19</cell><cell>5</cell><cell>41</cell><cell>1</cell><cell>0</cell><cell>15</cell></row><row><cell>Brow Furrow</cell><cell>-0.04</cell><cell>-0.06</cell><cell>-0.05</cell><cell>0.04</cell><cell>0.00</cell><cell>0.11</cell><cell>-0.05</cell><cell>0.28</cell><cell>0</cell><cell>-0.08</cell></row><row><cell>Brow Raise</cell><cell>-0.02</cell><cell>0.00</cell><cell>-0.08</cell><cell>0.00</cell><cell>-0.06</cell><cell>0.12</cell><cell>-0.01</cell><cell>0.33</cell><cell>0</cell><cell>0.03</cell></row><row><cell>Engagement</cell><cell>-0.12</cell><cell>0.04</cell><cell>-0.12</cell><cell>0.04</cell><cell>-0.03</cell><cell>-0.04</cell><cell>-0.04</cell><cell>0.00</cell><cell>0</cell><cell>-0.11</cell></row><row><cell>Lip Corner Depressor</cell><cell>-0.07</cell><cell>0.01</cell><cell>0.05</cell><cell>0.03</cell><cell>-0.05</cell><cell>0.35*</cell><cell>-0.06</cell><cell>-0.49</cell><cell>0</cell><cell>0.02</cell></row><row><cell>Smile</cell><cell>-0.13</cell><cell>0.11</cell><cell>-0.19*</cell><cell>0.01</cell><cell>0.05</cell><cell>-0.07</cell><cell>0.04</cell><cell>-0.18</cell><cell>0</cell><cell>-0.11</cell></row><row><cell>Attention</cell><cell>0.00</cell><cell>-0.10</cell><cell>0.15</cell><cell>-0.15</cell><cell>-0.22**</cell><cell>-0.14</cell><cell>-0.03</cell><cell>-0.69</cell><cell>0</cell><cell>-0.03</cell></row><row><cell>Inner Brow Raise</cell><cell>-0.13</cell><cell>0.01</cell><cell>0.06</cell><cell>-0.04</cell><cell>-0.09</cell><cell>0.17</cell><cell>-0.06</cell><cell>-0.67</cell><cell>0</cell><cell>-0.04</cell></row><row><cell>Eye Closure</cell><cell>-0.13</cell><cell>-0.21**</cell><cell>-0.02</cell><cell>-0.07</cell><cell>-0.20*</cell><cell>0.19</cell><cell>-0.21***</cell><cell>-0.08</cell><cell>0</cell><cell>-0.09</cell></row><row><cell>Nose Wrinkle</cell><cell>-0.06</cell><cell>0.01</cell><cell>-0.14</cell><cell>-0.03</cell><cell>-0.02</cell><cell>0.01</cell><cell>-0.01</cell><cell>0.08</cell><cell>0</cell><cell>-0.07</cell></row><row><cell>Upper Lip Raise</cell><cell>-0.05</cell><cell>0.01</cell><cell>-0.12</cell><cell>-0.05</cell><cell>-0.01</cell><cell>0.08</cell><cell>-0.01</cell><cell>0.31</cell><cell>0</cell><cell>-0.04</cell></row><row><cell>Lip Suck</cell><cell>-0.07</cell><cell>0.00</cell><cell>-0.09</cell><cell>0.02</cell><cell>-0.05</cell><cell>-0.02</cell><cell>-0.02</cell><cell>0.80*</cell><cell>0</cell><cell>-0.08</cell></row><row><cell>Lip Press</cell><cell>-0.09</cell><cell>0.00</cell><cell>-0.09</cell><cell>-0.02</cell><cell>-0.04</cell><cell>0.03</cell><cell>-0.05</cell><cell>0.44</cell><cell>0</cell><cell>-0.06</cell></row><row><cell>Mouth Open</cell><cell>-0.05</cell><cell>0.03</cell><cell>-0.11</cell><cell>0.02</cell><cell>0.08</cell><cell>0.14</cell><cell>0.01</cell><cell>0.10</cell><cell>0</cell><cell>-0.11</cell></row><row><cell>Chin 
Raise</cell><cell>-0.06</cell><cell>0.11</cell><cell>-0.08</cell><cell>0.06</cell><cell>-0.06</cell><cell>-0.04</cell><cell>0.00</cell><cell>0.28</cell><cell>0</cell><cell>0.02</cell></row><row><cell>Smirk</cell><cell>-0.01</cell><cell>0.12</cell><cell>-0.13</cell><cell>0.06</cell><cell>0.06</cell><cell>-0.11</cell><cell>0.02</cell><cell>0.69</cell><cell>0</cell><cell>0.06</cell></row><row><cell>Lip Pucker</cell><cell>0.06</cell><cell>0.06</cell><cell>-0.13</cell><cell>0.04</cell><cell>0.03</cell><cell>-0.06</cell><cell>0.02</cell><cell>-0.05</cell><cell>0</cell><cell>0.07</cell></row><row><cell>Anger</cell><cell>-0.05</cell><cell>-0.04</cell><cell>0.02</cell><cell>0.06</cell><cell>-0.01</cell><cell>0.16</cell><cell>-0.08</cell><cell>0.33</cell><cell>0</cell><cell>-0.08</cell></row><row><cell>Sadness</cell><cell>-0.14*</cell><cell>-0.13</cell><cell>-0.04</cell><cell>0.00</cell><cell>-0.19*</cell><cell>0.13</cell><cell>-0.15**</cell><cell>-0.23</cell><cell>0</cell><cell>-0.06</cell></row><row><cell>Disgust</cell><cell>0.01</cell><cell>-0.03</cell><cell>-0.04</cell><cell>0.02</cell><cell>0.00</cell><cell>0.33*</cell><cell>-0.05</cell><cell>-0.10</cell><cell>0</cell><cell>0.05</cell></row><row><cell>Joy</cell><cell>-0.16*</cell><cell>0.00</cell><cell>-0.16</cell><cell>-0.02</cell><cell>-0.05</cell><cell>0.01</cell><cell>-0.05</cell><cell>-0.36</cell><cell>0</cell><cell>-0.17*</cell></row><row><cell>Surprise</cell><cell>-0.07</cell><cell>-0.07</cell><cell>-0.12</cell><cell>-0.02</cell><cell>-0.11</cell><cell>-0.21</cell><cell>-0.08</cell><cell>0.33</cell><cell>0</cell><cell>-0.01</cell></row><row><cell>Fear</cell><cell>-0.05</cell><cell>-0.05</cell><cell>0.02</cell><cell>-0.07</cell><cell>-0.02</cell><cell>0.06</cell><cell>-0.12*</cell><cell>-0.08</cell><cell>0</cell><cell>-0.05</cell></row><row><cell>Contempt</cell><cell>-0.07</cell><cell>-0.06</cell><cell>0.02</cell><cell>0.06</cell><cell>-0.08</cell><cell>0.04</cell><cell>-0.12*</cell><cell>0.72*</cell><cell>0</cell><cell>0.01</cell></row><row><cell>Cheek Raise</cell><cell>-0.11</cell><cell>0.11</cell><cell>-0.17</cell><cell>0.03</cell><cell>0.05</cell><cell>-0.05</cell><cell>0.04</cell><cell>0.03</cell><cell>0</cell><cell>-0.10</cell></row><row><cell>Dimpler</cell><cell>-0.09</cell><cell>0.05</cell><cell>-0.12</cell><cell>-0.04</cell><cell>0.02</cell><cell>-0.04</cell><cell>-0.02</cell><cell>0.41</cell><cell>0</cell><cell>-0.07</cell></row><row><cell>Eye Widen</cell><cell>0.04</cell><cell>0.03</cell><cell>-0.02</cell><cell>-0.08</cell><cell>0.13</cell><cell>-0.35*</cell><cell>-0.04</cell><cell>-0.08</cell><cell>0</cell><cell>0.00</cell></row><row><cell>Lid Tighten</cell><cell>-0.13</cell><cell>0.02</cell><cell>-0.07</cell><cell>-0.02</cell><cell>-0.18*</cell><cell>0.27</cell><cell>-0.01</cell><cell>-0.39</cell><cell>0</cell><cell>-0.04</cell></row><row><cell>Lip Stretch</cell><cell>-0.11</cell><cell>-0.07</cell><cell>-0.16</cell><cell>-0.12</cell><cell>-0.03</cell><cell>-0.25</cell><cell>-0.07</cell><cell>-0.08</cell><cell>0</cell><cell>-0.12</cell></row><row><cell cols="4">p &lt; .0001 '****'; p &lt; .001 '***', p &lt; .01 '**', p &lt; .05 '*'</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.raccoltadati.tk/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://lagallerianazionale.com/en/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://github.com/TadasBaltrusaitis/OpenFace</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://imotions.com/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Multimodal behavior analysis in the wild: An introduction</title>
		<author>
			<persName><forename type="first">X</forename><surname>Alameda-Pineda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ricci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sebe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Multimodal Behavior Analysis in the Wild, Computer Vision and Pattern Recognition</title>
				<editor>
			<persName><forename type="first">X</forename><surname>Alameda-Pineda</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Ricci</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Sebe</surname></persName>
		</editor>
		<imprint>
			<publisher>Academic Press</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Facial expression recognition with cnn-lstm</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">T</forename><surname>Hung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Tien</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Research in Intelligent and Computing in Engineering</title>
				<editor>
			<persName><forename type="first">R</forename><surname>Kumar</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><forename type="middle">H</forename><surname>Quang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><forename type="middle">K</forename><surname>Solanki</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Cardona</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><forename type="middle">K</forename><surname>Pattnaik</surname></persName>
		</editor>
		<meeting><address><addrLine>Singapore, Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="549" to="560" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Facial expression recognition</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kanade</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Cohn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Face Recognition</title>
				<editor>
			<persName><forename type="first">S</forename><forename type="middle">Z</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Jain</surname></persName>
		</editor>
		<meeting><address><addrLine>London, UK</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="487" to="519" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Darwin</surname></persName>
		</author>
		<title level="m">The expression of the emotions in man and animals</title>
				<meeting><address><addrLine>London, UK</addrLine></address></meeting>
		<imprint>
			<publisher>John Murray</publisher>
			<date type="published" when="1872">1872</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">The facial action coding system: a technique for the measurement of facial movement</title>
		<author>
			<persName><forename type="first">P</forename><surname>Ekman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Friesen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1978">1978</date>
			<biblScope unit="volume">3</biblScope>
			<pubPlace>Palo Alto</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Ekman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">R</forename><surname>Scherer</surname></persName>
		</author>
		<title level="m">Handbook of methods in nonverbal behavior research</title>
				<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="1982">1982</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Automatic recognition of facial actions in spontaneous expressions</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bartlett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Littlewort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Frank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lainscsek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Fasel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Movellan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Multimedia</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="22" to="35" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Automatic temporal segment detection and affect recognition from face and body display</title>
		<author>
			<persName><forename type="first">H</forename><surname>Gunes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Piccardi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="64" to="84" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Detection, tracking, and classification of action units in facial expression</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kanade</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cohn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Robotics and Autonomous Systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Recognizing action units for facial expression analysis</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kanade</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Cohn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Pattern Anal. Mach. Intell</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="97" to="115" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Do facial expressions signal specific emotions? judging emotion from the face in context</title>
		<author>
			<persName><forename type="first">J</forename><surname>Carroll</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Russell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of personality and social psychology</title>
		<imprint>
			<biblScope unit="volume">70</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="205" to="218" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Culture and the categorization of emotions</title>
		<author>
			<persName><forename type="first">J</forename><surname>Russell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological bulletin</title>
		<imprint>
			<biblScope unit="volume">110</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="426" to="450" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Is there universal recognition of emotion from facial expression? a review of the crosscultural studies</title>
		<author>
			<persName><forename type="first">J</forename><surname>Russell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological bulletin</title>
		<imprint>
			<biblScope unit="volume">115</biblScope>
			<biblScope unit="page">1</biblScope>
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Point of interest recommendation based on social and linked open data</title>
		<author>
			<persName><forename type="first">G</forename><surname>Sansonetti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Personal and Ubiquitous Computing</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="199" to="214" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Semantic-based tag recommendation in scientific bookmarking systems</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A M</forename><surname>Hassan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sansonetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Gasparetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Micarelli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th ACM Conference on Recommender Systems</title>
				<meeting>the 12th ACM Conference on Recommender Systems<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="465" to="469" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Exploiting semantics for context-aware itinerary recommendation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Fogli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sansonetti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Personal and Ubiquitous Computing</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="215" to="231" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Building ontology-driven tutoring models for intelligent tutoring systems using data mining</title>
		<author>
			<persName><forename type="first">M</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>D'aniello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gaeta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Orciuoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sampson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Simonelli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="48151" to="48162" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Knowledge-based smart city service system</title>
		<author>
			<persName><forename type="first">G</forename><surname>D'aniello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gaeta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Orciuoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sansonetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sorgente</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Enhancing cultural recommendations through social and linked open data</title>
		<author>
			<persName><forename type="first">G</forename><surname>Sansonetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Gasparetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Micarelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Cena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gena</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">User Modeling and User-Adapted Interaction</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="121" to="159" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Tracking museum visitors through convolutional object detectors</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mezzini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Limongelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sansonetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">De</forename><surname>Medio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Adjunct Publication of UMAP &apos;20</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="352" to="355" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">L</forename><surname>Birdwhistell</surname></persName>
		</author>
		<title level="m">Kinesics and context: Essays on body motion communication</title>
				<imprint>
			<publisher>University of Pennsylvania Press</publisher>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Decoding of inconsistent communications</title>
		<author>
			<persName><forename type="first">A</forename><surname>Mehrabian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wiener</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of personality and social psychology</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="109" to="114" />
			<date type="published" when="1967">1967</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Ekman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Friesen</surname></persName>
		</author>
		<title level="m">Facial Action Coding System</title>
				<imprint>
			<publisher>Consulting Psychologists Press</publisher>
			<date type="published" when="1978">1978</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">Man&apos;s Face and Mimic Language</title>
		<author>
			<persName><forename type="first">C</forename><surname>Hjortsjö</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1969">1969</date>
			<publisher>Studentlitteratur</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Differences in facial expressions of four universal emotions</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">G</forename><surname>Kohler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Turner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">M</forename><surname>Stolar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">B</forename><surname>Bilker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Brensinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">E</forename><surname>Gur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Gur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychiatry Research</title>
		<imprint>
			<biblScope unit="volume">128</biblScope>
			<biblScope unit="page" from="235" to="244" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">A signal-based approach to news recommendation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Caldarelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">F</forename><surname>Gurini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Micarelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sansonetti</surname></persName>
		</author>
		<ptr target="org" />
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<meeting><address><addrLine>Aachen, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">1618</biblScope>
		</imprint>
		<respStmt>
			<orgName>CEUR-WS.</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">A comparative analysis of personality-based music recommender systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Onori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Micarelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sansonetti</surname></persName>
		</author>
		<ptr target="org" />
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<meeting><address><addrLine>Aachen, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">1680</biblScope>
		</imprint>
		<respStmt>
			<orgName>CEUR-WS.</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">A comparative analysis of state-of-the-art recommendation techniques in the movie domain</title>
		<author>
			<persName><forename type="first">D</forename><surname>Valeriani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sansonetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Micarelli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">LNCS</title>
		<imprint>
			<biblScope unit="volume">12252</biblScope>
			<biblScope unit="page" from="104" to="118" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Towards emotion detection in educational scenarios from facial expressions and body movements through multimodal approaches</title>
		<author>
			<persName><forename type="first">M</forename><surname>Saneiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Santos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Salmeron-Majadas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Boticario</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Scientific World Journal</title>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
