<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Approach to Recognizing of Visualized Human Emotions for Marketing Decision Making Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Iryna</forename><surname>Spivak</surname></persName>
							<affiliation key="aff0">
<orgName type="institution">West Ukrainian National University</orgName>
								<address>
									<addrLine>Lvivska str. 11</addrLine>
									<postCode>46009</postCode>
									<settlement>Ternopil</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Svitlana</forename><surname>Krepych</surname></persName>
							<affiliation key="aff0">
<orgName type="institution">West Ukrainian National University</orgName>
								<address>
									<addrLine>Lvivska str. 11</addrLine>
									<postCode>46009</postCode>
									<settlement>Ternopil</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleksandr</forename><surname>Fedorov</surname></persName>
							<affiliation key="aff0">
<orgName type="institution">West Ukrainian National University</orgName>
								<address>
									<addrLine>Lvivska str. 11</addrLine>
									<postCode>46009</postCode>
									<settlement>Ternopil</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Serhii</forename><surname>Spivak</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Ternopil Ivan Puluj National Technical University</orgName>
								<address>
									<addrLine>Ruska str. 56</addrLine>
									<postCode>46001</postCode>
									<settlement>Ternopil</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Approach to Recognizing of Visualized Human Emotions for Marketing Decision Making Systems</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">08FEB8D6FA362ABB636E7C131247A198</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T11:53+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Recognition</term>
					<term>Visualized Human Emotions</term>
					<term>Pixel</term>
					<term>Color Model</term>
					<term>Marketing Decision</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The article proposes an approach to the recognition of visualized human emotions for marketing decision-making systems. The analysis of previous studies has shown the relevance and expediency of the proposed approach, as it will reduce the use of computing resources to implement the recognition process, and at the same time, increase the speed of obtaining the result. The article presents an algorithm for step-by-step identification of visualized human emotion based on the comparison of changes in the positions of key points of the selected element in accordance with changes in the characteristics of this element.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Trends in any commercial activity show that making marketing decisions with the greatest impact on consumer purchasing behavior, and thus on profit, is highly relevant today. In recent years, nonverbal information, namely facial expressions, has become the subject of intensive marketing research. It is known from psychology that all human emotions can be classified into six basic emotions, which are the ones most used to obtain nonverbal information. The ability to recognize this kind of information automatically will simplify the interpretation of the emotions on a person's face while they watch an advertisement, test a product or use a service.</p><p>The proposed approach will help to understand whether the consumer really liked the product and which color, size or smell he or she prefers, since surveys often yield inaccurate information. It reveals the informal reaction of users, which helps to understand what to focus on and what to improve.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related works</head><p>In the vast majority of methods, emotion recognition proceeds in three steps <ref type="bibr" target="#b0">[1]</ref>: in the first step, features are extracted from fixed images; in the second, emotions are detected with the help of already trained classifiers; and the third step is face recognition itself. The most common descriptor is Local Binary Patterns (LBP) <ref type="bibr" target="#b1">[2]</ref>, a binary description of the pixels around a central pixel. The LBP operator is applied to the central pixel of the image and uses the 8 pixels around it, taking the central pixel as the reference. The main disadvantage of this method is that the image needs high-quality preprocessing because of its high sensitivity to noise. The geometric approach to face recognition, one of the first methods developed, consists of selecting key points of the face such as the lips, the center of the eye, etc. This method does not require expensive equipment, but its reliability is low. The Viola-Jones method with Haar features <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b14">15]</ref> is the most popular method for locating the facial area in images because of its relatively high speed and efficiency. Face recognition in this method is based on three basic principles: an integral representation of the image on the basis of Haar features, a classifier construction method based on the adaptive boosting algorithm (AdaBoost), and a method of combining classifiers into a cascade structure. Active appearance models (AAM) are statistical models of images that can be fitted to a real image by various deformations. Fitting the model to a specific face image is performed by solving an optimization problem whose essence is to minimize a cost functional.</p><p>COLINS-2021: 5th International Conference on Computational Linguistics and Intelligent Systems, April 22-23, 2021, Kharkiv, Ukraine. EMAIL: spivak.iruna@gmail.com (I. Spivak); msya220189@gmail.com (S. Krepych); fedorov.oleks@gmail.com (O. Fedorov); spivaksm@ukr.net (S. Spivak). ORCID: 0000-0003-4831-0780 (I. Spivak); 0000-0001-7700-8367 (S. Krepych); 0000-0002-8080-9306 (O. Fedorov); 0000-0002-7160-2151 (S. Spivak).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 4: The process of realization of active appearance model</head><p>However, these approaches require a lot of time and resources for training, which limits their use on large samples of input data <ref type="bibr">[7-9, 11, 12]</ref>. This article proposes an approach to recognizing the emotions of the human face by determining and tracking changes in the positions of the key points of the eyes, mouth and eyebrows, which does not require large computational resources.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Overview of the Research</head><p>Emotion is one of the basic elements of the human psyche. A human's emotions are distinguished depending on how well his or her needs are satisfied. They can be positive, negative or neutral, when a person does not react in any way and remains in the original state. For example, when viewing an advertisement, some people feel anger or disgust, while others feel pleasure and interest. A person's behavior also changes depending on the emotions the person is experiencing.</p><p>When making marketing decisions, the automated system must be able to identify and recognize one of the six basic human emotions:</p><p>Surprise is a short-term feeling and state of a person that occurs in a sudden and unexpected situation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head></head><p>Fear is a state of anxiety and restlessness caused by the expectation of something undesirable or unpleasant.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head></head><p>Disgust is a feeling of disapproval towards someone or something.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head></head><p>Angry is a strong feeling of dissatisfaction that arises when a person's needs or expectations have not been satisfied.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head></head><p>Happy is a feeling of satisfaction that arises when a person's needs or expectations have been met.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head></head><p>Sad is a feeling opposite to happy, which arises in cases of loss or helplessness. Depending on the emotion reflected on the face, the area of each key element will increase or decrease. For example, take two emotions, disgust and happy, which look very similar when identifying key points: in both, the mouth area increases and the eye area decreases. Once the program determines this, the search over the next key points is narrowed to only these two emotions instead of all six, which saves the time needed to identify the final emotion, in contrast to existing methods and approaches <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b13">14]</ref>.</p></div>
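The candidate-narrowing idea in this example can be illustrated with a small sketch. The rule table below is hypothetical (only the disgust/happy case follows the text; the article's Table 1 is the authoritative source of the characteristics):

```python
# Direction of area change per facial element for each emotion:
# +1 = increases, -1 = decreases. Hypothetical rules for illustration;
# the disgust/happy entries mirror the example in the text.
EMOTION_RULES = {
    "surprise": {"eyes": +1, "mouth": +1},
    "fear":     {"eyes": +1, "mouth": +1},
    "disgust":  {"eyes": -1, "mouth": +1},
    "happy":    {"eyes": -1, "mouth": +1},
    "angry":    {"eyes": -1, "mouth": -1},
    "sad":      {"eyes": -1, "mouth": -1},
}

def narrow_candidates(observed, candidates=None):
    """Keep only the emotions consistent with every observed change."""
    if candidates is None:
        candidates = set(EMOTION_RULES)
    return {e for e in candidates
            if all(EMOTION_RULES[e].get(k) == v for k, v in observed.items())}

# Example from the text: mouth area increased, eye area decreased ->
# only disgust and happy remain to be distinguished by further key points.
print(sorted(narrow_candidates({"mouth": +1, "eyes": -1})))  # ['disgust', 'happy']
```

Each additional key-point check intersects the remaining candidate set, so later comparisons never revisit emotions already ruled out.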
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Proposed approach</head><p>Our analysis of this issue showed that to determine a human's emotions it is enough to select the key elements of the face, namely the eyes, eyebrows, nose and mouth, rather than identify the whole face. The algorithm of the proposed approach includes the following steps:</p><p>Step 1. Determine the face image from a photo or video and convert it to black and white using the capabilities of the CSS filter function, which can run on any PC hardware, as it does not require large resources.</p><p>Step 2. Select the key elements of the face and process them in the HSL (Hue Saturation Luminance) color model (Fig. <ref type="figure" target="#fig_6">6</ref>), where Hue denotes the color and its tint, Saturation indicates the amount of gray, and Luminance is the intensity of light projected on a given area and direction.</p><p>Step 3. Replace each pixel of the photo with its numerical value (the darker the shade, the smaller the number, and vice versa) (Fig. <ref type="figure" target="#fig_7">7</ref>). These numerical values are used to search for the key points of the selected facial elements with a nearest-neighbor algorithm over the shades of black. When the darkness decreases by more than 15% (possibly more; the exact threshold needs to be verified in software), the next pixel (depending on its direction) does not need to be evaluated. This gives a clear contour of the element being estimated.</p></div>
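Steps 1 and 3 can be sketched as follows. This is a minimal sketch under assumptions: the grayscale conversion uses the standard luminance weights that CSS's grayscale() filter is defined over, pixel values are on a 0-255 scale, and the 15% darkness-drop threshold comes from Step 3; all function names are illustrative:

```python
DROP_THRESHOLD = 0.15  # Step 3: stop when darkness falls by more than 15%

def to_gray(pixel):
    """Step 1: convert one (R, G, B) pixel to a 0-255 gray value using
    standard luminance weights (the same weights CSS grayscale() uses)."""
    r, g, b = pixel
    return round(0.2126 * r + 0.7152 * g + 0.0722 * b)

def to_matrix(rgb_rows):
    """Step 3: replace each pixel by its numerical value
    (the darker the pixel, the smaller the number)."""
    return [[to_gray(p) for p in row] for row in rgb_rows]

def trace_row_contour(gray_row):
    """Nearest-neighbour walk along one pixel row: stop where darkness
    drops by more than DROP_THRESHOLD, marking the element's edge."""
    contour = [gray_row[0]]
    for prev, cur in zip(gray_row, gray_row[1:]):
        darkness_prev, darkness_cur = 255 - prev, 255 - cur
        if darkness_prev and (darkness_prev - darkness_cur) / darkness_prev > DROP_THRESHOLD:
            break  # sharp lightening: we have left the element's contour
        contour.append(cur)
    return contour

# Synthetic row: three dark element pixels, then a sharp lightening.
row = to_matrix([[(40, 40, 40), (45, 45, 45), (50, 50, 50), (200, 200, 200)]])[0]
print(trace_row_contour(row))  # [40, 45, 50]
```

Running the walk in all four directions from a seed pixel inside the element would yield the extreme points between which the pixel distances in Section 5 are counted.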
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results &amp; Discussion</head><p>Based on the emotion of fear, we will show how a human's face changes and how software can detect it. For example, the element "eye" is selected for study from the fixed image of the face. Figure <ref type="figure" target="#fig_10">9a</ref> shows the image of the human eye in a neutral state, while Figure <ref type="figure" target="#fig_10">9b</ref> shows the expansion of the eye under the emotion of fear. It is known that when a person is afraid, the eyes expand and their area increases accordingly. Counting the number of pixels from the extreme left point (1) to the extreme right point (2), we get 31 pixels in the neutral state and 32 pixels in the fear state. The difference of 1 pixel is not significant, because during fear the eyes cannot increase in width, only in height. Counting the number of pixels from the uppermost point (3) to the lowermost point (4), we get 17 pixels in the neutral state and 23 pixels in the fear state. With this data we can calculate the area of the eye, which shows its increase under the emotion of fear.</p><p>The next key point is the eyebrows. Figure 10a shows the eyebrows and eyes of a human in a neutral state, and Figure <ref type="figure" target="#fig_12">10b</ref> under the influence of fear. As we can see, in a neutral emotional state the distance between the eyebrows (from point 2 to point 1a) is 37 pixels, while in a state of fear it is 32 pixels. Attention should also be paid to the distance from the eyes to the eyebrows: in the neutral state it is 13 pixels for the left eyebrow and 8 for the right, while in a frightened person it is 4 pixels for both eyebrows.</p><p>The last key point in recognizing the emotion of fear is the corners of the lips. Figure <ref type="figure" target="#fig_13">11a</ref> shows the area of the human mouth in a neutral state, and it is 21 pixels. 
In a state of fear, a person opens the mouth and accordingly changes its size; in our case it is 28 pixels (Fig. <ref type="figure" target="#fig_13">11b</ref>).</p><p>Based on this study, we can compile Table 1, which contains the changes of distance, in pixels, between the extreme points of the selected facial elements, to facilitate the identification of the imaged emotion. According to the data in Table <ref type="table" target="#tab_0">1</ref>, the programmatic search for the emotion fixed in the image proceeds as follows. First, choose one element, for example the "eyes", and check it for conformity with the characteristics of each emotion (Fig. <ref type="figure" target="#fig_15">12</ref>).</p></div>
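The eye-area calculation mentioned above can be illustrated with the pixel counts from the fear example. Approximating the eye as an ellipse over its horizontal (points 1-2) and vertical (points 3-4) extents is our assumption; the article only states that the area is computed from the extreme points:

```python
import math

def ellipse_area(width_px, height_px):
    """Approximate the eye's area (in px^2) as an ellipse spanning its
    horizontal and vertical pixel extents (illustrative assumption)."""
    return math.pi * (width_px / 2) * (height_px / 2)

neutral = ellipse_area(31, 17)  # 31 x 17 px between extreme points, neutral
fear = ellipse_area(32, 23)     # 32 x 23 px under the emotion of fear
growth = fear / neutral - 1.0

print(f"neutral: {neutral:.0f} px^2, fear: {fear:.0f} px^2, growth: {growth:.0%}")
```

Under this approximation the 1-pixel width change contributes little, while the 17-to-23-pixel height change drives an area increase of roughly 40%, consistent with the observation that fearful eyes widen in height rather than width.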
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 12: Identification of emotions according to the characteristics of the eyes</head><p>The figure shows that the search continues in one of three directions: Fear, Surprise or Disgust if the eyes are dilated; Angry or Sad if they are squinted; and Happy or the neutral emotion if the eyes are unchanged. The next steps cut off unnecessary emotions by checking the other characteristics. To determine the change in the positions of the key points of the eyes, eyebrows and mouth, the ranges of these changes should be set in interval form <ref type="bibr" target="#b9">[10]</ref>, which takes into account the physiological characteristics of human faces.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Example of an applied LBP pattern to the image</figDesc><graphic coords="2,72.00,122.60,469.05,159.00" type="bitmap" /></figure>
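The interval-form check can be sketched as follows. The interval bounds below are illustrative only; they are not taken from the article or from [10], which would derive them from the physiological variation between faces:

```python
# Relative change of eye height expressed as intervals [lo, hi],
# so one measured value can match several candidate emotions.
# Bounds are illustrative, not from the article.
EYE_HEIGHT_CHANGE = {
    "fear":     (0.20, 0.60),   # eyes dilate noticeably
    "surprise": (0.30, 0.80),   # eyes dilate strongly
    "neutral":  (-0.05, 0.05),  # essentially unchanged
}

def in_interval(value, interval):
    lo, hi = interval
    return lo <= value <= hi

def matching_emotions(change):
    """Emotions whose interval contains the measured relative change."""
    return sorted(e for e, iv in EYE_HEIGHT_CHANGE.items()
                  if in_interval(change, iv))

# 17 px -> 23 px from the fear example: (23 - 17) / 17 ~= 0.35,
# which falls inside both the fear and surprise intervals.
print(matching_emotions((23 - 17) / 17))  # ['fear', 'surprise']
```

Because intervals overlap, a single measurement usually leaves several candidates, and the remaining characteristics (eyebrows, mouth) are then checked to cut the set down to one emotion.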
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Mandatory frontal image of the person; does not take into account the possibility of changing facial expressions.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Face distribution using the geometrical method</figDesc><graphic coords="2,226.23,450.74,159.55,174.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>Integral representation of the image on the basis of Haar features, which allows the necessary features to be calculated; a classifier construction method based on the adaptive boosting algorithm (AdaBoost); a method of combining classifiers into a cascade structure. Disadvantages: at an angle of 30° or more the probability of recognition drops rapidly; cannot detect a person at an arbitrary angle; takes a lot of training time; sensitive to lighting.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Haar feature used for Viola Jones face recognition method</figDesc><graphic coords="3,109.15,177.93,411.70,239.24" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Detection of emotions on a human's face</figDesc><graphic coords="4,75.75,362.16,460.40,254.97" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: The selected pixel area</figDesc><graphic coords="5,172.27,306.82,288.75,124.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Numerical matrix of pixel values of the selected area</figDesc><graphic coords="5,160.50,545.94,309.00,121.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 8</head><label>8</label><figDesc>Figure8shows a diagram of the implementation of the proposed method, which includes three main modules: image determination, conversion and identification.</figDesc><graphic coords="6,387.20,117.20,133.40,184.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: The scheme of implementation of the proposed approach</figDesc><graphic coords="6,225.60,117.20,133.40,184.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 9 :</head><label>9</label><figDesc>Images of eye expansion in the color model: a) a neutral human state; b) in a state of fear</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure</head><label></label><figDesc>Images of eye expansion in the color model: a) a neutral human state; b) in a state of fear</figDesc><graphic coords="6,108.00,407.41,395.55,146.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Figure 10 :</head><label>10</label><figDesc>Images of eyebrows and eyes in the color model: a) a neutral human state; b) in a state of fear</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_13"><head>Figure 11 :</head><label>11</label><figDesc>Image of lips in the color model: a) in the neutral human state; b) in a state of fear</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_14"><head>Figure</head><label></label><figDesc>Image of lips in the color model: a) in the neutral human state; b) in a state of fear</figDesc><graphic coords="7,72.50,297.45,467.00,128.99" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_15"><head></head><label></label><figDesc>2.2. The distance from the extreme left point of one eyebrow to the extreme right point of the second eyebrow decreases. 3.1. The distance from the extreme point of the lower lip to the middle point of the upper lip decreases.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Change of key points positions according to changes of the characteristics of the selected element The distance in pixels from the extreme lower point of the eyebrow to the extreme upper point of the eye decreases.</figDesc><table><row><cell>Emotion</cell><cell>Characteristic</cell><cell>Change key points of elements</cell></row><row><cell>Neutral</cell><cell>1. when a person does not</cell><cell>1.1. Position of a face key elements are in their usual</cell></row><row><cell></cell><cell>react in any way and remains</cell><cell>position</cell></row><row><cell></cell><cell>in its original state</cell><cell></cell></row><row><cell>Surprise</cell><cell>1. dilated eyes</cell><cell>1.1. The distance in pixels increases from the</cell></row><row><cell></cell><cell>2. raised eyebrows</cell><cell>extreme points of the eye.</cell></row><row><cell></cell><cell>3. open mouth (extended)</cell><cell>2.1. The distance in pixels from the extreme lower</cell></row><row><cell></cell><cell></cell><cell>point of the eyebrow to the extreme upper point of</cell></row><row><cell></cell><cell></cell><cell>the eye increases.</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions</head><p>Research in the field of psychology has shown that the emotional state of all people has common external features. This made it possible to develop a universal classifier of emotions, which can be used to determine a person's state. The article proposes an approach to the recognition of visualized human emotions using a pixel color model, which can adapt to changes in the input data and, compared to other existing methods, is less time-consuming, faster and uses fewer resources.</p><p>The proposed approach has practical value in marketing decision-making systems based on the analysis of a human's emotional state while viewing or testing a particular product or service. The article presents an algorithm for step-by-step identification of a visualized human emotion based on comparing the changes in the positions of the key points of a selected element with the changes in the characteristics of this element. Further research will focus on the development of automated methods and algorithms for recognizing human emotions that take into account the physiological characteristics of the human face, gender, age and more. This is necessary and appropriate because the physiological features of the facial structure of men and women differ: the location of the eyebrows, their width, the shape of the nose, and the shape and thickness of the lips.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network</title>
		<author>
			<persName><forename type="first">S</forename><surname>Minaee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Minaei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Abdolrashidi</surname></persName>
		</author>
		<idno type="DOI">10.3390/s21093046</idno>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page">3046</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Facial expression recognition based on local binary patterns: A comprehensive study</title>
		<author>
			<persName><forename type="first">C</forename><surname>Shan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>McOwan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Image and Vision Computing</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="803" to="816" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Recognizing facial expression: machine learning and application to spontaneous behavior</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bartlett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Littlewort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Frank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lainscsek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Fasel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Movellan</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPR.2005.297</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR &apos;05</title>
				<meeting>IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR &apos;05<address><addrLine>San Diego, CA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="568" to="573" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Methods and tools of face recognition for the marketing decision making</title>
		<author>
			<persName><forename type="first">I</forename><surname>Spivak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Krepych</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Faifura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Spivak</surname></persName>
		</author>
		<idno type="DOI">10.1109/PICST47496.2019.9061229</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE International Scientific-Practical Conference: Problems of Infocommunications Science and Technology, PICS&amp;T &apos;19</title>
				<meeting>IEEE International Scientific-Practical Conference: Problems of Infocommunications Science and Technology, PICS&amp;T &apos;19<address><addrLine>Kyiv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="212" to="216" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">The system of recognition of facial expressions of human emotions using a multilayer perceptron</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kovalenko</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="76" to="81" />
		</imprint>
		<respStmt>
			<orgName>Nat. Lviv Polytechnic University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Visn</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Modeling and recognition of facial expressions of emotions on a person&apos;s face</title>
		<author>
			<persName><forename type="first">G</forename><surname>Yefimov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="532" to="542" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">OpenFace: An open source facial behavior analysis toolkit</title>
		<author>
			<persName><forename type="first">T</forename><surname>Baltrusaitis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Robinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Morency</surname></persName>
		</author>
		<idno type="DOI">10.1109/WACV.2016.7477553</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE Winter Conference on Applications of Computer Vision, WACV &apos;16</title>
				<meeting>IEEE Winter Conference on Applications of Computer Vision, WACV &apos;16<address><addrLine>Lake Placid, NY, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="167" to="171" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Rethinking the inception architecture for computer vision</title>
		<author>
			<persName><forename type="first">C</forename><surname>Szegedy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vanhoucke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ioffe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shlens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wojna</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPR.2016.308</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR &apos;16</title>
				<meeting>IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR &apos;16<address><addrLine>Las Vegas, NV, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="2818" to="2826" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Deep residual learning for image recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPR.2016.90</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR &apos;16</title>
		<meeting>IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR &apos;16<address><addrLine>Las Vegas, NV, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="770" to="778" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Methods and means of expert evaluation of software systems on the basis of interval data analysis</title>
		<author>
			<persName><forename type="first">I</forename><surname>Spivak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Krepych</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Budenchuk</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCSET.2018.8336178</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 14th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering, TCSET &apos;18</title>
		<meeting>14th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering, TCSET &apos;18<address><addrLine>Lviv-Slavske, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="101" to="127" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Facial emotion recognition using convolutional neural networks (FERC)</title>
		<author>
			<persName><forename type="first">N</forename><surname>Mehendale</surname></persName>
		</author>
		<idno type="DOI">10.1007/s42452-020-2234-1</idno>
	</analytic>
	<monogr>
		<title level="j">SN Applied Sciences</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Facial Emotion Recognition Using NLPCA and SVM</title>
		<author>
			<persName><forename type="first">V</forename><surname>Chirra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Uyyala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kolli</surname></persName>
		</author>
		<idno type="DOI">10.18280/ts.360102</idno>
	</analytic>
	<monogr>
		<title level="j">Traitement du Signal</title>
		<imprint>
			<biblScope unit="page" from="13" to="22" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Facial expression recognition using modified Viola-John&apos;s algorithm and KNN classifier</title>
		<author>
			<persName><forename type="first">K</forename><surname>Yadav</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Singha</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11042-019-08443-x</idno>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Facial expression recognition utilizing local direction-based robust features and deep belief network</title>
		<author>
			<persName><forename type="first">M</forename><surname>Uddin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hassan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Almogren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Alamri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Alrubaian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Fortino</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2017.2676238</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="page" from="4525" to="4536" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Custom Face Classification Model for Classroom Using Haar-Like and LBP Features with Their Performance Comparison</title>
		<author>
			<persName><forename type="first">S</forename><surname>Adeshina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ibrahim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Teoh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hoo</surname></persName>
		</author>
		<idno type="DOI">10.3390/electronics10020102</idno>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
