<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Identification of Modern Facial Emotion Recognition Models</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Kirill</forename><surname>Smelyakov</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>14 Nauky Ave</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleksandr</forename><surname>Bohomolov</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>14 Nauky Ave</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Maksym</forename><surname>Kizitskyi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>14 Nauky Ave</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Anastasiya</forename><surname>Chupryna</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>14 Nauky Ave</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Identification of Modern Facial Emotion Recognition Models</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">A9D307B70B4734339F19ABE562F88DE0</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T12:55+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Computer vision</term>
					<term>facial emotion recognition</term>
					<term>face recognition</term>
					<term>convolutional neural network</term>
					<term>transfer learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The paper is devoted to the problem of developing a generalized algorithm for the effective identification of computational intelligence models used to recognize emotions from a person's facial expression. To solve this problem, a relevant dataset was selected; alternative recognition models, algorithms and machine learning technologies were identified; and the performance indicators and metrics used in the comparative analysis of the obtained results were defined. A series of experiments was carried out to identify the parameters of alternative neural network models used to recognize emotions and to evaluate the effectiveness of their application. Based on a comparative analysis of the experimental results, a generalized algorithm for identifying emotions was formulated, along with recommendations for the use of particular neural network architectures within facial emotion recognition tasks.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Facial emotion recognition (FER) is a relatively new and fast-growing area of computer vision. Its main task is to identify which emotion a person feels from his or her facial expression. Since convolutional neural networks show good results in other computer vision tasks, their use looks promising for FER as well. Many public networks are pretrained, so the question arises of applying transfer learning to the FER task. Transfer learning would reduce the uncertainty researchers face when choosing a machine learning model and significantly speed up experiments in the field of FER while increasing their effectiveness. At the same time, the issue of transferring skills from other problem domains is still little studied and therefore promising.</p><p>The aim of the work is to research the applicability of neural networks and transfer learning technology to FER problems.</p><p>The goals of the work are to choose a dataset and, based on it, to plan and perform experiments whose results will allow us:</p><p>• to formulate an effective algorithm for neural network identification and usage within the framework of the FER task; • to determine which neural network architectures are better to use as a backbone for FER tasks in different situations; • to compare the effectiveness of face recognition based backbones with standard solutions for transfer learning.</p><p>COLINS-2022: 6th International Conference on Computational Linguistics and Intelligent Systems, May 12-13, 2022, Gliwice, Poland EMAIL: kyrylo.smelyakov@nure.ua (K. Smelyakov); oleksandr.bohomolov@nure.ua (O. Bohomolov); maksym.kizitskyi@nure.ua (M. Kizitskyi); anastasiya.chupryna@nure.ua (A. Chupryna) ORCID: 0000-0001-9938-5489 (K. Smelyakov); 0000-0002-9539-8888 (O. Bohomolov); 0000-0001-9771-5771 (M. Kizitskyi); 0000-0003-0394-9900 (A. Chupryna)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head><p>Research in recent years has focused on the facial emotion recognition (FER) task <ref type="bibr" target="#b0">[1]</ref><ref type="bibr" target="#b1">[2]</ref><ref type="bibr" target="#b2">[3]</ref>. Such systems often supplement face recognition systems (Azure Face API, Face, FaceReader, etc.) <ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref> and can be used in many situations, from customer satisfaction analysis and service at the checkout to tracking emotions at a psychologist's appointment <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>, in prospective drone vision services <ref type="bibr" target="#b8">[9]</ref>, etc.</p><p>The most efficient approaches to the FER task, based on such networks as ResNet, AffectNet, MobileNet, etc., have been described by researchers; to simplify access to this information, they maintain a special list <ref type="bibr" target="#b9">[10]</ref>.</p><p>On the other hand, that list includes various forms of ensembling and stacking of neural networks. Ensembling gives a gain in the quality of emotion classification, but the approach also has disadvantages. Firstly, the resulting model becomes quite large and heavy, and a lot of time is spent on predictions; because of this, applying models of this kind on mobile devices or in real-time systems is very complicated. Secondly, due to the presence of several neural networks, maintaining them within a production system becomes more complicated, and updating the models while preserving the logic of the system becomes more difficult compared to a solution in the form of an end-to-end model.
Therefore, the issue of developing a model that is perhaps not as effective, but much more compact and easier to maintain for use in face recognition systems, remains relevant and open.</p><p>At the same time, the wide variety of machine learning models and algorithms, as well as the high degree of uncertainty in application conditions, often creates great difficulties in choosing an appropriate network architecture and tuning its parameters effectively <ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref>.</p><p>Why are neural networks and transfer learning considered for solving FER problems?</p><p>In recent years, neural networks have become the standard tool in computer vision <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>. A large number of diverse architectural solutions (EfficientNet, ResNet, YOLOv5, etc.) and machine learning methods have been proposed to solve the problems of image classification, object detection, and recognition. Their performance is affected by the quality of the images <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17]</ref>, the result of image segmentation <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b18">19]</ref>, and the architecture and hyperparameter settings of the neural networks <ref type="bibr" target="#b19">[20]</ref>. Moreover, research on the application of convolutions is being carried out to improve the effectiveness of CNNs by optimizing convolution mask parameters, the number of layers, and a number of other parameters <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22]</ref>.</p><p>For identifying the parameters of a neural network, a wide range of machine learning algorithms is currently used; one of the most effective is transfer learning <ref type="bibr" target="#b22">[23]</ref>.
Transfer learning (fine-tuning a neural network with weights pre-trained on a huge data set, for example ImageNet, to solve a specific problem) is widely used in all areas of computer vision and increases the quality of solving many kinds of problems <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25]</ref>.</p><p>The main advantage of this approach is that, thanks to the pre-trained weights, the model transforms the input image into a smaller set of meaningful features. Because of this, the landscape of the loss function is smoothed out and the model converges faster to its minimum. Recently, in the field of face recognition, a state-of-the-art technique is often used in which the model is trained to compress the image into a feature vector by which a person's face can be identified <ref type="bibr" target="#b25">[26]</ref>; this, in turn, is very similar to what transfer learning is used for. That is why we decided to compare classical transfer learning models with face recognition models in more detail. Besides, this domain was selected because it is quite a popular area and many pre-trained models are publicly available <ref type="bibr" target="#b26">[27]</ref>.</p><p>For models to benefit from pre-trained weights, the task must be related to the domain on which the models were trained.</p><p>The research results are important not only for FER services, but also for solving a great number of related tasks, including the development of effective integrated E-learning services and AI solutions <ref type="bibr" target="#b27">[28]</ref>, as well as the development of ICT solutions, network solutions and security services <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b29">30]</ref>.
In addition, if face recognition based models show advantages over standard approaches, this means that face recognition learning approaches can improve the quality of transfer learning models in other areas, increase learning speed and allow using less data for training. This would let specialists conduct more experiments and reduce the costs of cloud learning services.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methods and Materials</head><p>First of all, we describe the data that will be used in the further experiments, as well as the other materials and methods proposed to solve the problem under consideration.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Dataset Description</head><p>To test our approach, we chose the well-known FER2013 data set <ref type="bibr" target="#b30">[31]</ref>. The 2013 Facial Expression Recognition dataset (FER2013) is a Kaggle dataset introduced by Pierre-Luc Carrier and Aaron Courville at the International Conference on Machine Learning (ICML) in 2013.</p><p>This dataset was chosen because it is publicly accessible. It also contains photographs of people of different ages, genders, races and nationalities, with different backgrounds and accessories (such as glasses and masks), which allows a better evaluation of the generalization ability in emotion recognition.</p><p>The dataset contains grayscale images of faces, 48x48 pixels in size. These images were produced by automatic face registration, so that the faces are centered and occupy nearly the same amount of space in each image. When making our comparison, we therefore assume that the images have already been preprocessed, and we do not consider this issue within the framework of our paper. Each image is labeled with one of seven emotions: Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral.</p><p>The Disgust expression has the minimal number of images, 547, while the other labels have roughly 4,000-9,000 samples each. More detailed information is presented in Table <ref type="table" target="#tab_0">1</ref>. Figure <ref type="figure" target="#fig_0">1</ref> shows examples of randomly selected pictures from the data set <ref type="bibr" target="#b30">[31]</ref>: both men and women of different ages (from babies to old people) and of different nationalities and races are represented. In general, this data set provides a wide variety of face images, which favorably affects the generalization ability of the model.
However, it also suffers from class imbalance, which is why the accuracy of recognizing the disgust emotion will probably be lower than for the others.</p><p>To split the data set, the standard train_test_split function from the sklearn package was used: training set, 70% (25,121 images); validation set, 10% (3,589 images); test set, 20% (7,177 images). The split was stratified by the emotion label, with random_state = 42.</p></div>
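<div xmlns="http://www.tei-c.org/ns/1.0"><p>The stratified 70/10/20 split described above can be sketched as follows (a minimal illustration; the proportions and random_state = 42 come from the text, while the label array here is a small synthetic stand-in for the 35,887 FER2013 labels):</p><p>
```python
# Stratified 70/10/20 train/validation/test split, as used in the paper.
# train_test_split produces only two parts, so we split twice and rescale
# the validation fraction for the second call.
import numpy as np
from sklearn.model_selection import train_test_split

def split_dataset(indices, labels, seed=42):
    # First carve off the 20% test set, stratified by emotion label.
    idx_rest, idx_test, y_rest, y_test = train_test_split(
        indices, labels, test_size=0.20, stratify=labels, random_state=seed)
    # 10% of the full set is 0.10 / 0.80 = 0.125 of the remainder.
    idx_train, idx_val, y_train, y_val = train_test_split(
        idx_rest, y_rest, test_size=0.125, stratify=y_rest, random_state=seed)
    return idx_train, idx_val, idx_test

# Synthetic stand-in: 7 emotion classes, 100 samples each.
labels = np.repeat(np.arange(7), 100)
indices = np.arange(len(labels))
tr, va, te = split_dataset(indices, labels)
print(len(tr), len(va), len(te))  # → 490 70 140
```
</p><p>Stratifying both calls keeps the per-class proportions equal across the three subsets, which matters here because of the strong Disgust-class imbalance.</p></div>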
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Methods</head><p>We have chosen the following key metrics:</p><p>• accuracy on the training set;</p><p>• loss on the training set;</p><p>• accuracy on the validation set;</p><p>• loss on the validation set;</p><p>• mean convergence rate (MCR)</p><formula xml:id="formula_0">MCR = (1/n) ∑_{i=1}^{n} (Metric_i − Metric_{i−1}),<label>(1)</label></formula><p>where n is the number of epochs and Metric_i is the performance metric on the training set at epoch i;</p><p>• mean overfitting rate (MOFR)</p><formula xml:id="formula_1">MOFR = (1/n) ∑_{i=1}^{n} [(Metric_train_i − Metric_val_i) − (Metric_train_{i−1} − Metric_val_{i−1})],<label>(2)</label></formula><p>where n is the number of epochs, Metric_train_i is the performance metric on the training set at epoch i, and Metric_val_i is the performance metric on the validation set at epoch i;</p><p>• initial accuracy: accuracy after training for 1 epoch. We chose this metric because it shows how well the pre-trained weights of the model fit the domain;</p><p>• initial loss: loss after training for 1 epoch.</p><p>In our experiments, Metric is either accuracy or loss (categorical cross-entropy).</p></div>
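<div xmlns="http://www.tei-c.org/ns/1.0"><p>Metrics (1) and (2) reduce to means of successive per-epoch differences and can be computed directly from the metric histories. A minimal sketch (the function names are ours, not from the paper):</p><p>
```python
# Mean convergence rate, formula (1): the average per-epoch change of a
# metric, computed as the mean of successive differences over the history.
def mean_convergence_rate(history):
    diffs = [history[i] - history[i - 1] for i in range(1, len(history))]
    return sum(diffs) / len(diffs)

# Mean overfitting rate, formula (2): the average per-epoch growth of the
# gap between the training and validation metric.
def mean_overfitting_rate(train_history, val_history):
    gaps = [t - v for t, v in zip(train_history, val_history)]
    return mean_convergence_rate(gaps)

acc_train = [0.35, 0.52, 0.61, 0.66, 0.69]  # illustrative accuracies
acc_val = [0.34, 0.48, 0.54, 0.56, 0.57]
print(round(mean_convergence_rate(acc_train), 4))            # → 0.085
print(round(mean_overfitting_rate(acc_train, acc_val), 4))   # → 0.0275
```
</p><p>For loss histories the MCR is negative while the loss is still falling, which matches the signs reported in Table 3.</p></div>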
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiment</head><p>This section presents the plan of the experiment. To evaluate the effectiveness of transfer learning, we will compare several popular architectures: VGG-Face (Figure <ref type="figure" target="#fig_1">2</ref>) and OpenFace (Figure <ref type="figure" target="#fig_2">3</ref>), which are neural networks trained for face recognition. Our hypothesis is that, since the task of face recognition is in some ways similar to FER, the weights of these networks will already contain the features needed to increase learning performance. We also chose ResNet-50 and MobileNet (Figure <ref type="figure" target="#fig_3">4</ref>) pretrained on the ImageNet dataset, because they are the standard choice as a backbone in transfer learning. In all these networks, the last layer was excluded, and all layers except the last 4 were frozen. The model structures of VGG-Face and OpenFace were loaded using the deepface library <ref type="bibr" target="#b34">[35]</ref>; the pretrained weights are available at <ref type="bibr" target="#b35">[36]</ref><ref type="bibr" target="#b36">[37]</ref><ref type="bibr" target="#b37">[38]</ref>. ResNet-50 and MobileNet were loaded using the Keras framework <ref type="bibr" target="#b38">[39]</ref>. Each model will be trained with a fixed set of hyperparameters: a learning rate of 10^-4 and 20 epochs. Key metrics will be measured every 5 epochs. As the loss function we chose categorical cross-entropy.</p><p>To compare the efficiency of transfer learning, we will train the neural networks in 2 versions: with pretrained weights and with randomly initialized weights.
This approach will allow us to determine how and at what stages the pre-trained weights affect the efficiency of the model.</p><p>After the experiment, we will find out for which model the pre-trained weights give the greatest benefit compared to random initialization, and determine which model converges faster than the others, is more resistant to overfitting, and shows the highest accuracy.</p><p>Training will be carried out in the Google Colaboratory environment.</p></div>
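<div xmlns="http://www.tei-c.org/ns/1.0"><p>The training setup described above can be sketched with Keras, using MobileNet as an example. This is a simplified sketch, not the exact experimental configuration: the optimizer choice (Adam) and the resizing of the 48x48 grayscale FER2013 images to 224x224 RGB are our assumptions; the paper specifies only the learning rate (10^-4), 20 epochs, categorical cross-entropy, and freezing all but the last 4 backbone layers:</p><p>
```python
# Transfer-learning sketch with Keras: load a backbone without its
# classification top, freeze all but the last 4 layers, and add a new
# 7-way softmax head for the FER2013 emotion classes.
import tensorflow as tf

def build_fer_model(pretrained=True, n_classes=7):
    weights = "imagenet" if pretrained else None  # pretrained vs. random init
    backbone = tf.keras.applications.MobileNet(
        include_top=False, weights=weights,
        input_shape=(224, 224, 3),  # assumes images resized to RGB 224x224
        pooling="avg")
    for layer in backbone.layers[:-4]:  # freeze everything but the last 4
        layer.trainable = False
    head = tf.keras.layers.Dense(n_classes, activation="softmax")
    model = tf.keras.Sequential([backbone, head])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # lr 10^-4
        loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# model.fit(train_ds, validation_data=val_ds, epochs=20) would then run
# the 20-epoch training described in the experiment plan.
```
</p><p>Calling build_fer_model(pretrained=False) gives the randomly initialized variant used as the baseline in the comparison.</p></div>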
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>The results of the experiments are presented in Figures 5-<ref type="figure">8</ref> and Tables 2-<ref type="table">5</ref>. High-resolution versions of all images are available at <ref type="bibr" target="#b39">[40]</ref>. Table <ref type="table" target="#tab_1">2</ref> shows the MCR (1) for each model based on accuracy, and Table <ref type="table">3</ref> shows the MCR based on loss.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">ML Results</head><p>Table <ref type="table">4</ref> shows the MOFR (2) for each model based on accuracy, and Table <ref type="table">5</ref> shows the MOFR based on loss. The MOFR shows how much the gap between the metrics on the validation and training sets grows on average per epoch, that is, how quickly the model overfits.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussions</head><p>The experiment revealed that the pre-trained models perform better than the randomly initialized ones on the FER task. The pre-trained models also had a higher mean convergence rate in the first epochs (1-10), but the values then evened out, and in some cases, at epochs 15-20, the randomly initialized model converged faster. This is mainly because the pre-trained model had by that point reached an accuracy above 0.8, so its quality gains slowed down. On the other hand, pre-trained models are more prone to overfitting; when using them, it is therefore desirable to apply regularization methods or data augmentation.</p><p>The best model in terms of initial and final accuracy on the validation set is VGGFace_pretrained, so its weights are initially best suited to the FER task. In our experiment, however, this model had the worst convergence rate, so other hyperparameters should be used for its training, for example a higher learning rate or additional dense classification layers.</p><p>The second face recognition model, OpenFace, shows a level of accuracy comparable to a standard transfer learning solution, ResNet-50, while having far fewer parameters, so it fits and predicts faster: OpenFace has 3,743,280 parameters and ResNet-50 has 23,587,712. MobileNet has the fewest parameters (3,228,864), but its performance is lower than OpenFace's.
Also, OpenFace has the highest convergence rate and overfitting rate compared with the other models.</p><p>Thus, the face recognition based models proved to be at a fairly high level, in some cases even surpassing standard models such as ResNet-50 and MobileNet.</p><p>As can be seen from Figures 8-14, such emotions as happiness, anger, fear, and surprise are recognized best, while disgust is recognized worst of all. This is because this class is the least represented in the dataset. In addition, some pictures are labeled rather controversially (for example, pictures 12-13); on these examples the neural networks show low confidence in the image class.</p><p>Based on the results of the experiment, the following learning algorithm was developed, which we suggest for use in FER systems:</p><p>Preprocessing: 1) apply a face detection model to the image; you can use one of the pre-trained models or train your own; 2) apply various augmentations to the images; this will balance the classes (if the original dataset is unbalanced) and also increase the stability of the model on new data.</p><p>Training: 1) select a backbone model: if speed is more important within the task and there is enough data for training, we recommend choosing OpenFace; if the quality of recognition is more important and there are not enough resources for full model training, choose VGG-Face; 2) freeze all layers of the backbone and add fully connected layers on top of them; 3) select hyperparameters and start the learning process.</p></div>
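<div xmlns="http://www.tei-c.org/ns/1.0"><p>The class-balancing step from the preprocessing recommendations can be illustrated with simple oversampling (a sketch of one possible approach, with a hypothetical helper name; in practice each duplicated image would also receive a random augmentation such as a flip or rotation rather than being used verbatim):</p><p>
```python
# Oversample minority classes so every emotion has as many samples as the
# largest class; the duplicated indices would then be augmented.
import numpy as np

def oversample_to_balance(labels, seed=42):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()  # size of the largest class
    balanced = []
    for cls in classes:
        idx = np.flatnonzero(labels == cls)
        # Draw extra samples with replacement to reach the target count.
        extra = rng.choice(idx, size=target - len(idx), replace=True)
        balanced.append(np.concatenate([idx, extra]))
    return np.concatenate(balanced)

# FER2013-like imbalance: class 2 (standing in for "Disgust") is rare.
labels = [0] * 50 + [1] * 50 + [2] * 5
idx = oversample_to_balance(labels)
counts = np.bincount(np.asarray(labels)[idx])
print(counts)  # every class now has 50 samples
```
</p><p>On FER2013 itself this would bring the 547 Disgust images up to the size of the Happy class by repetition plus augmentation.</p></div>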
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions</head><p>As a result of the research, the aim and goals of the work were achieved. We formulated an effective algorithm for neural network identification and usage within the framework of the FER task, determined which neural network architectures are better to use as a backbone for FER tasks in different situations, and compared the effectiveness of face recognition based backbones with standard solutions for transfer learning.</p><p>We chose one of the most popular datasets for the FER task, FER-2013. While analyzing its structure, we found that it is quite unbalanced. On the one hand this is a drawback, because models will learn to distinguish the minority class less well; on the other hand, it shows how the models will perform on real-world datasets, which are often unbalanced.</p><p>We then defined key metrics for analyzing network performance during learning. The proposed metrics showed the efficiency of transfer learning for each architecture and determined which pre-trained weights are most suitable for the FER task, leading to faster convergence and a lower overfitting rate.</p><p>As part of this work, we organized an experiment and conducted a comparative analysis of the quality of the most popular neural network architectures for transfer learning (ResNet-50, MobileNet) against networks for face recognition (OpenFace, VGG-Face) within the FER task using various metrics. The obtained results show only the general performance of the networks, because they were all trained under the same conditions and the best set of hyperparameters was not selected.</p><p>Based on the analysis of the experimental results, we recommend using the algorithm proposed in this article with a pretrained VGG-Face. Under the condition of limited resources, and with the use of regularization methods, we recommend OpenFace as an alternative.
However, we also recommend tuning the classifier separately for each specific task, because this will give a gain in quality.</p><p>A deeper analysis of the effectiveness of neural networks requires a more extensive study, which is not the purpose of this work: testing a larger class of architectures on a larger number of data sets and using various types of classifiers on top of the embeddings (including those not based on neural networks).</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Examples of images<ref type="bibr" target="#b30">[31]</ref> </figDesc><graphic coords="3,163.08,499.56,268.92,256.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: VGG-Face architecture<ref type="bibr" target="#b31">[32]</ref> </figDesc><graphic coords="4,85.20,581.64,424.56,155.28" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: OpenFace architecture [33]</figDesc><graphic coords="5,149.04,72.00,296.76,209.28" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: MobileNet-50 architecture<ref type="bibr" target="#b33">[34]</ref> </figDesc><graphic coords="5,169.92,308.88,255.12,281.88" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figures 5 -Figure 5 :Figure 6 :Figure 7 :Figure 8 :</head><label>55678</label><figDesc>Figures 5 -Figure 8 show the accuracy and loss during training for MobileNet, OpenFace, ResNet-50, VGG-Face models. Each graph shows the results of the pretrained model (straight line) and the randomly initialized model (dashed line).</figDesc><graphic coords="6,72.00,224.04,459.60,216.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figures 9 -</head><label>9</label><figDesc>Figures 9 -Figure 10 show accuracy and loss of the models after the first epoch of training.</figDesc><graphic coords="9,73.08,534.84,448.80,191.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: Initial accuracy</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 10 Figures 11 -Figure 11 :Figure 12 :Figure 13 :Figure 14 :Figure 15 :Figure 16 :Figure 17 :</head><label>101111121314151617</label><figDesc>Figure 10: Initial loss</figDesc><graphic coords="10,77.04,72.00,441.00,171.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="7,72.00,72.00,453.00,232.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="7,74.04,358.20,447.00,256.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Number of pictures for each class</figDesc><table><row><cell>Labels</cell><cell>Angry</cell><cell cols="2">Surprise Disgust</cell><cell cols="2">Neutral Happy</cell><cell>Fear</cell><cell cols="2">Sadness Total</cell></row><row><cell>Number of</cell><cell>4953</cell><cell>4002</cell><cell>547</cell><cell>6198</cell><cell>8989</cell><cell>5121</cell><cell>6077</cell><cell>35887</cell></row><row><cell>examples</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Accuracy based MCR</figDesc><table><row><cell>Model</cell><cell>Epoch 1-5</cell><cell cols="3">Epoch 6-10 Epoch 11-15 Epoch 16-20</cell></row><row><cell>MobileNet_pretrained</cell><cell>0,039937</cell><cell>0,02512</cell><cell>0,019603</cell><cell>0,016789</cell></row><row><cell>MobileNet_random</cell><cell>0,013814</cell><cell>0,021909</cell><cell>0,023413</cell><cell>0,025798</cell></row><row><cell>openface_pretrained</cell><cell>0,080035</cell><cell>0,014538</cell><cell>0,003204</cell><cell>0,001637</cell></row><row><cell>openface_random</cell><cell>0,011508</cell><cell>0,001289</cell><cell>0,002027</cell><cell>0,0018</cell></row><row><cell>ResNet50_pretrained</cell><cell>0,063678</cell><cell>0,026667</cell><cell>0,008366</cell><cell>0,001567</cell></row><row><cell>ResNet50_random</cell><cell>0,052546</cell><cell>0,045559</cell><cell>0,023246</cell><cell>0,007233</cell></row><row><cell>vggfase_pretrained</cell><cell>0,001372</cell><cell>0,000111</cell><cell>0,000293</cell><cell>0,000453</cell></row><row><cell>vggfase_random</cell><cell>0</cell><cell>0</cell><cell>0</cell><cell>0</cell></row><row><cell>Table 3</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Loss based MCR</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Model</cell><cell>Epoch 1-5</cell><cell cols="3">Epoch 6-10 Epoch 11-15 Epoch 
16-20</cell></row><row><cell>MobileNet_pretrained</cell><cell>-0,09537</cell><cell>-0,05194</cell><cell>-0,04445</cell><cell>-0,03934</cell></row><row><cell>MobileNet_random</cell><cell>-0,0492</cell><cell>-0,03906</cell><cell>-0,05088</cell><cell>-0,06358</cell></row><row><cell>openface_pretrained</cell><cell>-0,20339</cell><cell>-0,041</cell><cell>-0,00892</cell><cell>-0,00399</cell></row><row><cell>openface_random</cell><cell>-0,0145</cell><cell>-0,00418</cell><cell>-0,0031</cell><cell>-0,0008</cell></row><row><cell>ResNet50_pretrained</cell><cell>-0,14797</cell><cell>-0,06965</cell><cell>-0,0328</cell><cell>-0,01256</cell></row><row><cell>ResNet50_random</cell><cell>-0,12636</cell><cell>-0,11673</cell><cell>-0,06173</cell><cell>-0,02249</cell></row><row><cell>vggfase_pretrained</cell><cell>-0,00415</cell><cell>-0,00199</cell><cell>-3,9E-05</cell><cell>-0,00179</cell></row><row><cell>vggfase_random</cell><cell>-0,00034</cell><cell>-9,1E-05</cell><cell>-4E-05</cell><cell>-6,1E-05</cell></row><row><cell>Table 4</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Accuracy based MOFR</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Model</cell><cell>Epoch 1-5</cell><cell>Epoch 6-10</cell><cell cols="2">Epoch 11-15 Epoch 16-20</cell></row><row><cell>MobileNet_pretrained</cell><cell>0,033082</cell><cell>0,042063</cell><cell>0,007035</cell><cell>0,026356</cell></row><row><cell>MobileNet_random</cell><cell>0,017102</cell><cell>0,017952</cell><cell>0,018843</cell><cell>0,021943</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Dominant and Complementary Emotion Recognition from Still Images of Faces</title>
		<author>
			<persName><forename type="first">J</forename><surname>Guo</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2018.2831927</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="26391" to="26403" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Weakly Supervised Emotion Intensity Prediction for Recognition of Emotions in Images</title>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Xu</surname></persName>
		</author>
		<idno type="DOI">10.1109/TMM.2020.3007352</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Multimedia</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="2033" to="2044" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Multisource Transfer Learning for Cross-Subject EEG Emotion Recognition</title>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y. -Y</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C. -L</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>He</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCYB.2019.2904052</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Cybernetics</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="3281" to="3293" />
			<date type="published" when="2020-07">July 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The Efficiency of Images Reduction Algorithms with Small-Sized and Linear Details</title>
		<author>
			<persName><forename type="first">K</forename><surname>Smelyakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Datsenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Skrypka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Akhundov</surname></persName>
		</author>
		<idno type="DOI">10.1109/PICST47496.2019.9061250</idno>
	</analytic>
	<monogr>
		<title level="m">2019 IEEE International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&amp;T)</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="745" to="750" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A Review of Face Recognition Technology</title>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Mu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Peng</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2020.3011028</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="139110" to="139120" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Towards Age-Invariant Face Recognition</title>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Feng</surname></persName>
		</author>
		<idno type="DOI">10.1109/TPAMI.2020.3011426</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="474" to="487" />
			<date type="published" when="2022-01-01">1 Jan. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Emotion Recognition System from Speech and Visual Information based on Convolutional Neural Networks</title>
		<author>
			<persName><forename type="first">N. -C</forename><surname>Ristea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">C</forename><surname>Duţu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radoi</surname></persName>
		</author>
		<idno type="DOI">10.1109/SPED.2019.8906538</idno>
	</analytic>
	<monogr>
		<title level="m">2019 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Multi-Classifier Speech Emotion Recognition System</title>
		<author>
			<persName><forename type="first">P</forename><surname>Partila</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tovarek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Voznak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rozhon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Sevcik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Baran</surname></persName>
		</author>
		<idno type="DOI">10.1109/TELFOR.2018.8612050</idno>
	</analytic>
	<monogr>
		<title level="m">26th Telecommunications Forum (TELFOR)</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="4" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Implementation of combined method in constructing a trajectory for structure reconfiguration of a computer system with reconstructible structure and programmable logic</title>
		<author>
			<persName><forename type="first">V</forename><surname>Tokariev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Tkachov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Ilina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Partyka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2019-11">Nov, 2019</date>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="71" to="81" />
		</imprint>
	</monogr>
	<note>Selected Papers of the XIX International Scientific and Practical Conference &quot;Information Technologies and Security&quot; (ITS 2019)</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<ptr target="https://paperswithcode.com/task/facial-expression-recognition" />
		<title level="m">Facial Expression Recognition</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Efficiency of image convolution</title>
		<author>
			<persName><forename type="first">K</forename><surname>Smelyakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shupyliuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Martovytskyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tovchyrechko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ponomarenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/CAOL46282.2019.9019450</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 8th International Conference on Advanced Optoelectronics and Lasers (CAOL)</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="578" to="583" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Enabling AI in Future Wireless Networks: A Data Life Cycle Perspective</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">C</forename><surname>Nguyen</surname></persName>
		</author>
		<idno type="DOI">10.1109/COMST.2020.3024783</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Communications Surveys &amp; Tutorials</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="553" to="595" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Lattice: A Vision for Machine Learning, Data Engineering, and Policy Considerations for Digital Agriculture at Scale</title>
		<author>
			<persName><forename type="first">S</forename><surname>Chaterji</surname></persName>
		</author>
		<idno type="DOI">10.1109/OJCS.2021.3085846</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Open Journal of the Computer Society</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="227" to="240" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Emotion Recognition Based On CNN</title>
		<author>
			<persName><forename type="first">G</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Meng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Meng</surname></persName>
		</author>
		<idno type="DOI">10.23919/ChiCC.2019.8866540</idno>
	</analytic>
	<monogr>
		<title level="m">2019 Chinese Control Conference (CCC)</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="8627" to="8630" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Artificial Intelligence Image Recognition Method Based on Convolutional Neural Network Algorithm</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Tian</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2020.3006097</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="125731" to="125744" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Gradational Correction Models Efficiency Analysis of Low-Light Digital Image</title>
		<author>
			<persName><forename type="first">K</forename><surname>Smelyakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chupryna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hvozdiev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sandrkin</surname></persName>
		</author>
		<idno type="DOI">10.1109/eStream.2019.8732174</idno>
	</analytic>
	<monogr>
		<title level="m">Open Conference of Electrical, Electronic and Information Sciences (eStream)</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">The Effect of Quality Control on Accuracy of Digital Pathology Image Analysis</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">I</forename><surname>Wright</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Dunn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">G A</forename><surname>Hutchins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Treanor</surname></persName>
		</author>
		<idno type="DOI">10.1109/JBHI.2020.3046094</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Journal of Biomedical and Health Informatics</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="307" to="314" />
			<date type="published" when="2021-02">Feb. 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Deep Guidance Network for Biomedical Image Segmentation</title>
		<author>
			<persName><forename type="first">P</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Wu</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2020.3002835</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="116106" to="116116" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation</title>
		<author>
			<persName><forename type="first">G</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1109/TPAMI.2018.2840695</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="1559" to="1572" />
			<date type="published" when="2019-07-01">1 July 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">A Convolutional Neural Network for Learning Local Feature Descriptors on Multispectral Images</title>
		<author>
			<persName><forename type="first">C</forename><surname>Nunes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Pádua</surname></persName>
		</author>
		<idno type="DOI">10.1109/TLA.2022.9661460</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Latin America Transactions</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="215" to="222" />
			<date type="published" when="2022-02">Feb. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Automatic CNN Compression Based on Hyperparameter Learning</title>
		<author>
			<persName><forename type="first">N</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Meng</surname></persName>
		</author>
		<idno type="DOI">10.1109/IJCNN52387.2021.9533329</idno>
	</analytic>
	<monogr>
		<title level="m">International Joint Conference on Neural Networks (IJCNN)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Parameter Distribution Balanced CNNs</title>
		<author>
			<persName><forename type="first">L</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1109/TNNLS.2019.2956390</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks and Learning Systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="4600" to="4609" />
			<date type="published" when="2020-11">Nov. 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Hyperparameters Tuning of Faster R-CNN Deep Learning Transfer for Persistent Object Detection in Radar Images</title>
		<author>
			<persName><forename type="first">R</forename><surname>Gonzales-Martínez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Machacuay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rotta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chinguel</surname></persName>
		</author>
		<idno type="DOI">10.1109/TLA.2022.9675474</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Latin America Transactions</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="677" to="685" />
			<date type="published" when="2022-04">April 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Toward Deep Transfer Learning in Industrial Internet of Things</title>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Griffith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Golmie</surname></persName>
		</author>
		<idno type="DOI">10.1109/JIOT.2021.3062482</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Internet of Things Journal</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">15</biblScope>
			<biblScope unit="page" from="12163" to="12175" />
			<date type="published" when="2021-08-01">1 Aug. 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Lung and Pancreatic Tumor Characterization in the Deep Learning Era: Novel Supervised and Unsupervised Learning Approaches</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hussein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kandel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">W</forename><surname>Bolan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Wallace</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Bagci</surname></persName>
		</author>
		<idno type="DOI">10.1109/TMI.2019.2894349</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Medical Imaging</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="1777" to="1787" />
			<date type="published" when="2019-08">Aug. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<ptr target="https://arxiv.org/pdf/1804.06655.pdf?source=post_page" />
		<title level="m">Deep Face Recognition: A Survey</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<ptr target="https://github.com/serengil/deepface" />
		<title level="m">DeepFace</title>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">A Deep Transfer Learning Model for Packaged Integrated Circuit Failure Detection by Terahertz Imaging</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Liu</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2021.3118687</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="138608" to="138617" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Two-level method of fast ReRouting in softwaredefined networks</title>
		<author>
			<persName><forename type="first">O</forename><surname>Lemeshko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Yeremenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Hailan</surname></persName>
		</author>
		<idno type="DOI">10.1109/INFOCOMMST.2017.8246420</idno>
	</analytic>
	<monogr>
		<title level="m">2017 4th International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&amp;T)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="376" to="379" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Formal representation of knowledge for infocommunication computerized training systems</title>
		<author>
			<persName><forename type="first">I</forename><surname>Shubin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kyrychenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Goncharov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Snisar</surname></persName>
		</author>
		<idno type="DOI">10.1109/INFOCOMMST.2017.8246399</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 4th International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&amp;T)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="287" to="291" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<ptr target="https://www.kaggle.com/msambare/fer2013" />
		<title level="m">Learn facial expressions from an image</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<ptr target="https://www.researchgate.net/figure/VGG-Face-network-architecture_fig2_319284653" />
		<title level="m">VGG-Face network architecture</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<ptr target="https://www.cs.cmu.edu/~satya/docdir/CMU-CS-16-118.pdf" />
		<title level="m">OpenFace architecture</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<ptr target="https://arxiv.org/pdf/1704.04861.pdf" />
		<title level="m">MobileNet architecture</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<ptr target="http://reports-archive.adm.cs.cmu.edu/anon/2016/CMU-CS-16-118.pdf" />
		<title level="m">OpenFace: A general-purpose face recognition library with mobile applications</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<ptr target="https://drive.google.com/file/d/1CPSeum3HpopfomUEK1gybeuIVoeJT_Eo/view" />
		<title level="m">VGGFace</title>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<ptr target="https://drive.google.com/file/d/1LSe1YCV1x-BfNnfb7DFZTNpv_Q9jITxn/view" />
		<title level="m">OpenFace</title>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<ptr target="https://keras.io/api/applications/resnet/#resnet50-function" />
		<title level="m">ResNet and ResNetV2</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<monogr>
		<ptr target="https://keras.io/api/applications/mobilenet" />
		<title level="m">Keras MobileNet</title>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<monogr>
		<ptr target="https://docs.google.com/document/d/1Z_S_FpRkv4Xf2cRAqHxo23BUv7aYqtMZ59aJrpvYf-M/edit?usp=sharing" />
		<title level="m">All images</title>
				<imprint/>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
