<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">CNN-powered body and face detection for intelligent people counting in Covid-19 restricted places</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Michał</forename><surname>Wieczorek</surname></persName>
							<email>michal_wieczorek@hotmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Applied Mathematics</orgName>
								<orgName type="institution">Silesian University of Technology</orgName>
								<address>
									<addrLine>Kaszubska 23</addrLine>
									<postCode>44-100</postCode>
									<settlement>Gliwice</settlement>
									<country key="PL">Poland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">CNN-powered body and face detection for intelligent people counting in Covid-19 restricted places</title>
					</analytic>
					<monogr>
						<title level="m">26th International Conference Information Society and University Studies</title>
						<meeting>
							<address>
								<settlement>Kaunas</settlement>
								<country key="LT">Lithuania</country>
							</address>
						</meeting>
						<imprint>
							<date type="published" when="2021-04-23">April 23, 2021</date>
						</imprint>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">10B4C83AE34C0D5C072908C51021BFB7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:39+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Convolutional Neural Network</term>
					<term>Detection</term>
					<term>Recognition</term>
					<term>Image processing</term>
					<term>Deep Learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In the reality of the 2021 Covid-19 pandemic, governments have introduced many restrictions to reduce the speed at which the virus spreads through society. One example of such restrictions is the limit on the number of people per square meter in public places and shopping malls. Because manually counting each person in such places is not feasible given limited time and money, this limit is widely abused, making the pandemic outbreak more dangerous. To face this problem, the author presents a novel face and body detection model for CCTV (Closed-Circuit Television) monitoring systems that automatically counts the number of people in the monitored area by the use of Deep Learning.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Object detection is an important field of image processing. It covers many important operations related to the detection and recognition of various objects, among which body detection and face recognition are particularly important. We can find many applications of these ideas and a variety of models developed for particular purposes.</p><p>The model presented in Li <ref type="bibr" target="#b0">[1]</ref> used a complex approach to body pose detection based on an SVM (Support Vector Machine) working together with PSO (Particle Swarm Optimization). The results showed that the heuristic model was able to detect candidate locations, which were further classified by the SVM classifier. The idea presented in Chen, Wang, Li, and Hong <ref type="bibr" target="#b1">[2]</ref> was based on repetitive assembling of information from object detection. In Wang, Chen, Zheng, and Li <ref type="bibr" target="#b2">[3]</ref>, a facial rotation model was based on marking changes compared by the proposed classifier. A network construction based on clustering was proposed in Zhao, Luo, Quan, Liu, and Wang <ref type="bibr" target="#b3">[4]</ref>; the idea introduced cluster-wise processing for body detection. A model proposed in our previous research, Woźniak, Wieczorek, Siłka, and Połap <ref type="bibr" target="#b4">[5]</ref>, was developed for sensoric data. The idea was oriented on the evaluation of numerical information about body position read from sensors located on the human body, for which we developed a Recurrent Neural Network model. The research presented in Yun, Park, and Cho <ref type="bibr" target="#b5">[6]</ref> is oriented on pose rotation detection, where the main detection part was implemented using a self-supervised learning procedure. Supervised learning was also applied to eye tracking techniques in Huang, Chen, Zhou, and Xu <ref type="bibr" target="#b6">[7]</ref>. 
In Nadeem, Jalal, and Kim <ref type="bibr" target="#b7">[8]</ref>, a neural network model was proposed that evaluates body actions to recognize the state a human was in. The model proposed in Winnicka, Kęsik, Połap, Woźniak, and Marszałek <ref type="bibr" target="#b8">[9]</ref> was oriented on Convolutional Neural Networks working as part of an intelligent home infrastructure, in which human actions were evaluated from images. In Barra, Barra, Bisogni, De Marsico, and Nappi <ref type="bibr" target="#b9">[10]</ref>, a web-shaped model was proposed, where pose detection was based on sampling comparisons. An important part of each training process is data augmentation, from which we can get better data for training. When the initial images are not well fitted for the model, we can perform modifications to improve the set. In Abayomi-Alli, Damaševičius, Wieczorek, and Woźniak <ref type="bibr" target="#b10">[11]</ref>, a model based on principal component resampling was proposed. The images were analyzed and, for the key features, repetitions were proposed. In Woźniak and Połap <ref type="bibr" target="#b11">[12]</ref>, a composition of neural networks with soft sets was proposed. The model returned classes of objects detected in the image, while comparisons and decisions were based on a soft set classifier. The model proposed in Bin, Chen, Wei, Chen, Gao, and Sang <ref type="bibr" target="#b12">[13]</ref> was developed using Graph Convolutional Neural Networks. This complex structure was able to recognize variants of human body structure.</p><p>In this paper, a model of a Lightweight Convolutional Neural Network is proposed. The model is focused on fast detection of the human upper body and faces to roughly count the number of people in the desired area.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Neural Network Architecture</head><p>The final network model was developed with the intention of being as lightweight as possible while maintaining high accuracy. In order to achieve this, the number of layers, as well as the synapse count, was kept to a bare minimum. The final model can be seen in Fig. <ref type="figure" target="#fig_0">1</ref>.</p><p>As presented, the developed CNN (Convolutional Neural Network) contains 3 main convolutional segments, each followed by max pooling. In the first and third segments the kernel size is set to 5x5. In the middle segment, however, the filter sizes vary and are set to 7x7, 5x5 and 3x3 in order to extract as many features as possible. All convolutional layers are followed by the ReLU (Rectified Linear Unit) activation function.</p><p>The pooling layers were set to reduce the image by a factor of 2.</p></div>
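The layer sequence described above can be sketched as plain shape arithmetic: each segment applies its convolutions and then halves the feature map. This is only a minimal sketch; "same" convolution padding, stride 1, and a 128x128 input size are assumptions, since the paper does not state them.

```python
def conv_out(size, kernel, padding="same"):
    """Spatial size after a stride-1 convolution."""
    return size if padding == "same" else size - kernel + 1

def pool_out(size, factor=2):
    """Spatial size after max pooling by `factor` (the paper uses 2)."""
    return size // factor

def lightweight_cnn_size(size):
    # Segment 1: 5x5 convolution + ReLU, then max pooling
    size = pool_out(conv_out(size, 5))
    # Segment 2: varied kernels 7x7, 5x5, 3x3 + ReLU, then max pooling
    for k in (7, 5, 3):
        size = conv_out(size, k)
    size = pool_out(size)
    # Segment 3: 5x5 convolution + ReLU, then max pooling
    size = pool_out(conv_out(size, 5))
    return size

print(lightweight_cnn_size(128))  # each pooling halves the map: 128 -> 64 -> 32 -> 16
```

With these assumptions only the three pooling layers change the spatial resolution, which keeps the parameter count of any final dense layers small.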
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">System Model</head><p>In the presented detection model, two neural networks were used for the final prediction. The first one detects 2 abstract classes:</p><p>• body,</p><p>• no-body.</p><p>And the second one:</p><p>• face,</p><p>• no-face. </p></div>
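The counting logic built on the two binary classifiers can be sketched as follows. The rule for combining the detectors (a region counts as a person if either network fires), the thresholds, and the stand-in classifier functions are all illustrative assumptions, not the paper's stated procedure.

```python
def count_people(regions, body_net, face_net, body_thr=0.5, face_thr=0.5):
    """Count image regions accepted by either binary CNN.

    `regions` are candidate image patches; `body_net` and `face_net` are
    stand-ins for the two networks, each returning the probability of its
    positive class (body / face).
    """
    count = 0
    for region in regions:
        if body_net(region) >= body_thr or face_net(region) >= face_thr:
            count += 1
    return count

# Toy stand-ins: the "probability" is encoded directly in each region dict.
body_net = lambda r: r["body_p"]
face_net = lambda r: r["face_p"]
regions = [
    {"body_p": 0.9, "face_p": 0.2},  # body detected
    {"body_p": 0.3, "face_p": 0.8},  # only the face is visible
    {"body_p": 0.1, "face_p": 0.1},  # background
]
print(count_people(regions, body_net, face_net))  # 2
```

Separate thresholds per network are what would later allow the per-camera fine-tuning mentioned in the results.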
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Model's performance</head><p>The presented neural network model achieved a high accuracy of over 99.94% for both face and body detection. Final metrics are shown in Tab. 2 and the training plots can be found in Fig. <ref type="figure" target="#fig_1">2</ref>. The final confusion matrices are in Fig. <ref type="figure" target="#fig_2">3</ref> and Fig. <ref type="figure">4</ref>. The metrics used are listed in Tab. 1.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>The results of the final system are shown in Fig. <ref type="figure">6</ref>. In all examples the system was set with the same default parameters to present the accuracy "out of the box". Because of the dual-CNN model, there is a possibility to better fit the parameters to the specific camera and view, making the predictions even more accurate.</p><p>Fig. <ref type="figure">5</ref> shows one example of the system in action. As we can see, in most cases the CNN correctly detects human silhouettes in the image and draws a bounding box around them. In some cases, however, especially on a dark background, the system has a problem correctly detecting the body and the certainty is too low to recognize the batch as a human. In some other regions of the image (especially in the top left) there are some inverted examples, where batches of the image are falsely recognized as a body. These problems are mostly due to the fact that the model was made to be general and to work in the most common scenarios. To improve the classification and better fit a specific camera, fine-tuning of the detection parameters, as well as some post-training, would be necessary.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 2 Comparison of accuracy with other detection models (Detection Model, Accuracy)</head><p>This work: 99.94%.
Hsu, Abdel-Mottaleb, and Jain <ref type="bibr" target="#b13">[14]</ref>: 99.12%.
Wu, Yin, Wang, and Xu <ref type="bibr" target="#b14">[15]</ref>: 98.12%.
Cuimei, Zhiliang, Nan, and Jianhua <ref type="bibr" target="#b15">[16]</ref>: 98.01%.
Chi, Zhang, Xing, Lei, Li, and Zou <ref type="bibr" target="#b16">[17]</ref>: 96.4%.
Zhang, Chi, Lei, and Li <ref type="bibr" target="#b17">[18]</ref>: 96.2%.
Rowley, Baluja, and Kanade <ref type="bibr" target="#b18">[19]</ref>: 90.3%.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>Because of the global Covid-19 pandemic, many governments prepared restrictions on the number of people per square meter in public places such as shops and shopping malls. This approach, besides allowing shops to operate at all, created a huge logistical problem of counting the number of people inside the desired area. To follow these restrictions, every shop owner had to find some way of dealing with this problem. In smaller, regional shops the problem is almost invisible, because only 2-3 people are allowed in the entire building and the shop owner can easily count the people manually. In bigger shops, however, where the limits are much higher, the problem starts to occur because the shop personnel have to serve the clients and have no time or possibility to count every new guest. Some owners deal with it by hiring additional personnel just for this purpose; however, this is very expensive and, especially in times of a global pandemic, when income is much smaller, far from optimal. The situation is even worse in much bigger places such as shopping malls and other large public spaces. Because there are many entrances and exits, manual counting, even with additional personnel, would be near impossible from a financial and logistical point of view, so in most places the government restrictions are violated.</p><p>The presented system was made to address these problems with a minimal financial footprint, allowing shop owners to efficiently control the number of people in the monitored area and, because of that, to respect the official restrictions and actively fight the pandemic outbreak.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusion</head><p>As we can see, the presented Lightweight-CNN solution allows the user to quickly count people located in a store, in a shopping mall or in any open public space with small resource, money and power consumption. Because of the high effectiveness of face and body detection, the presented system counts the number of people with high accuracy, and due to the lightweight architecture it does so in real time even on high-resolution cameras. What is more, the small architecture allows the system to run even on smaller and less powerful devices such as laptops or, for example, a Raspberry Pi. Because of that, it reduces the need for more expensive and much more power-hungry PCs, making it more environmentally friendly.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Lightweight CNN Architecture</figDesc><graphic coords="3,89.29,84.19,416.69,234.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Training metrics over epochs</figDesc><graphic coords="4,93.89,165.99,100.01,75.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Confusion Matrix Face</figDesc><graphic coords="4,108.88,295.71,187.51,140.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Confusion Matrix Body</figDesc><graphic coords="4,108.88,491.06,187.51,140.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="5,89.29,229.27,416.70,275.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Most common machine learning metrics for the final model</figDesc><table><row><cell>• Accuracy,</cell></row><row><cell>• Precision,</cell></row><row><cell>• Recall,</cell></row><row><cell>• F1,</cell></row><row><cell>• Specificity,</cell></row><row><cell>• FDR (False Discovery Rate),</cell></row><row><cell>• FPR (False Positive Rate),</cell></row><row><cell>• FNR (False Negative Rate),</cell></row><row><cell>• FOR (False Omission Rate),</cell></row><row><cell>• NPV (Negative Predictive Value).</cell></row></table></figure>
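All of the metrics listed in Table 1 can be derived from the four cells of a binary confusion matrix. A minimal sketch follows; the example counts (tp=95, fp=5, fn=5, tn=95) are illustrative only and are not the paper's reported values.

```python
def metrics(tp, fp, fn, tn):
    """Compute the Table 1 metrics from a 2x2 confusion matrix."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)
    f1          = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    npv         = tn / (tn + fn)          # Negative Predictive Value
    return {
        "Accuracy": accuracy, "Precision": precision, "Recall": recall,
        "F1": f1, "Specificity": specificity,
        "FDR": 1 - precision,     # False Discovery Rate
        "FPR": 1 - specificity,   # False Positive Rate
        "FNR": 1 - recall,        # False Negative Rate
        "FOR": 1 - npv,           # False Omission Rate
        "NPV": npv,
    }

m = metrics(tp=95, fp=5, fn=5, tn=95)   # illustrative counts
print(round(m["Accuracy"], 2), round(m["F1"], 2))  # 0.95 0.95
```

Note that the four error rates are simply complements of precision, specificity, recall, and NPV, which is why a single confusion matrix per network suffices to fill the whole table.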
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The author would like to acknowledge the contribution to this research from the Rector of the Silesian University of Technology, Gliwice, Poland, under the program "Initiative of Excellence-Research University", grant no. 08/IDUB/2019/84.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Global face pose detection based on an improved pso-svm method</title>
		<author>
			<persName><forename type="first">S</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 International Conference on Aviation Safety and Information Technology</title>
				<meeting>the 2020 International Conference on Aviation Safety and Information Technology</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="549" to="553" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Repetitive assembly action recognition based on object detection and pose estimation</title>
		<author>
			<persName><forename type="first">C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Manufacturing Systems</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="325" to="333" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Fast head pose estimation via rotation-adaptive facial landmark detection for video edge computation</title>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="45023" to="45032" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Cluster-wise learning network for multi-person pose estimation</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Quan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">98</biblScope>
			<biblScope unit="page">107074</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Body pose prediction based on motion sensor data and recurrent neural network</title>
		<author>
			<persName><forename type="first">M</forename><surname>Woźniak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wieczorek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Siłka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Połap</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Industrial Informatics</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="2101" to="2111" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Robust human pose estimation for rotation via self-supervised learning</title>
		<author>
			<persName><forename type="first">K</forename><surname>Yun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="32502" to="32517" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Eye landmarks detection via weakly supervised learning</title>
		<author>
			<persName><forename type="first">B</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Xu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">98</biblScope>
			<biblScope unit="page">107076</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Human actions tracking and recognition based on body parts detection via artificial neural network</title>
		<author>
			<persName><forename type="first">A</forename><surname>Nadeem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jalal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2020 3rd International Conference on Advancements in Computational Sciences (ICACS), IEEE</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A multi-agent gamification system for managing smart homes</title>
		<author>
			<persName><forename type="first">A</forename><surname>Winnicka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kęsik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Połap</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Woźniak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Marszałek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page">1249</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Web-shaped model for head pose estimation: An approach for best exemplar selection</title>
		<author>
			<persName><forename type="first">P</forename><surname>Barra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Barra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bisogni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>De Marsico</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Nappi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="5457" to="5468" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Data augmentation using principal component resampling for image recognition by deep learning</title>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">O</forename><surname>Abayomi-Alli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Damaševičius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wieczorek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Woźniak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial Intelligence and Soft Computing</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="39" to="48" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Soft trees with neural components as image-processing technique for archeological excavations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Woźniak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Połap</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Personal and Ubiquitous Computing</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="363" to="375" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Structure-aware human pose estimation with graph convolutional networks</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Bin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z.-M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X.-S</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">106</biblScope>
			<biblScope unit="page">107410</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Face detection in color images</title>
		<author>
			<persName><forename type="first">R.-L</forename><surname>Hsu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abdel-Mottaleb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Jain</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE transactions on pattern analysis and machine intelligence</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="696" to="706" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Face detection with different scales based on faster r-cnn</title>
		<author>
			<persName><forename type="first">W</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Xu</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCYB.2018.2859482</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Cybernetics</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="page" from="4017" to="4028" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Human face detection algorithm via haar cascade classifier combined with three additional classifiers</title>
		<author>
			<persName><forename type="first">L</forename><surname>Cuimei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhiliang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Nan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Jianhua</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">13th IEEE International Conference on Electronic Measurement &amp; Instruments (ICEMI)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="page" from="483" to="487" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Selective refinement network for high performance face detection</title>
		<author>
			<persName><forename type="first">C</forename><surname>Chi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Xing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence</title>
				<meeting>the AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="8231" to="8238" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Refineface: Refinement neural network for high performance face detection</title>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Z</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.1109/TPAMI.2020.2997456</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="page" from="1" to="1" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Neural network-based face detection</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Rowley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Baluja</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kanade</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on pattern analysis and machine intelligence</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="23" to="38" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
