<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Image Recognition Model Based on a Vector of Uncorrelated Features</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Andriy</forename><surname>Fesenko</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>60 Volodymyrska Street</addrLine>
									<postCode>01033</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Volodymir</forename><surname>Druzhynin</surname></persName>
							<email>volodymirdruzhynin68@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>60 Volodymyrska Street</addrLine>
									<postCode>01033</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nataliia</forename><surname>Tsopa</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">National Technical University of Ukraine &quot;Igor Sikorsky Kyiv Polytechnic Institute&quot;</orgName>
								<address>
									<addrLine>37, Prospect Beresteiskyi</addrLine>
									<postCode>03056</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vladyslav</forename><surname>Synhaivskyi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>60 Volodymyrska Street</addrLine>
									<postCode>01033</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Image Recognition Model Based on a Vector of Uncorrelated Features</title>
					</analytic>
					<monogr>
						<title level="m">Information Technology and Implementation (IT&amp;I-2023), Kyiv, Ukraine, November 20-21, 2023</title>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">6F3CCEFC3BD1350B1BE7DE02B6C0D41B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:01+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Convolutional neural network</term>
					<term>neural network with multiple outputs</term>
					<term>image recognition</term>
					<term>machine learning</term>
					<term>artificial intelligence</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This article explores the task of image recognition based on multiple unrelated features, using the example of identifying circulating coins with convolutional neural networks. The paper first outlines the conventional method of image recognition, which employs a standard convolutional neural network with a single output, where each image corresponds to a distinct composite class. An analysis of the results and methodology shows that this architecture is not optimal for datasets with a large number of classes. To enhance recognition accuracy, we advocate a convolutional neural network architecture with multiple outputs, in which the network structure branches into several branches at a particular stage. With this type of neural network, each image corresponds to a list of several independent characteristics rather than a single composite category. This division creates multiple image recognition subtasks, each assigned to a separate branch of the neural network. The study presents a comparative analysis of the traditional neural network and a network with multiple outputs, with the objective of assessing the advantages and disadvantages of each approach and examining the reasons for the observed differences in results. The results indicate that the less conventional architecture of a convolutional neural network with multiple outputs can overcome the obstacles associated with a large number of composite classes and the ambiguity of identifying circulating coins by several traits. The suggested technique broadens the opportunities for using neural networks in image recognition tasks with numerous classes and a limited amount of training data.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Today, the convolutional neural network (CNN) is widely considered the most effective method for image recognition. The aim of this study is to create a CNN that can identify images based on a range of unrelated characteristics, exemplified by the identification of circulating coins from various European countries. Alternative techniques for improving the precision of image recognition were also examined. To achieve this goal, we carefully scrutinized the traditional methodology for image recognition and modified the neural network model to account for the unique characteristics of the input data.</p><p>Most features of a coin, such as its denomination, currency unit, country, year of issue, and mint markings, can be determined from images of its front and back. However, measuring the diameter and weight of the coin requires additional tools. Because capturing photographs of the coin's obverse and reverse is effortless and requires minimal equipment, the most convenient way for users to recognize coins is to identify their critical characteristics from these photos.</p><p>To effectively train a CNN, a large dataset of labeled images corresponding to specific categories is crucial. Although specialized websites offer ready-made datasets for most image recognition tasks, no premade dataset exists for this problem, so images from freely available online sources and personal photographs had to be labeled manually.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Analysis of Recent Research</head><p>When addressing image recognition tasks, it is customary to employ a classic approach: a standard convolutional neural network of the kind introduced by Yann LeCun in 1995 <ref type="bibr">[1]</ref>. Such a network accepts an image as its input and generates a vector of numbers expressing the network's degree of confidence in associating the image with each designated category. In this approach, images in the training dataset are assigned to distinct classes based on their distinctive and defining features. For the identification of circulating coins, each coin image must be assigned a unique composite attribute comprising its denomination, currency type, and country. Consequently, a separate category such as "1 coin, Ukraine" would be created for each such combination. For European nations, this classification scheme would result in hundreds of such categories, and because the dataset contains a limited number of images, each class would end up with only a few of them.</p><p>A conventional convolutional neural network consists of sequential convolution layers with the ReLU activation function and utilizes filters, as shown in Fig. <ref type="figure" target="#fig_0">1</ref> <ref type="bibr">[2]</ref>. As the image is processed, these filters generate feature maps, which pooling layers then subsample to decrease their spatial dimensions <ref type="bibr" target="#b0">[3]</ref>. Classification is performed at the end of the network by one or more fully connected layers. For many image recognition tasks, a pre-existing neural network architecture is used, typically one of the winners of the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC) <ref type="bibr" target="#b1">[4]</ref><ref type="bibr" target="#b2">[5]</ref><ref type="bibr" target="#b3">[6]</ref><ref type="bibr" target="#b4">[7]</ref><ref type="bibr" target="#b5">[8]</ref><ref type="bibr" target="#b6">[9]</ref>, a challenge that assesses algorithms for image recognition and object detection. The VGG16 <ref type="bibr" target="#b7">[10]</ref><ref type="bibr" target="#b8">[11]</ref><ref type="bibr" target="#b9">[12]</ref><ref type="bibr" target="#b10">[13]</ref><ref type="bibr" target="#b11">[14]</ref> and ResNet50 <ref type="bibr" target="#b12">[15]</ref><ref type="bibr" target="#b13">[16]</ref><ref type="bibr" target="#b14">[17]</ref><ref type="bibr" target="#b15">[18]</ref><ref type="bibr" target="#b16">[19]</ref> networks, both trained on the ImageNet dataset, are the most commonly employed. The study applied the transfer learning technique introduced by Stevo Bozinovski <ref type="bibr" target="#b17">[20]</ref> to transfer knowledge from the extensive ImageNet dataset to a smaller one. However, due to the specificity of the input data and limited training resources, this approach did not yield notably improved results, so a tailored neural network design proved necessary.</p></div>
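<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the transfer-learning setup concrete, the following is a minimal Keras sketch of the kind of pipeline discussed above. It is an illustration under stated assumptions, not the authors' code: the 212-class head anticipates the class count reported in Section 3, while the frozen base, pooling mode, and head sizes are our own choices.</p><code lang="python">
# Hedged sketch: transfer learning from ImageNet with ResNet50 in Keras.
# The 212-class head follows the dataset description; all other details
# (frozen base, pooling, head sizes, optimizer) are illustrative assumptions.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

num_classes = 212  # composite "denomination, currency unit, country" classes

base = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                input_shape=(150, 300, 3))
base.trainable = False  # reuse ImageNet features, train only the new head

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
</code></div>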
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Main Section</head><p>For this study, a dataset was compiled that included over 40,000 photographs of circulating coins from 36 countries, including Ukraine, Poland, the United Kingdom, and other European countries. The dataset features two square images (150x150 pixels) of each coin, showing the obverse and reverse sides against a white or light gray backdrop; placed side by side, they form a combined photograph standardized at 300 pixels wide and 150 pixels high. The dataset was split into two parts: 80% was used as training data for the neural network, and the remaining 20% served as test data to evaluate the network's performance. Within the training data, 80% was reserved solely for training, while 20% was used for validation during each epoch of neural network training, which enabled swift overfitting detection. Table <ref type="table" target="#tab_0">1</ref> displays the distribution of images in the dataset across countries. The dataset comprised 353 image classes categorized in the format "country of origin - currency unit - denomination - coin type." Each coin picture, with a 1:1 aspect ratio and showing the obverse or reverse side, was photographed against a white, light gray, light blue, or light yellow background; the background hue shifted with the technical specifications of the camera and with the intensity and color of the lighting. To enlarge the dataset and ensure reliable recognition even under suboptimal conditions, data augmentation was employed. The neural network was developed in the Google Colab environment, which provides ample computational power for creating, training, testing, and deploying neural networks. Through Google Colab notebooks, users can also work with TensorFlow, the open-source machine learning library <ref type="bibr" target="#b18">[21]</ref>, and its high-level API, Keras <ref type="bibr" target="#b19">[22]</ref>. Although TensorFlow supports multiple programming languages, Python was chosen for its widespread use in neural network development. Given the atypical rectangular format of the input data (300x150 pixels), the narrow standardization of image presentation (both sides of the coin on a white or light gray background), and the limited resources available, a customized neural network was chosen for analysis. See Figure <ref type="figure" target="#fig_1">2</ref> for the code outlining this network.</p><p>This neural network consists of three pairs of convolutional layers with 64, 128, and 256 filters of size 3x3, alternating with max pooling layers. The max pooling layers use 2x2 filters that select the highest value and pass it to the subsequent layer of the network. The network ends with two fully connected layers with the ReLU activation function, containing 512 and 256 neurons, respectively, followed by a single fully connected layer of 212 neurons that performs the classification. Figure <ref type="figure" target="#fig_1">3</ref> illustrates the resulting graph of the network. The network's performance results, obtained after training for 10 epochs, are displayed in Figure <ref type="figure" target="#fig_2">4</ref>.</p>
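<p>Figure 2 survives only as a bitmap, so the sketch below reconstructs the described single-output architecture in Keras directly from the prose; unstated details such as padding, the flattening step, and the exact layer order are assumptions.</p><code lang="python">
# Hedged reconstruction of the single-output CNN described in the text;
# padding and other unstated details are illustrative assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(150, 300, 3)),        # obverse and reverse side by side
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),              # keep the maximum of each 2x2 window
    layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
    layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(256, (3, 3), activation="relu", padding="same"),
    layers.Conv2D(256, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(212, activation="softmax"),  # one composite class per coin type
])
</code>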
<p>Based on the validation data graphs, it is apparent that increasing the number of epochs leads to overfitting: the network memorizes the training data, so recognition accuracy on the training data keeps improving while accuracy on the validation data stays almost unchanged. The recognition accuracy on test data was 0.9365 (Figure <ref type="figure" target="#fig_4">6</ref>).</p><p>Throughout the project, numerous experiments attempted to boost the network's complexity by adding more layers and neurons. These adjustments had a negligible impact on recognition accuracy on the test data, which remained within the range of 93-94%, while the demand for computational resources and the training time increased significantly. To improve recognition accuracy, the remaining options are to enlarge the image dataset through augmentation or to use an alternative neural network architecture. A promising solution, given the specificity of the data, is a convolutional neural network with multiple outputs.</p></div>
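<div xmlns="http://www.tei-c.org/ns/1.0"><p>A hedged sketch of the training and evaluation procedure implied above (the 80/20 train/validation split within the training data, augmentation, 10 epochs). The directory layout, augmentation ranges, and batch size are illustrative assumptions rather than the authors' settings.</p><code lang="python">
# Hedged sketch of training with augmentation and a 20% validation split;
# "coins/train" and all parameter values are illustrative assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,              # mild augmentation for imperfect photos
    brightness_range=(0.8, 1.2),    # simulate varying lighting conditions
    validation_split=0.2,           # 20% of the training data for validation
)
train_gen = datagen.flow_from_directory(
    "coins/train", target_size=(150, 300), batch_size=32, subset="training")
val_gen = datagen.flow_from_directory(
    "coins/train", target_size=(150, 300), batch_size=32, subset="validation")

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# Per-epoch validation curves reveal when overfitting begins (cf. Figure 5).
history = model.fit(train_gen, validation_data=val_gen, epochs=10)
</code></div>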
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Overview of the neural network with multiple outputs</head><p>When examining why the traditional convolutional neural network yields relatively low accuracy here, an unusual aspect of the data comes to light. Defining the traits of circulating coins as the composite "denomination, currency unit, country" allows only a coarse image categorization that produces a substantial number of classes (212, specifically). Each class therefore contains few images, and spreading a limited number of images across numerous classes degrades training accuracy. On closer examination, each coin classification comprises three distinct characteristics: denomination, currency unit, and country of origin. None of these traits alone provides enough information to identify a specific coin definitively. Focusing exclusively on denomination, for example, results in far fewer classes for the same number of training images, and having more images in each class enhances recognition accuracy for that particular feature. The accuracy of identifying coins solely by currency or country of origin increases in the same way. Moreover, extending the dataset with images of modern circulating coins from one more country might require roughly ten new composite classes, whereas the list of country classes would grow by only one, and the currency unit and denomination lists by zero to two classes each. When working with a dataset that includes many composite classes, a convolutional neural network with multiple outputs <ref type="bibr" target="#b20">[23]</ref> can prove beneficial. This architecture receives an input image and generates several correspondence vectors, each corresponding to a different classification. To achieve this, the neural network branches at a specific point into multiple branches, which can occur at any stage of the network (refer to Figure <ref type="figure" target="#fig_5">7</ref>). Such architectures are commonly used to process data of different formats that demand distinct processing methods, combining outcomes from classification and regression analysis <ref type="bibr" target="#b21">[24]</ref>; they can also handle images containing multiple unrelated characteristics. Applying a convolutional neural network with multiple outputs to coin recognition yields three correspondence vectors, one each for denomination, currency unit, and country. With this approach, the recognition accuracy for each feature is significantly higher than with a regular single-output network that uses compound characteristics as classes <ref type="bibr" target="#b9">[12]</ref>.</p><p>Another advantage of this approach is the ability to fine-tune the neural network with greater accuracy and to conserve resources during training by incorporating pre-trained models as branch layers after the branching point, because different feature models may require different numbers of training epochs. In this study, we implemented a neural network that branches into three branches at the beginning, each containing a single convolutional model layer.
This arrangement does not change the behavior of the overall network, but it allows each branch to be trained individually and its weights to be reused in the general network. Figure <ref type="figure" target="#fig_6">8</ref> shows the code for the general neural network, while Figure <ref type="figure" target="#fig_7">9</ref> depicts its graphical representation.</p><p>The grouped layers value_model, currency_model, and country_model identify the denomination, currency type, and country, respectively, using a structure similar to that of the conventional convolutional neural network model (refer to Figure <ref type="figure" target="#fig_1">2</ref>). They differ only in having fewer neurons in the final classification layer: eight for the denomination and currency models and twenty-six for the country model. The models were trained separately, with a consistent division of data into training and test sets, although each model had its own distribution of data between training and validation. Figures <ref type="figure" target="#fig_8">10</ref>, <ref type="figure" target="#fig_10">12</ref>, and <ref type="figure" target="#fig_12">14</ref> illustrate the accuracy and loss graphs for the training and validation data of the denomination, currency, and country recognition models, respectively. The denomination was recognized with an accuracy of 0.9941, the currency unit with an accuracy of 0.9989, and the country with an accuracy of 0.9641, as shown in Figures <ref type="figure" target="#fig_9">11</ref>, <ref type="figure" target="#fig_10">13</ref>, and <ref type="figure" target="#fig_11">15</ref>. The graphs show when each model begins to overfit, which determines the necessary number of training epochs for each individual model. The test data reveal high accuracy in denomination and currency recognition, roughly 99%, while country recognition accuracy is lower, approximately 96-97%. This lower precision may be attributed to the larger number of categories and characteristics in the input data; in particular, the presence of 2 euro commemorative coins in the dataset complicates identifying their country of origin within the European Union. Separating denomination, currency, and country recognition thus leads to higher accuracy than the traditional network, and denomination and currency can be determined with very high confidence. The final accuracy, calculated by multiplying the branch results, is 0.9574, surpassing the traditional convolutional neural network by 0.0209. More flexible configuration and training of the individual models within the three-output network could improve the results further.</p></div>
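<div xmlns="http://www.tei-c.org/ns/1.0"><p>Figures 8 and 9 also survive only as bitmaps; the functional-API sketch below shows the general shape of such a three-output network. The branch bodies are hypothetical stand-ins for the value_model, currency_model, and country_model layers (their internals follow Figure 2), the 8/8/26 class counts come from the text, and everything else is an assumption.</p><code lang="python">
# Hedged sketch of a three-output CNN in the Keras functional API; the
# branch() helper is a hypothetical stand-in for the pre-trained branch models.
from tensorflow.keras import Input, layers, models

def branch(x, n_classes, name):
    # Abbreviated stand-in for one branch model (cf. the network in Figure 2).
    x = layers.Conv2D(64, (3, 3), activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    return layers.Dense(n_classes, activation="softmax", name=name)(x)

inputs = Input(shape=(150, 300, 3))
outputs = [
    branch(inputs, 8, "value"),      # denomination: 8 classes
    branch(inputs, 8, "currency"),   # currency unit: 8 classes
    branch(inputs, 26, "country"),   # country of origin: 26 classes
]
model = models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # applied to each output
              metrics=["accuracy"])
# The combined accuracy is the product of the branch accuracies:
# 0.9941 * 0.9989 * 0.9641 = 0.9574 (approximately), as reported in the text.
</code></div>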
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>The recognition of coins from images is a challenging task because it relies on distinguishing unrelated characteristics, but this also allows various approaches to classifying images and constructing convolutional neural networks. Our study first examined the traditional approach, in which each image belongs to a single composite class identifying its denomination, currency, and country, and the neural network has one output. A second method was then presented that uses a neural network with multiple outputs: each image feature is classified separately, and the network has three outputs, one for each feature.</p><p>Due to the large number of small image classes in the conventional approach, the recognition result was a suboptimal 93-94%. Branching the neural network into three separate branches, one per characteristic, resulted in significant improvement, with a success rate of 99% for denomination and currency and 95-96% for the country. A noteworthy advantage of this approach is the option to flexibly adjust and train separate models for each characteristic and to incorporate them as layers of the corresponding branches of the neural network. A drawback is the significant increase in training time and resource use due to the multiple-fold increase in the number of network parameters; nonetheless, training the embedded models separately can decrease the amount of resources used concurrently during network training.</p><p>Thus, the proposed approach to recognizing circulating coins differs from classical methods using convolutional neural networks. Instead of only a narrow classification into one complex class, each class is separated into three distinct characteristics. This solves the problem of complex classes and expands the possibilities of using limited training data, providing more accurate recognition. The proposed branched architecture of a convolutional neural network with multiple outputs considers each characteristic separately, corrects the limitations of classical models, and makes it possible to work effectively with small and complex datasets.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">References</head></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Typical Convolutional Neural Network.</figDesc><graphic coords="2,73.25,382.72,448.50,179.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 : Figure 3 :</head><label>2, 3</label><figDesc>Figure 2: Neural Network Definition. Figure 3: Graph of the Neural Network.</figDesc><graphic coords="4,199.25,245.43,195.75,469.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Neural Network Training Results</figDesc><graphic coords="5,174.50,247.93,245.25,240.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Graph of Recognition Accuracy and Loss Function</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Testing results of the neural network</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Examples of neural networks with multiple outputs</figDesc><graphic coords="6,90.50,413.40,414.00,213.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Definition of a neural network with three outputs</figDesc><graphic coords="7,72.00,420.31,452.70,86.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: Graph of the neural network with three outputs</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: Training graphs for the denomination recognition model</figDesc><graphic coords="8,150.50,72.00,294.00,255.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: Testing results of the denomination recognition model</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 12 : Figure 13 :</head><label>12, 13</label><figDesc>Figure 12: Training graphs of the currency recognition model. Figure 13: Testing results of the currency recognition model.</figDesc><graphic coords="8,152.00,398.36,291.00,243.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 15 :</head><label>15</label><figDesc>Figure 15: Testing results of the country recognition model</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Figure 14 :</head><label>14</label><figDesc>Figure 14: Training graphs of the country recognition model</figDesc><graphic coords="9,149.75,72.00,295.00,266.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_13"><head>[ 1 ]</head><label>1</label><figDesc>[1] LeCun, Y., Bengio, Y. Convolutional Networks for Images, Speech, and Time-Series. 1995. 14 p. [2] Mishra, M. Convolutional Neural Networks, Explained. Towards Data Science, 2020. Available online: https://towardsdatascience.com/convolutional-neural-networks-explained-9cc5188c4939.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Distribution of coin images by European countries.</figDesc><table><row><cell>№</cell><cell>Country</cell><cell>Number of images</cell><cell>№</cell><cell>Country</cell><cell>Number of images</cell></row><row><cell>1</cell><cell>Austria</cell><cell>1678</cell><cell>19</cell><cell>Monaco</cell><cell>1236</cell></row><row><cell>2</cell><cell>Belgium</cell><cell>2122</cell><cell>20</cell><cell>Netherlands</cell><cell>2066</cell></row><row><cell>3</cell><cell>Bulgaria</cell><cell>414</cell><cell>21</cell><cell>Germany</cell><cell>6942</cell></row><row><cell>4</cell><cell>Vatican</cell><cell>1050</cell><cell>22</cell><cell>Norway</cell><cell>490</cell></row><row><cell>5</cell><cell>United Kingdom</cell><cell>3252</cell><cell>23</cell><cell>Poland</cell><cell>2166</cell></row><row><cell>6</cell><cell>Greece</cell><cell>1550</cell><cell>24</cell><cell>Portugal</cell><cell>1356</cell></row><row><cell>7</cell><cell>Denmark</cell><cell>976</cell><cell>25</cell><cell>Romania</cell><cell>572</cell></row><row><cell>8</cell><cell>Estonia</cell><cell>470</cell><cell>26</cell><cell>San Marino</cell><cell>1454</cell></row><row><cell>9</cell><cell>Ireland</cell><cell>1436</cell><cell>27</cell><cell>Slovakia</cell><cell>650</cell></row><row><cell>10</cell><cell>Iceland</cell><cell>558</cell><cell>28</cell><cell>Slovenia</cell><cell>674</cell></row><row><cell>11</cell><cell>Spain</cell><cell>2336</cell><cell>29</cell><cell>Hungary</cell><cell>1296</cell></row><row><cell>12</cell><cell>Italy</cell><cell>2324</cell><cell>30</cell><cell>Ukraine</cell><cell>2186</cell></row><row><cell>13</cell><cell>Cyprus</cell><cell>702</cell><cell>31</cell><cell>Finland</cell><cell>2280</cell></row><row><cell>14</cell><cell>Latvia</cell><cell>472</cell><cell>32</cell><cell>France</cell><cell>5280</cell></row><row><cell>15</cell><cell>Lithuania</cell><cell>438</cell><cell>33</cell><cell>Croatia</cell><cell>1452</cell></row><row><cell>16</cell><cell>Luxembourg</cell><cell>1942</cell><cell>34</cell><cell>Czech Republic</cell><cell>1380</cell></row><row><cell>17</cell><cell>Malta</cell><cell>926</cell><cell>35</cell><cell>Switzerland</cell><cell>3506</cell></row><row><cell>18</cell><cell>Moldova</cell><cell>686</cell><cell>36</cell><cell>Sweden</cell><cell>1054</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">Leonid</forename><surname>Yampolskyi</surname></persName>
		</author>
		<author>
			<persName><surname>Stefanovych</surname></persName>
		</author>
		<title level="m">Neurotechnologies and neurocomputer systems: a textbook</title>
				<editor>
			<persName><forename type="first">L</forename><forename type="middle">S</forename><surname>Yampolsky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">O</forename><forename type="middle">I</forename><surname>Lisovychenko</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Oliynyk</surname></persName>
		</editor>
		<imprint>
			<pubPlace>Kyiv</pubPlace>
			<publisher>Dorado-Druk</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page">576</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<ptr target="https://image-net.org/challenges/LSVRC/index.php" />
		<title level="m">Electronic resource</title>
				<imprint/>
	</monogr>
	<note>ImageNet</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Long-tailed visual recognition with deep models: A methodological survey and evaluation</title>
		<author>
			<persName><forename type="first">Yu</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Liuyu</forename><surname>Xiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Guiguang</forename><surname>Ding</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neucom.2022.08.031</idno>
		<ptr target="https://doi.org/10.1016/j.neucom.2022.08.031" />
	</analytic>
	<monogr>
		<title level="m">Guiguang Ding and other</title>
				<imprint>
			<date type="published" when="2022-10-14">14 October 2022</date>
			<biblScope unit="volume">509</biblScope>
			<biblScope unit="page" from="290" to="309" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Application of a convolutional neural network with multiple outputs for recognizing circulating coins</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">Y</forename><surname>Vaivala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">V</forename><surname>Tsiopa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">S</forename><surname>Shmidke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Collection of scientific papers of the Military Institute of Taras Shevchenko National University of Kyiv</title>
		<imprint>
			<biblScope unit="page" from="49" to="58" />
		</imprint>
		<respStmt>
			<orgName>Military Institute of Taras Shevchenko National University of Kyiv</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">ImageNet Large Scale Visual Recognition Challenge</title>
		<author>
			<persName><forename type="first">Olga</forename><surname>Russakovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jia</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hao</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jonathan</forename><surname>Krause</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sanjeev</forename><surname>Satheesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sean</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhiheng</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrej</forename><surname>Karpathy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aditya</forename><surname>Khosla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Bernstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><forename type="middle">C</forename><surname>Berg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Li</forename><surname>Fei-Fei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision (IJCV)</title>
		<imprint>
			<biblScope unit="volume">115</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="211" to="252" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">One evolutionary algorithm deceives humans and ten convolutional neural networks trained on ImageNet at image recognition / Ali Osman Topal, Raluca Chitic</title>
		<idno type="DOI">10.1016/j.asoc.2023.110397</idno>
		<ptr target="https://doi.org/10.1016/j.asoc.2023.110397" />
	</analytic>
	<monogr>
		<title level="j">Applied Soft Computing</title>
		<imprint>
			<biblScope unit="volume">143</biblScope>
			<biblScope unit="page">110397</biblScope>
			<date type="published" when="2023-08">August 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Knowledge driven weights estimation for large-scale few-shot image recognition</title>
		<author>
			<persName><forename type="first">Jingjing</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Linhai</forename><surname>Zhuo</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.epsr.2023.109241</idno>
		<ptr target="https://doi.org/10.1016/j.epsr.2023.109241" />
		<imprint>
			<date type="published" when="2023-10">October 2023</date>
			<biblScope unit="volume">142</biblScope>
			<biblScope unit="page">109668</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Very Deep Convolutional Networks for Large-Scale Image Recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>Simonyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zisserman</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/1409.1556" />
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page">14</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">High accuracy keyway angle identification using VGG16-based learning method / Soma Sarker, Sree Nirmillo Biswash Tushar</title>
		<idno type="DOI">10.1016/j.jmapro.2023.04.019</idno>
		<ptr target="https://doi.org/10.1016/j.jmapro.2023.04.019" />
	</analytic>
	<monogr>
		<title level="j">Journal of Manufacturing Processes</title>
		<imprint>
			<biblScope unit="volume">98</biblScope>
			<biblScope unit="page" from="223" to="233" />
			<date type="published" when="2023-07-28">28 July 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">MANet: A two-stage deep learning method for classification of COVID-19 from Chest X-ray images</title>
		<author>
			<persName><forename type="first">Yujia</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hak-Keung</forename><surname>Lam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Guangyu</forename><surname>Jia</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neucom.2021.03.034</idno>
		<ptr target="https://doi.org/10.1016/j.neucom.2021.03.034" />
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">443</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="96" to="105" />
			<date type="published" when="2021-07">July 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Multi-level residual network VGGNet for fish species classification / Eko Prasetyo, Nanik Suciati, Chastine Fatichah</title>
		<idno type="DOI">10.1016/j.jksuci.2021.05.015</idno>
		<ptr target="https://doi.org/10.1016/j.jksuci.2021.05.015" />
	</analytic>
	<monogr>
		<title level="j">Journal of King Saud University -Computer and Information Sciences</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="5286" to="5295" />
			<date type="published" when="2022-09">September 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Detecting Affect States Using VGG16, ResNet50 and SE-ResNet50 Networks</title>
		<author>
			<persName><forename type="first">D</forename><surname>Theckedath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Sedamkar</surname></persName>
		</author>
		<idno type="DOI">10.1007/s42979-020-0114-9</idno>
		<ptr target="https://doi.org/10.1007/s42979-020-0114-9" />
	</analytic>
	<monogr>
		<title level="j">SN COMPUT. SCI</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">79</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Deep Residual Learning for Image Recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">arXiv preprint arXiv:1512.03385</title>
		<ptr target="https://arxiv.org/abs/1512.03385" />
		<imprint>
			<biblScope unit="page">12</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A transfer convolutional neural network for fault diagnosis based on ResNet-50</title>
		<author>
			<persName><forename type="first">L</forename><surname>Wen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Gao</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00521-019-04097-w</idno>
		<ptr target="https://doi.org/10.1007/s00521-019-04097-w" />
	</analytic>
	<monogr>
		<title level="j">Neural Comput &amp; Applic</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="6111" to="6124" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Deep learning model for defect analysis in industry using casting images</title>
		<author>
			<persName><forename type="first">Rupesh</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vatsala</forename><surname>Anand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sheifali</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Deepika</forename><surname>Koundal</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.eswa.2023.120758</idno>
		<ptr target="https://doi.org/10.1016/j.eswa.2023.120758" />
	</analytic>
	<monogr>
		<title level="j">Deepika Koundal // Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">232</biblScope>
			<biblScope unit="page">120758</biblScope>
			<date type="published" when="2023-12-01">1 December 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A comparison between VGG16, VGG19 and ResNet50 architecture frameworks for Image Classification</title>
		<author>
			<persName><forename type="first">Sheldon</forename><surname>Mascarenhas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mukul</forename><surname>Agarwal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Disruptive Technologies for Multi-Disciplinary Research and Applications (CENTCON)</title>
				<imprint>
			<date type="published" when="2021">2021. 2021</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="96" to="99" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Classification Similarity Network Model for Image Fusion Using Resnet50 and GoogLeNet</title>
		<author>
			<persName><forename type="first">Siva</forename><surname>Satya Sreedhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Intelligent Automation &amp; Soft Computing</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="1331" to="1344" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Reminder of the First Paper on Transfer Learning in Neural Networks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Bozinovski</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page">12</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<ptr target="https://www.tensorflow.org/api_docs/python/tf?hl=en" />
		<title level="m">Module: tf | TensorFlow</title>
				<imprint>
			<biblScope unit="volume">14</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<ptr target="https://keras.io/api/" />
		<title level="m">Keras API</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Survey on Multi-Output Learning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">W</forename><surname>Tsang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y. -S</forename><surname>Ong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Shen</surname></persName>
		</author>
		<idno type="DOI">10.1109/TNNLS.2019.2945133</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks and Learning Systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="2409" to="2429" />
			<date type="published" when="2020-07">July 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">A survey on multi-output regression</title>
		<author>
			<persName><forename type="first">H</forename><surname>Borchani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Varando</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bielza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Larranaga</surname></persName>
		</author>
		<idno type="DOI">10.1002/widm.1157</idno>
	</analytic>
	<monogr>
		<title level="j">WIREs Data Mining Knowl Discov</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="216" to="233" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
