<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Convolutional Neural Networks for Feature Extraction and Automated Target Recognition in Synthetic Aperture Radar Images</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">John</forename><surname>Geldmacher</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Christopher</forename><surname>Yerkes</surname></persName>
						</author>
						<author>
							<persName><roleName>Member, IEEE</roleName><forename type="first">Ying</forename><surname>Zhao</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department">Department of Information Sciences</orgName>
								<orgName type="institution">Naval Postgraduate School</orgName>
								<address>
									<settlement>Monterey</settlement>
									<region>CA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">Oettinger School of Science and Technology Intelligence</orgName>
								<orgName type="institution">National Intelligence University</orgName>
								<address>
									<settlement>Bethesda</settlement>
									<region>MD</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Convolutional Neural Networks for Feature Extraction and Automated Target Recognition in Synthetic Aperture Radar Images</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">08275CF704553D3E2802CE261DB14580</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T20:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>k-nearest neighbor</term>
					<term>kNN</term>
					<term>deep learning</term>
					<term>Synthetic Aperture Radar images</term>
					<term>SAR images</term>
					<term>transfer learning</term>
					<term>VGG-16</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Advances in the development of deep neural networks and other machine learning algorithms combined with ever more powerful hardware and the huge amount of data available on the internet has led to a revolution in ML research and applications. These advances present massive potential and opportunity for military applications such as the analysis of Synthetic Aperture Radar (SAR) imagery. SAR imagery is a useful tool capable of capturing high resolution images regardless of cloud coverage and at night. However, there is a limited amount of publicly available SAR data to train a machine learning model. This paper shows how to successfully dissect, modify, and re-architect cross-domain object recognition models such as the VGG-16 model, transfer learning models from the ImageNet, and the k-nearest neighbor (kNN) classifier. The paper demonstrates that the combinations of these factors can significantly and effectively improve the automated object recognition (ATR) of SAR clean and noisy images. The paper shows a potentially inexpensive, accurate, transfer and unsurpervised learning SAR ATR system when data labels are scarce and data are noisy, simplifying the whole recognition for the tactical operation requirements in the area of SAR ATR.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>The analysis and classification of targets within imagery captured by aerial and space-based systems provides the US intelligence community and military geospatial intelligence (GEOINT) personnel with important insights into adversary force dispositions and intentions. It has also entered the mainstream thanks to openly available tools like Google Earth. The high resolution of space-based sensors and common use of overhead imagery in everyday life means with the exception of decoys and camouflage, an average person is now reasonably capable of identifying objects in electrooptical (EO) imagery. EO images are, however, limited by cloud coverage and daylight. About half of the time when a satellite in low earth orbit could image a target it will be night, necessitating the use of either an infrared (IR) or a synthetic aperture radar (SAR) sensor. Both IR and SAR images require a trained imagery analyst to reliably identify targets. A repetitive and time consuming task that currently requires human expertise, importantly and creatively, is an ideal problem for deep learning. Automated target recognition (ATR) seeks to reduce the total work load of analysts so that their effort can be spent on the more human-centric tasks like presenting and explaining intelligence to a decision maker. ATR is also intended to reduce the time from collection to exploitation by screening images at machine speeds rather than manually. SAR ATR is complicated by the available data to train and assess machine learning models. Unlike other image classification tasks, there is not a large and freely available amount of training data for researchers. Further, the data that is publicly available only covers a small fraction of the types of targets an effective SAR ATR system would be required to identify.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. ADVANTAGES AND CHALLENGES OF SYNTHETIC APERTURE RADAR (SAR) IMAGES, DATA DESCRIPTION, AND RELATED WORK</head><p>Synthetic Aperture Radar (SAR) is a radar mounted to a moving platform that uses the platform's motion to approximate the effect of a large antenna. The high resolution that can be achieved by creating a radar with an effective aperture much greater in size than is physically possible allows for radar returns to be processed into images similar to what can be achieved with a camera <ref type="bibr" target="#b19">[19]</ref>. SAR imagery provides an important tool for the United States Intelligence Community and military geospatial intelligence (GEOINT) personnel because of its all-weather, day/night collection capability. Additionally, some wavelengths that SAR imaging systems operate in have a degree of foliage and ground penetrating capability allowing for the detection of buried objects or objects under tree cover that would not be observable by other sensors such as EO sensors.</p><p>These important advantages of SAR imaging for GEOINT analysts do come with some significant drawbacks inherent to SAR images. Because SAR images are not true optical images, they are susceptible to noise generated by constructive and destructive interference between radar reflections that appear as bright or dark spots called "speckle" in the image <ref type="bibr" target="#b19">[19]</ref>. Also various materials and geometries will reflect the radar pulses differently, creating blobs or blurs that can obscure the objects physical dimensions. These issues, as well as problems caused The SLICY consisting of simple geometric shapes such as cylinders, edge reflectors, and corner reflectors which could be used for calibration of sensors or for modeling the propagation of radar reflections. Fig. <ref type="figure">1</ref> shows the example photographs and MSTAR images by class. It demonstrates the difficulties an imagery analyst would face when identifying targets in SAR imagery. The vehicles that are easily recognizable in photos become blurs in SAR images. Due to its public availability and ease of access for researchers, the data set has become the standard for SAR image Automated Target Recognition (ATR) classification research.</p><p>ATR in SAR imagery using "shallow" classification methods, which are traditional classifiers applied directly to SAR images with the breakthrough feature extraction layers as demonstrated in convolutional neural networks, produced generally good results. An SVM method proposed by <ref type="bibr" target="#b27">[27]</ref> achieved 91% accuracy in a five-class test <ref type="bibr" target="#b27">[27]</ref>, while a Bayesian classifier reported a 95.05% accuracy in a 10-classes test <ref type="bibr" target="#b12">[13]</ref>.</p><p>In recent years, the work on classification of SAR imagery has focused on the use of CNNs. In 2015, Morgan showed that a relatively small CNN could achieve 92.1% accuracy across the 10-class of the MSTAR dataset, roughly in line with the shallow methods previously explored. Morgan's method also showed that a network trained on nine of the MSTAR target classes could be retrained to include a tenth class 10-20 times faster than training a 10-class classifier from scratch. The ability to more easily adapt the model to changes in target sets represents an advantage over shallow classification techniques <ref type="bibr" target="#b10">[11]</ref>. 
This is especially valuable in a military ATR context given the fluid nature of military operations, where changes to the order of battle may necessitate updating a deployed ATR system. Malmgren-Hansen et al. explored transfer learning from a CNN pre-trained on simulated SAR images generated using ray-tracing software and detailed computer-aided design models of target systems. They showed that model performance was improved, especially in cases where the amount of training data was reduced <ref type="bibr" target="#b9">[10]</ref>. The technique of generating simulated SAR images for training could also be valuable in a military SAR ATR context, where an insufficient amount of training data may exist for some systems.</p><p>The use of a linear SVM as a replacement for the softmax activation typically used for multiclass classification in neural networks has been shown to be potentially more effective for some classification tasks <ref type="bibr" target="#b22">[22]</ref>. Transfer learning from ImageNet to MSTAR with an SVM classifier was explored by <ref type="bibr" target="#b0">[1]</ref> in 2018. Their methodology compared the performance of an SVM classifier trained on mid-level feature data extracted from multiple layers of the AlexNet, GoogLeNet, and VGG-16 neural networks, without retraining the feature-extracting network. Although they reported over 99% accuracy when classifying features extracted from mid-level convolutional layers of AlexNet, the performance of the SVM on features from fully-connected layers did not reach 80% accuracy. The best performance reported for the VGG-16 architecture was 92.3% from a mid-level convolutional layer, but only 49.2% and 46.3% from features extracted from the last two fully-connected layers <ref type="bibr" target="#b0">[1]</ref>.</p></div>
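<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the speckle phenomenon described above concrete, the following minimal NumPy sketch (our illustration, not code from the cited works) contrasts multiplicative speckle with ordinary additive noise; the random test image and the exponential single-look speckle model are illustrative assumptions.</p><p>
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((128, 128))          # stand-in for a SAR magnitude chip

# Speckle: interference between radar reflections acts multiplicatively;
# single-look SAR intensity speckle is commonly modeled as exponential(1).
speckle = rng.exponential(scale=1.0, size=image.shape)
speckled = np.clip(image * speckle, 0.0, 1.0)

# Additive Gaussian noise, by contrast, is independent of pixel brightness.
additive = np.clip(image + rng.normal(0.0, 0.1, image.shape), 0.0, 1.0)
</p></div>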
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. TRANSFER LEARNING AND FEATURE EXTRACTION</head><p>CNNs require a very large amount of data to train an accurate model and it is not uncommon for data sets with tens or even hundreds of thousands of images to be used when training a model. Transfer learning presents one possible solution when training a CNN on a limited data set by leveraging knowledge from a previously learned source task to aid in learning a new target task <ref type="bibr" target="#b13">[14]</ref>. In an image classification problem, transfer learning works by training a CNN on a data set that has a very large number of images and freezing the parameters for a certain number of layers and extracting midlevel feature representations before training further layers and the final classification layer <ref type="bibr" target="#b6">[7]</ref>.</p><p>ImageNet is an open source labeled image database organized in a branching hierarchical method of "synonym sets" or "synsets". For example, the "tank" synset is found in a tree going from vehicle to wheeled vehicle to self-propelled vehicle to armored vehicle to tank. The ImageNet database consists of over 14 million labeled images organized into over 21,000 synsets. Pre-trained ImageNets are often used in transfer learning.</p><p>Transfer learning is typically used when source and target tasks are not too dissimilar in order to avoid negative transfer. Negative transfer occurs when the features learned in the transfer learning method actually handicap the model performance <ref type="bibr" target="#b13">[14]</ref>. However, transfer learning becomes more useful when a curious phenomenon that many deep neural networks trained on natural images learn similar features across images from different domains.</p><p>Evidence shows that low and mid-level features could represent basic ATR features in images such as texture, corners, edges, and color blobs <ref type="bibr" target="#b8">[9]</ref>, and the low and mid-level neural network feature extraction function resembles the actual biological and human neurons' function. Low and mid-level of features extracted from CNNs are likely common across even dissimilar data sets. A transfer learning approach between different domains is feasible and ATR tasks are evidently successful in cross-domain applications <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b7">[8]</ref>. For example, the application of transfer learning to remote sensing target detection and classification was studied <ref type="bibr" target="#b16">[16]</ref>, which showed that a CNN classifier trained on a photographic data set could be retrained to perform remote sensing classification of ships at sea with a good performance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. SAR ATR: FEATURE EXTRACTION COMBINED WITH SHALLOW CLASSIFIERS</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Multistep Classifier</head><p>In practice and in cross-domain applications, very few people train an entire CNN from scratch because it is relatively rare to have a data set of sufficient size. For this reason, transfer learning with feature extraction combined with shallow classifiers are suitable choices for SAR images.</p><p>The network architecture employed in this paper was a modified VGG-16 architecture <ref type="bibr" target="#b18">[18]</ref>. The original VGG-16 architecture is shown in Fig. <ref type="figure" target="#fig_1">2</ref>. It consists of two or three linked convolutional/pooling blocks, three fully-connected layers, and a softmax activation in the end to determine the class label. The network employs a 3x3 kernel and a stride of one so that each pixel is the center of a convolutional step. The architecture has been modified to freeze model weights for the first two convolutional/pooling blocks (e.g., the first six layers). The model top has also been replaced with a fullyconnected layer, a dropout layer to mitigate overfitting, and two final fully-connected layers with a softmax activation for classification <ref type="bibr" target="#b16">[16]</ref>. This is also referred as a modified VGG-16 architecture or a VGG-16 architecture in this paper. It was initialized with the ImageNet weights and had the first  <ref type="figure">3</ref>. Our method, as in Fig. <ref type="figure">4</ref>, shows that the dense layer of 1024 features extracted are saved and used as the input to a shallow classifier.</p><p>In our experiment, the standard VGG-16 model is implemented in the Keras application program interface (API) with TensorFlow as the backend. The ImageNet weights available in the Keras are ported from the Visual Geometry Group at Oxford University that developed the VGG architecture for ILSVRC-2014 localization and classification tasks <ref type="bibr" target="#b18">[18]</ref>. We also use, Orange, which is an open source data science and machine learning toolkit that allows users to easily manipulate data through a graphical user interface. Orange has several built-in machine learning algorithms and simplifies the data management and pre-processing requirements to allow users to experiment with approaches to machine learning and data science <ref type="bibr" target="#b2">[3]</ref>.</p><p>A CNN is trained on the 2200 training images with a 20% validation split. The training and test data were both then run through the retrained neural network. The last fully connected layer before the neural network's output was saved as a 1024dimensional vector for each image as shown in Fig. <ref type="figure">3</ref>.</p><p>The extracted features run through the Orange workflow are pictured in Fig. <ref type="figure">5</ref>. The precision and recall are used to compare the base CNN performance, kNN, SVM, and random forest </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Results</head><p>The baseline model, which is the modified VGG-16, is shown in Fig. <ref type="figure">3</ref>. The modified VGG-16 without transfer learning and trained exclusively on MSTAR, resulted in an average precision and recall of 0.96. The same modified VGG-16 with full transfer learning of the convolutional layers with weights from ImageNet resulted in an average precision and recall of 0.88. Although, the transfer learning approach has the advantage of converging much more quickly than the CNN initialized with random weights, the full transfer learning of all convolutional weights and only retraining the CNN top did not match the performance of the non-transfer learning approach, suggesting some negative transfer occurs in the later convolutional layers. As shown in Fig. <ref type="figure">6</ref>, the best performed is the modified VGG-16 with partial transfer learning in Fig. <ref type="figure">3</ref> and resulted an average precision and recall of 0.98. The multistep classifier using a kNN classifier in Fig. <ref type="figure">4</ref> was able to match the best baseline performance with an average precision and call of 0.98, while the SVM and random forest classifiers fell short of the baseline model's performance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Adding Noise</head><p>As described before, ATR of SAR images are typically sensitive to the noise in the images. A CNN is known to be vulnerable to the noisy models both from environmental perturbation or adversarials' deliberated manipulations <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b21">[21]</ref>. To study the effect, random Gaussian noise with a noise factor of 0.2 was added to the images from the data set. Fig. <ref type="figure">7</ref> shows an example of an original SAR image with one added noise. The feature extraction from CNN and follow on shallow classification process was then repeated without retraining the base model in order to test the robustness of the model. The baseline model (i.e., modified VGG-16 model) was then retrained on the noisy images for 30 epochs and accuracy was compared.</p><p>Neither the neural network nor any of the multistep classifiers proved robust enough to handle the addition of random noise to the images. However, after retraining the kNN and SVM multistep classifiers perform better than the modified VGG-16 with partial transfer learning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>V. DISCUSSION</head><p>Performance on the SLICY class is of interest because it demonstrates the model's ability to discriminate an invalid target from a valid target. All other classes, with the exception of the D7, are former Soviet Union military equipment. The D7 is a Caterpillar bulldozer. Up-armored versions of the D7 and related equipment are often used in combat engineering roles. In a military context this means they are likely to be a valid target. As demonstrated by the high precision and recall in this class across all models, valid targets are very rarely classified as a SLICY (high precision) and the random objects are not being accepted as valid targets (high recall).</p><p>The performance of the kNN classifier is also notable since the use of an SVM for classification after feature extraction is previously studied; however, little research has been done on To further explore the relations of feature extraction, transfer learning, and kNN, we ran an additional experiment where we first extracted the transfer weights of the first six layers of the VGG-16 architecture from ImageNet. Since the flatten dimension is 32x32x128=131,072, as shown in Fig. <ref type="figure" target="#fig_4">9</ref>, we applied the unsupervised learning k-means algorithm to group the 131,072 dimension into 2048 clusters. The reasoning here is that the first six layers probably embed the best features (texture, corners, edges, and color blobs) that can be used for classification. Finally, We performed kNN and other supervised learning methods in Orange based on the 2048 dimensional train and test data. Fig. <ref type="figure" target="#fig_5">10</ref> shows the test data results from Orange for the VGG6-transfer-kmeans-kNN method with an average precision and recall of 0.93. The six layers of transfer learning together with k-means and kNN provide an inexpensive (without GPU or AWS, for exam-ple) and no supervised learning or no class labels required approach for SAR ATR. Recently, various learning-to-hash algorithms <ref type="bibr" target="#b24">[24]</ref> are used to approximate the exact nearest neighbors, which translates a supervised learning problem and kNN into an index/search problem <ref type="bibr" target="#b14">[15]</ref>, and simplifies the whole recognition for the tactical operation requirements in the area of SAR ATR. If there are no class labels of SAR available, our multistep classifiers with transfer learning and kNN can provide an unsupervised classification with a high accuracy and confidence to match an object which looks like another object seen before. Currently, the analysis community does not have an established standard for the percent of correctly identified targets by an imagery analyst. Instead the analysis relies on the user's experience and confidence in their own work, providing responses such as "possible main battle tank" or "likely BMP-2", and thus a direct comparison to expert-level performance is difficult to establish. Both the baseline model employing transfer learning and the shallow classifiers using a neural network as a feature extractor performed with a high degree of accuracy and would be valuable in an operational context as an aid to GEOINT analysts.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Fig. 1. Example Photographs and MSTAR Images by Class. 
Photograph of BMP-2 from https : //www.militaryf actory.com/armor/detail.asp?armor id = 50, photograph of BTR-70 from https : //military.wikia.org/wiki/BT R − 70. All other photographs and SAR images adapted from the MSTAR dataset.</figDesc><graphic coords="2,48.96,53.14,251.06,188.00" type="bitmap" /></figure>
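<div xmlns="http://www.tei-c.org/ns/1.0"><p>A hedged sketch of the VGG6-transfer-kmeans-kNN pipeline discussed in Section V: flatten the 32x32x128 output of the first six ImageNet-initialized VGG-16 layers, group the 131,072 feature dimensions into 2048 clusters with k-means, and classify the reduced vectors with kNN. Using MiniBatchKMeans, mean-pooling within clusters, and k=5 neighbors are our assumptions for tractability, not the paper's exact settings; train_images, test_images, and the label arrays are assumed preloaded.</p><p>
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.neighbors import KNeighborsClassifier
from tensorflow.keras import Model
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(128, 128, 3))
vgg6 = Model(base.input, base.get_layer("block2_pool").output)  # first six layers

# Flatten the 32x32x128 feature maps into 131,072-dimensional vectors.
train_flat = vgg6.predict(train_images).reshape(len(train_images), -1)
test_flat = vgg6.predict(test_images).reshape(len(test_images), -1)

# Cluster the feature dimensions (columns) into 2048 groups, then represent
# each image by the mean activation within each cluster.
km = MiniBatchKMeans(n_clusters=2048, random_state=0)
dim_cluster = km.fit_predict(train_flat.T)       # one cluster label per dimension

def pool_clusters(flat):
    out = np.zeros((flat.shape[0], 2048))
    for c in range(2048):
        members = np.flatnonzero(dim_cluster == c)
        if members.size:
            out[:, c] = flat[:, members].mean(axis=1)
    return out

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(pool_clusters(train_flat), train_labels)
print(knn.score(pool_clusters(test_flat), test_labels))
</p></div>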
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. The Original VGG-16 Architecture</figDesc><graphic coords="3,311.97,161.63,251.06,97.87" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .Fig. 6 .</head><label>46</label><figDesc>Fig. 4. The multistep classifier: Extract features from the VGG-16 and then apply a shallow classifier</figDesc><graphic coords="3,311.97,643.72,251.06,74.19" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 7 .Fig. 8 .</head><label>78</label><figDesc>Fig. 7. Examples of Noisy SAR images</figDesc><graphic coords="4,311.97,53.14,251.06,132.37" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 9 .</head><label>9</label><figDesc>Fig. 9. VGG-16 TensorFlow architecture layout</figDesc><graphic coords="5,99.17,53.14,150.63,210.77" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 10</head><label>10</label><figDesc>also shows the comparison of kNN with other supervised learning methods. The kNN method is the best among all the methods for the average precision and recall, where classification of the SLICY has a precision of 0.98 and a recall of 1. The future work is to test on more and different data sets (e.g., EO and IR data) to validate if the multistep methods can apply to cross-domain ATR problems. VI. CONCLUSION Cross-domain transfer learning from photographs to SAR imagery is effective for training a neural network both for feature extraction and classification. A retrained neural network can function as an efficient feature extractor for training a shallow classifier. kNN and SVM classifiers are potentially useful replacement for softmax activation in a neural network. Multistep classification methods using a shallow classifier trained on features extracted from a neural network, outperformed the base neural network when tested on noisy data and as the amount of training data decreases. This is valuable to improve CNN in a broader machine vision community by applying feature extraction followed by shallow classifiers for clean and noisy images. Transfer learning and kNN multistep classification methods could be significant in terms of setting up a robust image indexing system with minimum supervised training and learning required.</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Automatic target recognition in SAR images: Comparison between pretrained CNNs in a tranfer learning based approach</title>
		<author>
			<persName><forename type="first">M</forename><surname>Al Mufti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Al Hadhrami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Taha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Werghi</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICAIBD.2018.8396186</idno>
		<ptr target="https://doi.org/10.1109/ICAIBD.2018.8396186" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial Intelligence and Big Data (ICAIBD)</title>
				<imprint>
			<date type="published" when="2018">2018. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Deep learning and alternative learning strategies for retrospective real-world clinical data</title>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kingsbury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sohn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">B</forename><surname>Storlie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">B</forename><surname>Habermann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Naessens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Larson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<idno type="DOI">10.1038/s41746-019-0122-0</idno>
		<ptr target="https://doi.org/10.1038/s41746-019-0122-0" />
	</analytic>
	<monogr>
		<title level="j">Digital Medicine</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page">43</biblScope>
			<date type="published" when="2019">2019. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Orange: From Experimental Machine Learning to Interactive Data Mining</title>
		<author>
			<persName><forename type="first">J</forename><surname>Demšar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zupan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Leban</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Curk</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-540-30116-558</idno>
		<ptr target="https://doi.org/10.1007/978-3-540-30116-558" />
	</analytic>
	<monogr>
		<title level="m">Knowledge Discovery in Databases: PKDD 2004</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">3202</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">ImageNet: A large-scale hierarchical image database</title>
		<author>
			<persName><forename type="first">J</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Socher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L.-J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fei-Fei</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPR.2009.5206848</idno>
		<ptr target="https://doi.org/10.1109/CVPR.2009.5206848" />
	</analytic>
	<monogr>
		<title level="m">IEEE Conference on Computer Vision and Pattern Recogition</title>
				<imprint>
			<date type="published" when="2009">2009. 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Support Vector Machine-Introduction to Machine Learning Algorithms</title>
		<author>
			<persName><forename type="first">R</forename><surname>Gandhi</surname></persName>
		</author>
		<ptr target="https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47" />
	</analytic>
	<monogr>
		<title level="m">Towards Data Science</title>
				<imprint>
			<date type="published" when="2018-06-07">2018. June 7</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Deep learning</title>
		<author>
			<persName><forename type="first">I</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Courville</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>The MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">SAR image classification based on the multi-layer network and transfer learning of mid-level representations</title>
		<author>
			<persName><forename type="first">C</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>He</surname></persName>
		</author>
		<idno type="DOI">10.1109/IGARSS.2016.7729290</idno>
		<ptr target="https://doi.org/10.1109/IGARSS.2016.7729290" />
	</analytic>
	<monogr>
		<title level="m">IEEE International Geoscience and Remote Sensing Symposium</title>
				<imprint>
			<date type="published" when="2016">2016. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Deep learning enables robust assessment and selection of human blastocysts after in vitro fertilization</title>
		<author>
			<persName><forename type="first">P</forename><surname>Khosravi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kazemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Malmsten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Toschi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zisimopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sigaras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lavery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Cooper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hickman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Meseguer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Rosenwaks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Elemento</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Zaninovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Hajirasouliha</surname></persName>
		</author>
		<idno type="DOI">10.1038/s41746-019-0096-y</idno>
		<ptr target="https://doi.org/10.1038/s41746-019-0096-y" />
	</analytic>
	<monogr>
		<title level="j">Digital Medicine</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page">21</biblScope>
			<date type="published" when="2019">2019. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">From BoW to CNN: Two Decades of Texture Representation for Texture Classification</title>
		<author>
			<persName><forename type="first">L</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fieguth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chellappa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pietikäinen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">127</biblScope>
			<biblScope unit="page" from="74" to="109" />
			<date type="published" when="2019">2019. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Improving SAR Automatic Target Recognition Models With Transfer Learning From Simulated Data</title>
		<author>
			<persName><forename type="first">D</forename><surname>Malmgren-Hansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kusk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nielsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Enghold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Skriver</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Geoscience and Remote Sensing Letters</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="1484" to="1488" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Deep convolutional neural networks for ATR from SAR imagery</title>
		<author>
			<persName><forename type="first">D</forename><surname>Morgan</surname></persName>
		</author>
		<idno type="DOI">10.1117/12.2176558</idno>
		<ptr target="https://doi.org/10.1117/12.2176558" />
	</analytic>
	<monogr>
		<title level="j">Algorithms for Synthetic Aperture Radar Imagery</title>
		<imprint>
			<biblScope unit="volume">XXII</biblScope>
			<date type="published" when="2015">2015. 9475</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Neural Networks and Deep Learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Nielsen</surname></persName>
		</author>
		<ptr target="http://neuralnetworksanddeeplearning.com/" />
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>Determination Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">SAR ATR performance using a conditionally Gaussian model</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>O'sullivan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Devore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kedia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Millier</surname></persName>
		</author>
		<idno type="DOI">10.1109/7.913670</idno>
		<ptr target="https://doi.org/10.1109/7.913670" />
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Aerospace and Electronic Systems</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="91" to="108" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A Survey on transfer learning</title>
		<author>
			<persName><forename type="first">S</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Yang</surname></persName>
		</author>
		<idno type="DOI">10.1109/TKDE.2009.191</idno>
		<ptr target="https://doi.org/10.1109/TKDE.2009.191" />
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">10</biblScope>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title/>
		<author>
			<persName><forename type="first">T</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Boxberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Weichert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Navab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Marr</surname></persName>
		</author>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Multi-task learning of a deep k-nearest neighbour network for histopathological image classification and retrieval</title>
		<idno type="DOI">10.1101/661454</idno>
		<ptr target="https://doi.org/10.1101/661454" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Convolutional Neural Networks For Detection And Classification Of Maritime Vessels In Electro-Optical Satellite Imagery</title>
		<author>
			<persName><forename type="first">K</forename><surname>Rice</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
		<respStmt>
			<orgName>Naval Postgraduate School</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Master&apos;s thesis</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">ImageNet Large Scale Visual Recognition Challenge</title>
		<author>
			<persName><forename type="first">O</forename><surname>Russakovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Krause</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Satheesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Karpathy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kholsa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bernstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Berg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fei-Fei</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11263-015-0816-y</idno>
		<ptr target="https://doi.org/10.1007/s11263-015-0816-y" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">115</biblScope>
			<biblScope unit="page" from="211" to="252" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Very Deep Convolutional Networks For Large-Scale Image Recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>Simonyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zisserman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Learning Representations</title>
				<imprint>
			<date type="published" when="2015">2015. 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Skolnik</surname></persName>
		</author>
		<title level="m">Introduction to Radar Systems (Second)</title>
				<imprint>
			<publisher>McGraw-Hill, Inc</publisher>
			<date type="published" when="1981">1981</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Simple Introduction to Convolutional Neural Networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Stewart</surname></persName>
		</author>
		<ptr target="https://towardsdatascience.com/simple-introduction-to-convolutional-neural-networks-cdf8d3077bac" />
	</analytic>
	<monogr>
		<title level="m">Towards Data Science</title>
				<imprint>
			<date type="published" when="2019-02-26">2019. February 26</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">On the Robustness of Deep K-Nearest Neighbors</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sitawarin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wagner</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1903.08333v1</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Deep Learning using Linear Support Vector Machines</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Tang</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/1306.0239" />
	</analytic>
	<monogr>
		<title level="m">ICML Challenges in Representation Learning</title>
				<imprint>
			<date type="published" when="2013">2013. 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">SAR ATR: so what&apos;s the problem? An MSTAR perspective</title>
		<author>
			<persName><forename type="first">Timothy</forename><forename type="middle">D</forename><surname>Ross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeff</forename><forename type="middle">J</forename><surname>Bradley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lannie</forename><forename type="middle">J</forename><surname>Hudson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">P</forename><surname>O'Connor</surname></persName>
		</author>
		<idno type="DOI">10.1117/12.357681</idno>
		<ptr target="https://doi.org/10.1117/12.357681" />
		<imprint>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page">3721</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">A Survey on Learning to Hash</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sebe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Shen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">9</biblScope>
			<date type="published" when="2017">2017. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Deep Sparse Rectifier Neural Networks</title>
		<author>
			<persName><forename type="first">X</forename><surname>Glorot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bordes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v15/glorot11a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics</title>
				<editor>
			<persName><forename type="first">Geoffrey</forename><surname>Gordon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">David</forename><surname>Dunson</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Miroslav</forename><surname>Dudík</surname></persName>
		</editor>
		<meeting>the Fourteenth International Conference on Artificial Intelligence and Statistics</meeting>
		<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="315" to="323" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Understanding Random Forest: How the Algorithm Works and Why it Is So Effective</title>
		<author>
			<persName><forename type="first">T</forename><surname>Yiu</surname></persName>
		</author>
		<ptr target="https://towardsdatascience.com/understanding-random-forest-58381e0602d2f3c8" />
	</analytic>
	<monogr>
		<title level="m">Towards Data Science</title>
				<imprint>
			<date type="published" when="2019-06-12">2019. June 12</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Support vector machines for SAR automatic target recognition</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Principe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Aerospace and Electronic Systems</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="643" to="654" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
