<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A learning based feature point detector</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">A</forename><surname>Verichev</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Samara National Research University</orgName>
								<address>
									<addrLine>34 Moskovskoe Shosse</addrLine>
									<postCode>443086</postCode>
									<settlement>Samara</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A learning based feature point detector</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">31AB261F0E7C5E54AD62AD095CA5DA25</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T14:32+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>image feature points</term>
					<term>image feature points detector</term>
					<term>image moments</term>
					<term>image moment invariants</term>
					<term>machine learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We propose a learning-based image feature point detector. Instead of giving an explicit definition of the term feature point, we apply machine learning methods to infer it inductively from a representative training set. This allows for flexible tuning of the proposed detector to a specific problem that is described by a training set of desired responses. To increase the feature points' repeatability and robustness to various image transformations, the feature space of the learning algorithm includes raw image moments and image moment invariants. Experiments demonstrate high flexibility in tuning the detector to a specific task, acceptable repeatability of the feature points and robustness to various image transformations.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>A feature point is a piece of information relevant to solving a certain application-related computational task. Feature points find use in numerous applications such as image stitching, stereo correspondence, locating and tracking of moving objects, object detection and recognition, and others <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. The ubiquitous use of feature points is a direct consequence of their properties <ref type="bibr" target="#b2">[3]</ref>. Repeatability: given two images of the same object or scene, a high percentage of the features detected on the part of the scene visible in both images should be found in both images. Informativeness: the intensity patterns underlying the detected features should show a lot of variation. Locality: the features should be local, so as to reduce the probability of occlusion and to allow simple model approximations of the geometric and photometric deformations between two images. Quantity: the number of detected features should be sufficiently large, such that a reasonable number of features are detected even on small objects. Accuracy: the detected features should be accurately localized. Efficiency: the detection of features in a new image should be fast enough for time-critical applications.</p><p>Algorithms and methods that detect image feature points by making local decisions are called feature point detectors. An abundance of image feature point detectors is known, most of which are based on a certain criterion, a heuristic that implicitly defines what the term feature point constitutes. 
Generally these heuristics can be classified into three categories <ref type="bibr" target="#b3">[4]</ref>. Gradient-based: a majority of image feature point detectors are based on computing gradients of the intensity function, for example Förstner <ref type="bibr" target="#b4">[5]</ref>, Harris <ref type="bibr" target="#b5">[6]</ref> and Shi-Tomasi <ref type="bibr" target="#b6">[7]</ref>. Template-based: feature points are found by comparing the intensities of surrounding pixels with that of the centre pixel, as governed by some template; well-known template-based detectors are SUSAN <ref type="bibr" target="#b7">[8]</ref>, FAST <ref type="bibr" target="#b8">[9]</ref> and AGAST <ref type="bibr" target="#b9">[10]</ref>. Contour-based: a feature point is defined as the intersection point of two adjacent edge lines; examples are DoG-curve <ref type="bibr" target="#b10">[11]</ref> and ANDD <ref type="bibr" target="#b11">[12]</ref>.</p><p>However, formulating a heuristic for an image feature point detector requires a well-formed application-dependent definition of the term feature point, which in turn requires some level of expertise in the application domain. Moreover, a strictly stated criterion, although sharpening performance on the destined application, diminishes the detector's flexibility to adjust to a particular problem, which renders its usage outside that application moot.</p><p>The goal of this work is to dispense with defining the term feature point altogether and focus on the properties we wish the feature points to possess. With that goal in mind we resort to machine learning methods. Raw image moments and image moment invariants are used along with some other local characteristics of image points to form the feature space of a learning algorithm. The detector is trained to solve a specific problem on a relevant and carefully collected training set. 
This effectively defines the term feature point implicitly, since it is inductively inferred from the training examples.</p><p>The proposed method is described in full detail in section 2, along with the learning algorithm, its feature space and the procedures for collecting training and test sets. Evaluation criteria of a trained detector's performance and the results of experimental evaluation are described in section 3. We conclude with a discussion of these results.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Proposed method</head><p>The proposed learning-based feature point detector is based on the idea of transforming the detection task into a classification task, as suggested in <ref type="bibr" target="#b12">[13]</ref>, which boils down to training the detector's classifier on a set of the desired responses.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Feature space</head><p>The first step towards constructing our detector is to define the classifier's feature space, which is the vector space ℝ 15 . Each pixel of an image 𝐼[𝑥, 𝑦] is mapped to a vector in this feature space by a locally defined operator 𝑃 9×9 → ℝ 15 , where 𝑃 = {𝑛: 0 ≤ 𝑛 &lt; 256} is the set of intensities of a grayscale image. The features of the feature space are described below.</p><p>The first two features are the standard deviation of a standardized local area, 𝜙 1 , and the standard deviation divided by the norm of the local area, 𝜙 2 :</p><formula xml:id="formula_0">𝜙 1 = √((1/80) ∑ 4 𝑖=−4 ∑ 4 𝑗=−4 (1/𝑛 2 )(𝐼[𝑥 + 𝑖, 𝑦 + 𝑗] − 𝐼 ̅ ) 2 ), 𝜙 2 = 𝜙 1 /𝑛,<label>(1)</label></formula><p>where the norm 𝑛 and the local mean 𝐼 ̅ are defined as:</p><formula xml:id="formula_1">𝐼 ̅ = (1/81) ∑ 4 𝑖=−4 ∑ 4 𝑗=−4 𝐼[𝑥 + 𝑖, 𝑦 + 𝑗], 𝑛 = √(∑ 4 𝑖=−4 ∑ 4 𝑗=−4 (𝐼[𝑥 + 𝑖, 𝑦 + 𝑗]) 2 ).</formula><p>The use of these features is motivated by their sensitivity to uniform and textured areas.</p><p>The next four features are chosen to be central image moments of a local image area: 𝜙 𝑡+3 = 𝜇 𝑡𝑡 , 0 ≤ 𝑡 ≤ 3. The central moments are defined <ref type="bibr" target="#b13">[14]</ref>:</p><formula xml:id="formula_2">𝜇 𝑖𝑗 = ∑ 4 𝑘=−4 ∑ 4 𝑙=−4 𝑘 𝑖 • 𝑙 𝑗 • (1/81) 𝐼[𝑥 + 𝑘, 𝑦 + 𝑙].<label>(2)</label></formula><p>To induce invariance to rotation transformations the following Hu and Flusser invariant image moments are used <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16]</ref>:</p><formula xml:id="formula_4">𝜙 7 = 𝜇 20 + 𝜇 02 , 𝜙 8 = (𝜇 20 − 𝜇 02 ) 2 + 4𝜇 11 2 ,<label>(3)</label></formula><p>with the remaining invariants 𝜙 9 -𝜙 14 listed in the accompanying table. Moment calculation is computationally intensive and requires a large number of arithmetic operations. 
To reduce the number of arithmetic operations we apply a recursive method of moment calculation based on integer factorial polynomials <ref type="bibr" target="#b15">[16]</ref>.</p><p>The last feature, which characterizes the misalignment between the centre of the local area and its centre of mass, is defined as:</p><formula xml:id="formula_6">𝜙 15 = √((𝑥 𝑐 − 𝑥) 2 + (𝑦 𝑐 − 𝑦) 2 ),<label>(4)</label></formula><p>where 𝑥 𝑐 = 𝜇 10 /𝜇 00 and 𝑦 𝑐 = 𝜇 01 /𝜇 00 . The set of features 𝜙 𝑖 , 1 ≤ 𝑖 ≤ 15, defined by (<ref type="formula" target="#formula_0">1</ref>)-(<ref type="formula" target="#formula_6">4</ref>), with the usual addition and scalar multiplication operations, forms the feature vector space.</p></div>
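As a concrete illustration, the mapping from a 9×9 patch to the 15 features of equations (1)-(4) can be sketched in Python with NumPy. The function name and array layout are our own assumptions; the paper's own implementation computes the moments with a recursive factorial-polynomial method [16], which is not reproduced here.

```python
import numpy as np

def feature_vector(patch):
    """Sketch of the operator of section 2.1: map a 9x9 grayscale patch
    to the 15 features of eqs. (1)-(4). Assumes a non-zero patch (the
    norm n appears in denominators)."""
    I = np.asarray(patch, dtype=float)
    assert I.shape == (9, 9)
    n = np.sqrt((I ** 2).sum())            # norm of the local area
    I_bar = I.mean()                       # local mean, (1/81) * sum
    phi1 = np.sqrt(((I - I_bar) ** 2).sum() / (80.0 * n ** 2))
    phi2 = phi1 / n

    # central moments mu_ij over offsets k, l in [-4, 4], eq. (2)
    k = np.arange(-4, 5, dtype=float)[:, None]
    l = np.arange(-4, 5, dtype=float)[None, :]

    def mu(i, j):
        return ((k ** i) * (l ** j) * I / 81.0).sum()

    phi3_6 = [mu(t, t) for t in range(4)]  # mu_00, mu_11, mu_22, mu_33

    m20, m02, m11 = mu(2, 0), mu(0, 2), mu(1, 1)
    m30, m03, m21, m12 = mu(3, 0), mu(0, 3), mu(2, 1), mu(1, 2)
    # Hu/Flusser rotation invariants phi7..phi14 of eq. (3)
    phi7 = m20 + m02
    phi8 = (m20 - m02) ** 2 + 4 * m11 ** 2
    phi9 = (m30 - 3 * m12) ** 2 + (3 * m21 - m03) ** 2
    phi10 = (m30 + m12) ** 2 + (m21 + m03) ** 2
    phi11 = ((m30 - 3 * m12) * (m30 + m12)
             * ((m30 + m12) ** 2 - 3 * (m21 + m03) ** 2)
             + (3 * m21 - m03) * (m21 + m03)
             * (3 * (m30 + m12) ** 2 - (m21 + m03) ** 2))
    phi12 = ((m20 - m02) * ((m30 + m12) ** 2 - (m21 + m03) ** 2)
             + 4 * m11 * (m30 + m12) * (m21 + m03))
    phi13 = ((3 * m21 - m03) * (m30 + m12)
             * ((m30 + m12) ** 2 - 3 * (m21 + m03) ** 2)
             - (m30 - 3 * m12) * (m21 + m03)
             * (3 * (m30 + m12) ** 2 - (m21 + m03) ** 2))
    phi14 = (m11 * ((m30 + m12) ** 2 - (m03 + m21) ** 2)
             - (m20 - m02) * (m30 + m12) * (m03 + m21))

    # misalignment of patch centre and centre of mass, eq. (4);
    # coordinates are relative to the patch centre, so x = y = 0 here
    xc, yc = mu(1, 0) / mu(0, 0), mu(0, 1) / mu(0, 0)
    phi15 = np.sqrt(xc ** 2 + yc ** 2)

    return np.array([phi1, phi2, *phi3_6, phi7, phi8, phi9, phi10,
                     phi11, phi12, phi13, phi14, phi15])
```

On a uniform patch both the standard deviation 𝜙 1 and the misalignment 𝜙 15 vanish, which matches the motivation given above: these features react to textured, asymmetric neighbourhoods.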
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Tuning the detector 2.2.1. Collecting a training set</head><p>Tuning the detector requires a training set that consists of the detector's desired responses. Depending on the application, the set can be obtained in various ways: manually, involving domain experts; automatically, using well-known feature point detectors such as Harris or Canny; or by combining the two.</p><p>Whenever there is human involvement of any kind, it is inevitable that the training set contains so-called training noise <ref type="bibr" target="#b16">[17]</ref>. Besides, in a typical scenario the number of feature points is small compared to the number of other points. To alleviate these negative effects, the neighbouring points of the feature points can be considered feature points as well.</p><p>Provided an application requires a high level of robustness to certain transformations, the training set can be enlarged with so-called virtual examples <ref type="bibr" target="#b17">[18]</ref>. To this end, every image used to form the training set is subjected to some transformation. Since the parameters of that transformation are known, the elements of the original image can be mapped onto the transformed image, which makes it possible to extract feature vectors of the points of the transformed image that correspond to the feature points of the original image. These new feature vectors are the virtual examples that convey information about the various effects the transformation has on the feature vectors.</p></div>
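The coordinate-mapping step behind virtual examples can be sketched as follows, assuming for illustration a rotation about the image centre with a known angle. The names `map_feature_points` and `augment_training_set` are hypothetical, and only the mapping of point coordinates, not the subsequent patch extraction, is shown.

```python
import numpy as np

def map_feature_points(points, angle_deg, center):
    """Map feature-point coordinates of the original image onto a copy
    rotated by a known angle about `center` (section 2.2.1 sketch)."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    c = np.asarray(center, dtype=float)
    # row-vector convention: p maps to R (p - c) + c
    return (np.asarray(points, dtype=float) - c) @ R.T + c

def augment_training_set(image_points, angles, center, shape):
    """Collect virtual-example locations for a range of rotation angles,
    dropping points that fall outside the transformed image."""
    h, w = shape
    virtual = []
    for ang in angles:
        for p in map_feature_points(image_points, ang, center):
            if p[0] >= 0 and w > p[0] and p[1] >= 0 and h > p[1]:
                virtual.append((ang, tuple(p)))
    return virtual
```

Feature vectors extracted at the mapped locations then enter the training set alongside the originals, carrying the transformation's effect on the features.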
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.2.">Training a classifier</head><p>With a training set at hand we can pose and solve a supervised learning problem. Since the number of feature vectors in a training set is typically quite large, we chose to apply a nonparametric probability density estimation approach. Let 𝐷 = {(𝒙 𝑖 , 𝑦 𝑖 )} 𝑖=1 𝑁 denote the training set, where 𝒙 𝑖 is a feature vector and 𝑦 𝑖 ∈ {𝐶 1 , 𝐶 2 } is its label; 𝐶 1 corresponds to feature points and 𝐶 2 corresponds to the other points. Then, an estimate of the class-conditional probability density function is defined as follows:</p><formula xml:id="formula_8">𝑝(𝒙|𝐶 𝑖 ) ∝ ∑ 𝑁 𝑗=1 [𝑦 𝑗 = 𝐶 𝑖 ] 𝐾(‖𝒙 − 𝒙 𝑗 ‖/ℎ),<label>(5)</label></formula><p>where 𝐾 is a kernel function and ℎ is the kernel's width parameter. By Bayes' theorem:</p><formula xml:id="formula_10">𝑝(𝐶 𝑖 |𝒙) ∝ 𝑝(𝒙|𝐶 𝑖 ) • 𝜋 ̂𝑖,<label>(6)</label></formula><p>where 𝜋 ̂𝑖 is an estimate of the prior probability of the i th class:</p><formula xml:id="formula_11">𝜋 ̂𝑖 = (1/𝑁) ∑ 𝑁 𝑗=1 [𝑦 𝑗 = 𝐶 𝑖 ].<label>(7)</label></formula><p>Define the characteristic function of a feature point, 𝑙(𝒙):</p><formula xml:id="formula_13">𝑙(𝒙) = 𝑙𝑛(𝑝(𝐶 1 |𝒙)) − 𝑙𝑛(𝑝(𝐶 2 |𝒙)).<label>(8)</label></formula><p>In order to smooth the detector's response we filter the characteristic function 𝑙(𝒙) using a local peak filter. The peak filter suppresses non-maximal values in a local 3×3 neighbourhood of the point 𝒙:</p><formula xml:id="formula_14">𝑙 ̃(𝒙) = { 𝑙(𝒙), 𝑙(𝒙) &gt; 𝑙(𝒈) + 𝛿 ∀𝒈 ∈ 𝑊 ∖ {𝒙}; 0, 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒,<label>(9)</label></formula><p>where 𝑊 is the set of all feature vectors from the local neighbourhood and 𝛿 is some threshold. From (<ref type="formula" target="#formula_13">8</ref>) and (<ref type="formula" target="#formula_14">9</ref>) we infer the decision rule:</p><formula xml:id="formula_16">𝑦(𝒙) = { 𝐶 1 , 𝑙 ̃(𝒙) &gt; 𝑡 = 𝑙𝑛(𝜋 ̂2 /𝜋 ̂1 ); 𝐶 2 , 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒.<label>(10)</label></formula></div>
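A minimal sketch of the classifier described by equations (5)-(10), assuming a Gaussian kernel; the class and function names are ours, not the paper's. Densities are normalised per class so that the threshold t = ln(π̂2/π̂1) of the decision rule applies directly, and this brute-force version scans the whole training set for every query point.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2)

class ParzenDetector:
    """Sketch of section 2.2.2: Parzen density estimates (5), prior
    estimates (7), characteristic function (8), decision rule (10)."""

    def __init__(self, X, y, h=1.0):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.X1, self.X2 = X[y == 1], X[y == 2]   # C1: feature points
        self.h = h
        self.pi1 = len(self.X1) / len(X)          # eq. (7)
        self.pi2 = len(self.X2) / len(X)

    def _density(self, x, Xc):
        # p(x|Ci): mean kernel response over the class sample, eq. (5)
        d = np.linalg.norm(Xc - np.asarray(x, dtype=float), axis=1)
        return gaussian_kernel(d / self.h).mean()

    def characteristic(self, x):
        # l(x) in terms of class-conditional densities; the priors are
        # folded into the threshold of eq. (10)
        eps = 1e-300  # guards the logarithm far from the training data
        return (np.log(self._density(x, self.X1) + eps)
                - np.log(self._density(x, self.X2) + eps))

    def classify(self, x):
        t = np.log(self.pi2 / self.pi1)
        return 1 if self.characteristic(x) > t else 2

def peak_filter(L, delta=0.0):
    """3x3 non-maximum suppression of the characteristic map, eq. (9)."""
    out = np.zeros_like(L)
    for i in range(1, L.shape[0] - 1):
        for j in range(1, L.shape[1] - 1):
            nb = L[i - 1:i + 2, j - 1:j + 2].copy()
            nb[1, 1] = -np.inf  # compare against neighbours only
            if L[i, j] > nb.max() + delta:
                out[i, j] = L[i, j]
    return out
```

In practice the characteristic function is evaluated on every pixel's feature vector, the resulting map is peak-filtered, and the surviving responses above t are reported as feature points.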
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Experimental evaluation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Experimental setup</head><p>To experimentally evaluate the proposed detector we built a set of images. The set contains a series of 10 overlapping images of each of 6 different scenes, 60 images in total. Figure <ref type="figure" target="#fig_0">1</ref> shows three images of one of these scenes. Each of the 6 groups of images was split in the ratio 8:2 to form the training set 𝐷 and the test set 𝐶, respectively. We chose to use the Harris <ref type="bibr" target="#b5">[6]</ref> corner detector to detect feature points. The training set was enlarged with virtual examples as described in section 2.2.1; the transformations that were applied are described in section 3.3.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Evaluation of training accuracy</head><p>Let 𝑉 = {(𝒙 𝑖 , 𝑦 𝑖 )} 𝑖=1 𝑁 be a training or test set. The primary criterion of the detector's performance on the set 𝑉 is its accuracy:</p><formula xml:id="formula_18">𝐴(𝑉) = (1/𝑁) ∑ 𝑁 𝑖=1 [𝑦(𝒙 𝑖 ) = 𝑦 𝑖 ].<label>(11)</label></formula><p>Besides the accuracy, two more criteria are used: precision 𝑃 and recall 𝑅 <ref type="bibr" target="#b18">[19]</ref>. Precision is the fraction of retrieved instances that are relevant, while recall is the fraction of all relevant instances in the set that are retrieved.</p><p>Let 𝐹𝑃, 𝐹𝑁 and 𝑇𝑃 denote false positives, false negatives and true positives, respectively. Then,</p><formula xml:id="formula_20">𝐹𝑃(𝑉) = ∑ 𝑁 𝑖=1 [𝑦(𝒙 𝑖 ) = 𝐶 1 ] • [𝑦 𝑖 = 𝐶 2 ], 𝐹𝑁(𝑉) = ∑ 𝑁 𝑖=1 [𝑦(𝒙 𝑖 ) = 𝐶 2 ] • [𝑦 𝑖 = 𝐶 1 ], 𝑇𝑃(𝑉) = ∑ 𝑁 𝑖=1 [𝑦(𝒙 𝑖 ) = 𝐶 1 ] • [𝑦 𝑖 = 𝐶 1 ].<label>(12)</label></formula><p>Precision and recall are defined as:</p><formula xml:id="formula_22">𝑃(𝑉) = 𝑇𝑃/(𝑇𝑃 + 𝐹𝑃), 𝑅(𝑉) = 𝑇𝑃/(𝑇𝑃 + 𝐹𝑁).<label>(13)</label></formula><p>The proposed detector was first trained on the training set. Accuracy, precision and recall were then evaluated on the training set 𝐷 and the test set 𝐶. The results are shown in table <ref type="table" target="#tab_1">1</ref>. Taking into account the fairly large size of the sets, the data suggest an adequate quality of training.</p></div>
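The criteria of equations (11)-(13) can be sketched as a small helper; the function name and the label encoding (1 for C1, 2 for C2) are our own assumptions.

```python
import numpy as np

def detector_scores(y_pred, y_true, C1=1, C2=2):
    """Accuracy (11), precision and recall (13) from the counts (12)."""
    y_pred = np.asarray(y_pred)
    y_true = np.asarray(y_true)
    tp = np.sum(np.logical_and(y_pred == C1, y_true == C1))
    fp = np.sum(np.logical_and(y_pred == C1, y_true == C2))
    fn = np.sum(np.logical_and(y_pred == C2, y_true == C1))
    acc = float(np.mean(y_pred == y_true))
    prec = float(tp / (tp + fp)) if tp + fp else 0.0
    rec = float(tp / (tp + fn)) if tp + fn else 0.0
    return acc, prec, rec
```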
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Repeatability evaluation of the detector</head><p>As mentioned in the introduction, repeatability is one of the most important properties of feature points. Besides being important, repeatability allows for an objective and quantitative evaluation. Hence, we used repeatability to evaluate the performance of the proposed detector.</p><p>The procedure for repeatability evaluation is outlined below. An original image is used to find a set of feature points 𝑃 𝑜 . The original image is transformed by one of the transformations listed below. The transformed image is used to find a set of feature points 𝑃 𝑡 . Since the parameters of the transformation are known, the coordinates of the points 𝑃 𝑜 of the original image can be mapped onto the transformed image, forming a set 𝑃 𝑚 . The sets 𝑃 𝑚 and 𝑃 𝑡 are then matched: two points 𝑎 ∈ 𝑃 𝑚 and 𝑏 ∈ 𝑃 𝑡 are considered equal if 𝑎 ∈ 𝑉 𝜀 (𝑏), 𝜀 = 2.0. As a result of this matching we find three sets of points: 𝑃 𝑇𝑃 , the points found on both images; 𝑃 𝐹𝑃 , the new points that were not found on the original image but were found on the transformed image; and 𝑃 𝐹𝑁 , the missed points that were found on the original image but not on the transformed image. The cardinalities of these sets are, respectively, the 𝑇𝑃, 𝐹𝑃 and 𝐹𝑁 values of the proposed detector. These values are used to calculate the detector's accuracy, precision and recall.</p><p>To evaluate repeatability we used the following transformations of the images: rotation by angle 𝛼, −45° ≤ 𝛼 ≤ 45°, in increments of 3°; sub-pixel shift by 𝑡, 0.25 ≤ 𝑡 ≤ 0.75, in increments of 0.05; and scaling by 𝑠, 0.5 ≤ 𝑠 &lt; 1.5, in increments of 0.1.</p><p>The results of the repeatability evaluation of the detector trained on the training set 𝐷 are shown in fig. <ref type="figure" target="#fig_2">2</ref>. The detector's performance can be considered adequate on rotated images for −9° &lt; 𝛼 &lt; 9° and on scaled images for 0.8 ≤ 𝑠 ≤ 1.2. The performance on shifted images is high for the whole range of the parameter 𝑡.</p></div>
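The matching step of the repeatability procedure can be sketched as a greedy nearest-neighbour assignment with the tolerance ε = 2.0; the function name and the greedy strategy (rather than an optimal assignment) are our own assumptions.

```python
import numpy as np

def match_point_sets(P_m, P_t, eps=2.0):
    """Match mapped points P_m against detected points P_t (section 3.3):
    a pair matches when the distance between them is below eps.
    Returns the TP, FP, FN counts of the detector."""
    P_m = [np.asarray(p, dtype=float) for p in P_m]
    P_t = [np.asarray(p, dtype=float) for p in P_t]
    unmatched = list(range(len(P_t)))  # indices of P_t not yet matched
    tp = 0
    for a in P_m:
        best, best_d = None, eps
        for idx in unmatched:
            d = np.linalg.norm(a - P_t[idx])
            if best_d > d:
                best, best_d = idx, d
        if best is not None:
            unmatched.remove(best)
            tp += 1
    fn = len(P_m) - tp   # points lost under the transformation
    fp = len(unmatched)  # new points on the transformed image
    return tp, fp, fn
```

The counts returned here feed directly into the accuracy, precision and recall criteria of section 3.2.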
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>In this paper we investigated a relatively new approach to feature point detection. Contrary to the standard approach to the problem, we did not formulate a heuristics-based definition of the term feature point but instead inferred it inductively using machine learning methods and a representative training set. This enabled us to tune the proposed detector to the specific problem at hand. The results of the experimental evaluation of the detector verify that such tuning is in fact possible. Moreover, the detector showed acceptable robustness to rotation and scaling transformations, and high robustness to sub-pixel shift transformations. This suggests great potential for the learning-based approach to feature point detection.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Example images of a scene.</figDesc><graphic coords="4,229.99,33.52,141.90,141.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Repeatability of the detector evaluated for various transformations: (a) rotation, (b) scaling, (c) translation.</figDesc><graphic coords="5,211.77,340.62,232.87,185.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The remaining Hu and Flusser moment invariants 𝜙 9 -𝜙 14 of eq. (3).</figDesc><table><row><cell>𝜙 9 = (𝜇 30 − 3𝜇 12 ) 2 + (3𝜇 21 − 𝜇 03 ) 2 ,</cell></row><row><cell>𝜙 10 = (𝜇 30 + 𝜇 12 ) 2 + (𝜇 21 + 𝜇 03 ) 2 ,</cell></row><row><cell>𝜙 11 = (𝜇 30 − 3𝜇 12 )(𝜇 30 + 𝜇 12 )[(𝜇 30 + 𝜇 12 ) 2 − 3(𝜇 21 + 𝜇 03 ) 2 ] + (3𝜇 21 − 𝜇 03 )(𝜇 21 + 𝜇 03 )[3(𝜇 30 + 𝜇 12 ) 2 − (𝜇 21 + 𝜇 03 ) 2 ],</cell></row><row><cell>𝜙 12 = (𝜇 20 − 𝜇 02 )[(𝜇 30 + 𝜇 12 ) 2 − (𝜇 21 + 𝜇 03 ) 2 ] + 4𝜇 11 (𝜇 30 + 𝜇 12 )(𝜇 21 + 𝜇 03 ),</cell></row><row><cell>𝜙 13 = (3𝜇 21 − 𝜇 03 )(𝜇 30 + 𝜇 12 )[(𝜇 30 + 𝜇 12 ) 2 − 3(𝜇 21 + 𝜇 03 ) 2 ] − (𝜇 30 − 3𝜇 12 )(𝜇 21 + 𝜇 03 )[3(𝜇 30 + 𝜇 12 ) 2 − (𝜇 21 + 𝜇 03 ) 2 ],</cell></row><row><cell>𝜙 14 = 𝜇 11 [(𝜇 30 + 𝜇 12 ) 2 − (𝜇 03 + 𝜇 21 ) 2 ] − (𝜇 20 − 𝜇 02 )(𝜇 30 + 𝜇 12 )(𝜇 03 + 𝜇 21 ).</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 .</head><label>1</label><figDesc>Accuracy, precision and recall of the trained detector .</figDesc><table><row><cell></cell><cell>𝐴(𝐷)</cell><cell>𝑃(𝐷)</cell><cell>𝑅(𝐷)</cell></row><row><cell>Training set, 𝐷</cell><cell>0.997</cell><cell>0.905</cell><cell>0.960</cell></row><row><cell>Test set, 𝐶</cell><cell>0.9766</cell><cell>0.730</cell><cell>0.580</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">rd International conference "Information Technology and Nanotechnology 2017"</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>The reported study was funded by RFBR according to the research project №17-29-03190-ofi.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Computer Vision: Algorithms and Applications</title>
		<author>
			<persName><forename type="first">R</forename><surname>Szeliski</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>Springer</publisher>
			<biblScope unit="page">812</biblScope>
			<pubPlace>London</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Anomaly detection for hyperspectral imaginary</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Y</forename><surname>Denisova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Myasnikov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="287" to="296" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Local invariant feature detectors: a survey</title>
		<author>
			<persName><forename type="first">T</forename><surname>Tuytelaars</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mikolajczyk</surname></persName>
		</author>
		<idno type="DOI">10.1561/0600000017</idno>
	</analytic>
	<monogr>
		<title level="j">Foundations and trends® in computer graphics and vision</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="177" to="280" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A survey of recent advances in visual feature detection</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Ding</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neucom.2014.08.003</idno>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">149</biblScope>
			<biblScope unit="page" from="736" to="751" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A fast operator for detection and precise location of distinct points, corners and centres of circular features</title>
		<author>
			<persName><forename type="first">W</forename><surname>Förstner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gülch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ISPRS intercommission conference on fast processing of photogrammetric data</title>
				<meeting>ISPRS intercommission conference on fast processing of photogrammetric data</meeting>
		<imprint>
			<date type="published" when="1987">1987</date>
			<biblScope unit="page" from="281" to="305" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A combined corner and edge detector</title>
		<author>
			<persName><forename type="first">C</forename><surname>Harris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stephens</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Alvey vision conference</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">50</biblScope>
			<biblScope unit="page" from="147" to="151" />
			<date type="published" when="1988">1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Good features to track</title>
		<author>
			<persName><forename type="first">J</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tomasi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. Intl Conf. on Comp. Vis. and Pat. Recog (CVPR</title>
				<meeting>Intl Conf. on Comp. Vis. and Pat. Recog (CVPR</meeting>
		<imprint>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="593" to="600" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">SUSAN -A new approach to low level image processing</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Brady</surname></persName>
		</author>
		<idno type="DOI">10.1023/A:1007963824710</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="45" to="78" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Machine learning for high-speed corner detection</title>
		<author>
			<persName><forename type="first">E</forename><surname>Rosten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Drummond</surname></persName>
		</author>
		<idno type="DOI">10.1007/11744023_34</idno>
	</analytic>
	<monogr>
		<title level="j">European Conference on Computer Vision</title>
		<imprint>
			<biblScope unit="page" from="430" to="443" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Adaptive and generic corner detection based on the accelerated segment test</title>
		<author>
			<persName><forename type="first">E</forename><surname>Mair</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D</forename><surname>Hager</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Burschka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Suppa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hirzinger</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-15552-9_14</idno>
	</analytic>
	<monogr>
		<title level="j">European conference on Computer Vision</title>
		<imprint>
			<biblScope unit="page" from="183" to="196" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Corner detection based on gradient correlation matrices of planar curves</title>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">B</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Ling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Lovell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yang</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.patcog.2009.10.017</idno>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1207" to="1223" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Corner detection and classification using anisotropic directional derivative representations</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">L</forename><surname>Shui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">C</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.1109/TIP.2013.2259834</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="3204" to="3218" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Fast Method for Local Image Processing and Analysis</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Chernov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Myasnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Sergeyev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition and Image Analysis</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="237" to="238" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Pattern recognition by affine moment invariants</title>
		<author>
			<persName><forename type="first">J</forename><surname>Flusser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Suk</surname></persName>
		</author>
		<idno type="DOI">10.1016/0031-3203(93)90098-H</idno>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="167" to="174" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Visual pattern recognition by moment invariants</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Hu</surname></persName>
		</author>
		<idno type="DOI">10.1109/TIT.1962.1057692</idno>
	</analytic>
	<monogr>
		<title level="j">IRE transactions on information theory</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="179" to="187" />
			<date type="published" when="1962">1962</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Constructing efficient linear local features in image processing and analysis problems</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Myasnikov</surname></persName>
		</author>
		<idno type="DOI">10.1134/S0005117910030124</idno>
	</analytic>
	<monogr>
		<title level="j">Automation and Remote Control</title>
		<imprint>
			<biblScope unit="volume">72</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="514" to="527" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Machine learning: a Bayesian and optimization perspective</title>
		<author>
			<persName><forename type="first">S</forename><surname>Theodoridis</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>Academic Press</publisher>
			<biblScope unit="page">1062</biblScope>
			<pubPlace>San Diego</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Introduction to machine learning</title>
		<author>
			<persName><forename type="first">E</forename><surname>Alpaydin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>MIT press</publisher>
			<biblScope unit="page">584</biblScope>
			<pubPlace>Cambridge</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Elements of statistical learning: data mining, inference, and prediction</title>
		<author>
			<persName><forename type="first">T</forename><surname>Hastie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tibshirani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Friedman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>Springer</publisher>
			<biblScope unit="page">745</biblScope>
			<pubPlace>London</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
