<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Real-time tracking of multiple objects with locally adaptive correlation filters</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Ruchay</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Chelyabinsk State University</orgName>
								<address>
									<addrLine>129 Bratiev Kashirinykh st</addrLine>
									<postCode>454001</postCode>
									<settlement>Chelyabinsk</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">V</forename><forename type="middle">I</forename><surname>Kober</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Chelyabinsk State University</orgName>
								<address>
									<addrLine>129 Bratiev Kashirinykh st</addrLine>
									<postCode>454001</postCode>
									<settlement>Chelyabinsk</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">I</forename><forename type="middle">E</forename><surname>Chernoskulov</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Chelyabinsk State University</orgName>
								<address>
									<addrLine>129 Bratiev Kashirinykh st</addrLine>
									<postCode>454001</postCode>
									<settlement>Chelyabinsk</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Real-time tracking of multiple objects with locally adaptive correlation filters</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">A2C477560D888896450EBAD6066D0775</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T14:33+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>tracking</term>
					<term>locally adaptive filters</term>
					<term>correlation filters</term>
					<term>matching</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A tracking algorithm using locally adaptive correlation filtering is proposed. The algorithm is designed to track multiple objects with invariance to pose, occlusion, clutter, and illumination variations. It employs a prediction scheme and composite correlation filters, which are synthesized with the help of an iterative algorithm that optimizes the discrimination capability for each target. The filters are adapted online to target changes using information from current and past scene frames. Results obtained with the proposed algorithm on real-life scenes are presented and compared with those of state-of-the-art tracking methods in terms of detection efficiency, tracking accuracy, and processing speed.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Nowadays, object tracking is a widely investigated topic in engineering and computer vision <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. Video surveillance, vehicle navigation, human-computer interaction, and robotics are examples of tracking applications <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>. In tracking, objects are localized in the current frame automatically by applying a detection engine <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b14">15]</ref>. A main difficulty in object tracking is that the observed scene is commonly degraded by additive noise, the presence of a cluttered background, geometric modifications such as pose change and scaling, gesticulation, and nonuniform illumination. Additionally, eventual occlusions and real-time requirements are challenges that a modern tracking algorithm must address.</p><p>Correlation-based methods for object tracking are widely utilized as an attractive alternative to existing tracking algorithms <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18]</ref>. Correlation filters have a solid formal basis, and they can be easily implemented for real-time applications <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20]</ref>. Recognition methods involving template matching are not useful in some cases, for instance, when articulation changes global features such as the object outline. Consequently, conventional correlation filters without training may yield poor performance when recognizing objects with incomplete information <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b22">23]</ref>. An adaptive approach to filter design makes it possible to synthesize filters suited for object tracking <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25]</ref>.</p><p>In this work, we propose an algorithm for object tracking based on locally adaptive correlation filtering. The algorithm is able to carry out object tracking with high accuracy in a video without offline training. The objects are selected at the beginning of the algorithm. Afterwards, a composite correlation filter optimized for distortion-tolerant pattern recognition is designed to recognize the target in the next frame. The impulse responses of optimum correlation filters are used to synthesize composite filters for distortion-invariant object tracking. Two techniques are used to improve the detection performance: an adaptive procedure that achieves a prespecified performance for a typical scene background, and multiple composite filters (a bank of composite filters) when numerous views are available for training. The filter is dynamically adapted to each frame using information from current and past scene observations. The paper is organized as follows. Section 2 recalls the optimum composite filter design. Section 3 describes the suggested algorithm for object tracking by locally adaptive correlation filtering. Computer simulation results obtained with the proposed algorithm are presented and compared with common algorithms in terms of detection efficiency and location accuracy in Section 4. Finally, Section 5 presents our conclusions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Composite filter design using optimum correlation filters</head><p>We are interested in the design of a correlation filter which is able to recognize an object embedded into a disjoint background in a scene corrupted with additive noise. The designed filter should also be able to recognize geometrically distorted versions of the target. Let T = {t i (x, y); i = 1, . . . , N} be an image set containing geometrically distorted versions of the target to be recognized. The input scene is assumed to be composed of the target t(x, y) embedded into a disjoint background b(x, y) at unknown coordinates (τ x , τ y ), with the whole scene corrupted by additive noise n(x, y), as follows:</p><formula xml:id="formula_0">f (x, y) = t(x − τ x , y − τ y ) + b(x, y)w(x − τ x , y − τ y ) + n(x, y), <label>(1)</label></formula><p>where w(x, y) is a binary function defined as zero inside the target area and unity elsewhere. The optimum filter for detecting the target, in terms of the signal-to-noise ratio (SNR) and the minimum variance of the location error (LE) estimates, is the generalized matched filter (GMF) <ref type="bibr" target="#b25">[26]</ref>, whose frequency response is given by</p><formula xml:id="formula_2">H * (u, v) = [T (u, v) + µ b W(u, v)] / [P b (u, v) ⊗ |W(u, v)| 2 + P n (u, v)]. <label>(2)</label></formula><p>In (2), T (u, v) and W(u, v) are the Fourier transforms of t(x, y) and w(x, y), respectively; µ b is the mean value of the background b(x, y); P b (u, v) and P n (u, v) denote the power spectral densities of b 0 (x, y) = b(x, y) − µ b and n(x, y), respectively. The symbol ⊗ denotes convolution.</p><p>Let h i (x, y) be the impulse response of a GMF constructed for the ith available view of the target t i (x, y) in T . Let H = {h i (x, y); i = 1, . . . , N} be the set of all GMF impulse responses constructed for all training images t i (x, y). Additionally, let S = {s i (x, y); i = 1, . . . , M} be an image set containing M unwanted patterns to be rejected. We want to synthesize a filter capable of recognizing all target views in T and rejecting the false patterns in S , by combining the optimum filter templates contained in H and using only a single correlation operation. The required filter p(x, y) can be constructed as follows <ref type="bibr" target="#b25">[26]</ref>:</p><formula xml:id="formula_4">p(x, y) = N ∑ i=1 α i h i (x, y) + N+M ∑ i=N+1 α i s i (x, y),<label>(3)</label></formula><p>where the coefficients {α i ; i = 1, . . . , N + M} are chosen to satisfy prespecified output values for each pattern in U = T ∪ S . Using vector-matrix notation, we denote by R a matrix with N + M columns, where each column is the vector version of an element of H ∪ S . Let a = [α i ; i = 1, . . . , N + M] T be a vector of coefficients. Thus, (3) can be rewritten as</p><formula xml:id="formula_5">p = Ra.<label>(4)</label></formula><p>Let us denote by</p><formula xml:id="formula_6">u = [1, . . . , 1 (N ones), 0, . . . , 0 (M zeros)] T ,</formula><p>the vector of desired responses to the training patterns, and denote by Q the matrix whose columns are the elements of U. The response constraints can be expressed as</p><formula xml:id="formula_7">u = Q + p,<label>(5)</label></formula><p>where superscript + denotes conjugate transpose. Substituting (4) into (<ref type="formula" target="#formula_7">5</ref>), we obtain</p><formula xml:id="formula_8">u = Q + Ra.</formula><p>Thus, the solution for a is</p><formula xml:id="formula_9">a = [Q + R] −1 u.<label>(6)</label></formula><p>Finally, substituting (<ref type="formula" target="#formula_9">6</ref>) into (4), the solution for the composite filter is given by</p><formula xml:id="formula_10">p = R[Q + R] −1 u.<label>(7)</label></formula><p>Note that the value of the correlation peak when using the filter given in Eq. (7) is expected to be close to unity for true-class objects and close to zero for false-class objects.</p><p>The discrimination capability (DC) is a measure of the ability of the filter to distinguish a target from unwanted objects; it is defined by <ref type="bibr" target="#b25">[26]</ref> </p><formula xml:id="formula_11">DC = 1 − |c b | 2 / |c t | 2 ,</formula><p>where c b is the value of the maximum correlation sidelobe in the background area and c t is the value of the correlation peak generated by the target. A DC value close to unity indicates that the filter has a good capability to distinguish between the target and any false object. Negative values of the DC indicate that the filter is unable to detect the target. If the obtained DC is greater than a prespecified threshold (DC &gt; DC th ), the target is considered detected; otherwise, it is rejected.</p></div>
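As an illustration of Eqs. (4)-(7), the composite filter reduces to a few lines of linear algebra. The sketch below is a minimal NumPy version under the assumption that the GMF impulse responses and training patterns are already available as matrix columns; the names `R`, `Q`, and `composite_filter` are ours, not from the paper.

```python
import numpy as np

def composite_filter(R, Q, n_true, m_false):
    """Synthesize the composite filter p = R [Q^+ R]^(-1) u of Eq. (7).

    R : columns are the vectorized GMF impulse responses h_i followed by
        the vectorized rejection patterns s_i.
    Q : columns are the vectorized training patterns of U = T ∪ S.
    """
    # Desired responses: unity for the N true-class patterns,
    # zero for the M false-class patterns.
    u = np.concatenate([np.ones(n_true), np.zeros(m_false)])
    # a = [Q^+ R]^(-1) u  (Eq. 6); solve() avoids forming an explicit inverse.
    a = np.linalg.solve(Q.conj().T @ R, u)
    # p = R a  (Eq. 4), which is exactly Eq. (7).
    return R @ a
```

By construction, applying the response constraints of Eq. (5) to the resulting filter (i.e., computing Q⁺p) reproduces the prescribed unit and zero responses exactly, as long as Q⁺R is invertible.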
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Object tracking with locally adaptive correlation filtering</head><p>In this section we describe the proposed algorithm for object tracking based on composite correlation filtering. The proposed algorithm is robust to pose changes and appearance modifications of objects, as well as to the presence of scene noise, illumination changes, and target occlusions.</p><p>The algorithm starts with an initialization step where the objects are selected. Next, an optimum correlation filter for reliable detection and location estimation of the target is designed. Afterwards, a composite locally adaptive correlation filter is synthesized. The proposed algorithm incorporates an automatic re-initialization mechanism that reestablishes the tracking if it fails. The block diagram of the proposed algorithm is depicted in Fig. <ref type="figure" target="#fig_1">1</ref>. The detailed operation steps are explained below.</p><p>Step 1: For each object, select a small target t i (x, y) from a captured scene frame f i (x, y) containing the object to be tracked.</p><p>Step 2: Synthesize an optimum correlation filter h i (x, y) with (2) for reliable detection and location estimation of the target t i (x, y) in the observed local frame l i (x, y).</p><p>Step 3: Synthesize a composite locally adaptive correlation filter p i (x, y) as follows. First, detect and locate the target with the h i (x, y) filter in the observed local frame l i (x, y). If the obtained DC is greater than a prespecified threshold (DC &gt; DC rec ), the target is considered successfully detected, t i (x, y) is added to the set T , and the recursion stops. Otherwise, the pattern s i (x, y) corresponding to the false peak is added to the set S . Second, synthesize a composite filter p i (x, y) with the help of <ref type="formula" target="#formula_10">(7)</ref>. Third, detect and locate the target with the p i (x, y) filter in the observed local frame l i (x, y), repeating recursively until the condition DC &gt; DC rec is satisfied.</p><p>Step 4: Detect and locate the target in the observed local frame l i+1 (x, y) from a new scene frame f i+1 (x, y) with the p i (x, y) filter. The coordinates of the observed local frame l i+1 (x, y) are provided by a prediction process that analyzes the motion kinematics of the target. If the obtained DC is greater than a prespecified threshold (DC &gt; DC th ), the target is considered successfully detected, and the p i (x, y) filter is added to the bank B of composite correlation filters. Otherwise, the target is lost in the observed local frame l i+1 (x, y), and we recursively apply the filters from bank B until the condition DC &gt; DC con is satisfied. The filter from bank B satisfying DC &gt; DC con is then applied to the new scene frame. If the target remains lost in the observed local frame l i+1 (x, y) even with the filters from bank B, the target coordinates are set to those of the past scene frame f i (x, y), and we proceed to a new scene frame f i+2 (x, y).</p></div>
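The detection test used in Steps 3 and 4 (DC greater than a threshold) can be sketched as follows. This is our own minimal NumPy version, assuming circular correlation via the FFT and a small exclusion window around the correlation peak when searching for the maximum background sidelobe; peaks near frame borders and the choice of window size are simplifications of this sketch, not details from the paper.

```python
import numpy as np

def correlate(scene, h):
    # Circular cross-correlation of the scene with filter h via the FFT.
    H = np.fft.fft2(h, s=scene.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(H)))

def discrimination_capability(corr, exclude=4):
    # DC = 1 - |c_b|^2 / |c_t|^2, where c_t is the correlation peak and
    # c_b is the maximum sidelobe outside a window around the peak.
    r0, c0 = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    c_t = corr[r0, c0]
    mask = np.ones(corr.shape, dtype=bool)
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False
    c_b = np.abs(corr[mask]).max()
    return 1.0 - (c_b ** 2) / (np.abs(c_t) ** 2)

def detected(corr, dc_threshold):
    # The target is considered detected when DC exceeds the threshold.
    return discrimination_capability(corr) > dc_threshold
```

With a target planted in an otherwise empty frame, the correlation peak falls at the target coordinates and the DC approaches unity, matching the detection rule of Section 2.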
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Computer simulation</head><p>In this section, computer simulation results obtained with the proposed algorithm for object tracking are presented and compared with common algorithms in terms of detection efficiency, tracking accuracy, and speed of processing.</p><p>In order to evaluate the performance of our tracker, we conduct experiments on 100 challenging image sequences from the Object Tracking Benchmark (TB-100 database) <ref type="bibr" target="#b26">[27]</ref>. These sequences cover the most challenging situations in object tracking: Illumination Variation (IV), Scale Variation (SV), Occlusion (OCC), Deformation (DEF), Motion Blur (MB), Fast Motion (FM), In-Plane Rotation (IPR), Out-of-Plane Rotation (OPR), Out-of-View (OV), Background Clutters (BC), and Low Resolution (LR).</p><p>For comparison, we run three state-of-the-art algorithms with the same initial position of the target. The first tracking algorithm (SURF) <ref type="bibr" target="#b27">[28]</ref> is based on matching of local features and descriptors. The second tracking algorithm (STRUCK) predicts the target location change between frames on the basis of structured learning <ref type="bibr" target="#b28">[29]</ref>. The third, collaborative tracking algorithm (SCM) combines a sparsity-based discriminative classifier and a sparsity-based generative model <ref type="bibr" target="#b29">[30]</ref>. The work <ref type="bibr" target="#b26">[27]</ref> performed large-scale experiments to evaluate the performance of 33 recent object-tracking algorithms; the STRUCK and SCM algorithms perform much better than the others.</p><p>For evaluating detection efficiency we use the overlap score as an evaluation metric. Given a tracked bounding box r t and the ground-truth bounding box r 0 of a target object, the overlap score is defined as</p><formula xml:id="formula_12">S = ∥r t ∩ r 0 ∥ / ∥r t ∪ r 0 ∥ ,<label>(8)</label></formula><p>where ∩ and ∪ represent the intersection and union operators, respectively, and ∥ • ∥ denotes the number of pixels in a region. The average overlap score (AOS) can be used as the performance measure. In addition, the overlap scores can be used to determine whether an algorithm successfully tracks a target in a frame, by testing whether S is larger than a threshold of 0.5. We also evaluate the tracking algorithms using the average center location error (ACLE) over all image sequences from the database.</p><p>Table <ref type="table" target="#tab_1">1</ref> shows the average overlap score (AOS), the average center location error (ACLE), and the average processing time (APT) per scene for all the tracking algorithms with the overlap threshold of 0.5. The evaluation results show that our proposed algorithm is faster than the others and more accurate in terms of the average center location error. When an object moves fast, as on the FM subset, the proposed algorithm performs much better than the others. However, the proposed algorithm does not perform well on the IV, OCC, and OV subsets, owing to illumination variation and partial occlusion of the target. On the other subsets, the STRUCK, SCM, and proposed algorithms outperform the other state-of-the-art algorithms. Fig. <ref type="figure" target="#fig_2">2</ref> shows sample tracking results of the proposed algorithm, where the target objects are marked with red rectangles and the objects actually tracked by the proposed algorithm are marked with green rectangles. </p></div>
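The overlap score of Eq. (8) is the standard intersection-over-union of two rectangles. The sketch below computes it for axis-aligned boxes; the (x, y, width, height) box convention and the function names are our choices for illustration.

```python
def overlap_score(r_t, r_0):
    """Overlap score S = |r_t ∩ r_0| / |r_t ∪ r_0| of Eq. (8).

    Boxes are (x, y, width, height) in pixels.
    """
    x_t, y_t, w_t, h_t = r_t
    x_0, y_0, w_0, h_0 = r_0
    # Width and height of the intersection rectangle (zero if disjoint).
    iw = max(0, min(x_t + w_t, x_0 + w_0) - max(x_t, x_0))
    ih = max(0, min(y_t + h_t, y_0 + h_0) - max(y_t, y_0))
    inter = iw * ih
    union = w_t * h_t + w_0 * h_0 - inter
    return inter / union

def success(r_t, r_0, threshold=0.5):
    # A frame counts as successfully tracked when S exceeds the threshold.
    return overlap_score(r_t, r_0) > threshold
```

For example, two 2x2 boxes shifted horizontally by one pixel intersect in 2 pixels and cover 6 in union, giving S = 1/3, which falls below the 0.5 success threshold used in the evaluation.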
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>A tracking algorithm using locally adaptive correlation filtering is proposed. The algorithm is designed to track multiple objects with invariance to pose, partial occlusion, clutter, and illumination variations. The algorithm employs a prediction scheme and composite correlation filters. The filters are synthesized with the help of an iterative algorithm, which optimizes the discrimination capability for each target. The filters are adapted online to target changes using information from current and past scene frames. The evaluation results show that our proposed algorithm is faster than the others and more accurate in terms of the average center location error. On the majority of test subsets the proposed algorithm performs much better than the state-of-the-art algorithms.</p></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Block diagram of the proposed tracking algorithm based on locally adaptive correlation filtering.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Results of tracking by proposed algorithm.</figDesc><graphic coords="4,173.50,434.04,118.60,88.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 .</head><label>1</label><figDesc>Evaluation results of the state-of-the-art STRUCK, SCM, SURF and proposed algorithms by the average overlap score (AOS), the average center location errors (ACLE), and the Average Processing Time (APT).</figDesc><table><row><cell>Tracker</cell><cell>All</cell><cell>BC</cell><cell>DEF</cell><cell>FM</cell><cell>IPR</cell><cell>IV</cell><cell>LR</cell><cell>MB</cell><cell>OCC</cell><cell>OPR</cell><cell>OV</cell><cell>SV</cell><cell>APT</cell><cell>ACLE</cell></row><row><cell>Proposed</cell><cell>53.3</cell><cell>50.7</cell><cell>51.1</cell><cell>60.0</cell><cell>56.4</cell><cell>43.5</cell><cell>56.7</cell><cell>55.7</cell><cell>44.6</cell><cell>50.5</cell><cell>41.7</cell><cell>51.4</cell><cell>0.2005</cell><cell>68.8</cell></row><row><cell>STRUCK</cell><cell>57.5</cell><cell>59.3</cell><cell>52.4</cell><cell>55.6</cell><cell>57.0</cell><cell>59.0</cell><cell>59.1</cell><cell>59.9</cell><cell>55.9</cell><cell>57.3</cell><cell>58.9</cell><cell>57.8</cell><cell>0.2894</cell><cell>61.5</cell></row><row><cell>SCM</cell><cell>54.4</cell><cell>61.3</cell><cell>51.5</cell><cell>42.8</cell><cell>51.8</cell><cell>61.1</cell><cell>61.7</cell><cell>45.2</cell><cell>56.8</cell><cell>57.0</cell><cell>56.4</cell><cell>55.8</cell><cell>0.3122</cell><cell>64.8</cell></row><row><cell>SURF</cell><cell>35.2</cell><cell>37.4</cell><cell>25.8</cell><cell>41.6</cell><cell>39.7</cell><cell>37.3</cell><cell>23.0</cell><cell>45.4</cell><cell>36.0</cell><cell>34.8</cell><cell>46.7</cell><cell>33.0</cell><cell>0.1668</cell><cell>276.6</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Corresponding author. Tel.: +7-351-799-7292; E-mail address: ran@csu.ru</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_1">3rd International conference "Information Technology and Nanotechnology 2017"</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_2">Image Processing, Geoinformation Technology and Information Security / A.N. Ruchay, V.I. Kober, I.E. Chernoskulov</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was supported by the Russian Science Foundation, grant no. 15-19-10010.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Performance Evaluation Software: Moving Object Detection and Tracking in Videos</title>
		<author>
			<persName><forename type="first">B</forename><surname>Karasulu</surname></persName>
		</author>
		<editor>B. Karasulu, S. Korukoglu</editor>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>Springer</publisher>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Object tracking in images and videos</title>
		<author>
			<persName><forename type="first">S</forename><surname>Talmale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Talmale</surname></persName>
		</author>
		<idno>P. 15482-15486</idno>
	</analytic>
	<monogr>
		<title level="j">N.J. Janwe // International Journal Of Engineering And Computer Science</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Accurate three-dimensional pose recognition from monocular images using template matched filtering</title>
	</analytic>
	<monogr>
		<title level="m">Kenia Picos</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Victor</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Vitaly</forename><surname>Diaz-Ramirez</surname></persName>
		</editor>
		<editor>
			<persName><surname>Kober</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page">63102</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Conformal parameterization and curvature analysis for 3d facial recognition</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Echeagaray-Patron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>; -Jaramillo</surname></persName>
		</author>
		<author>
			<persName><surname>Kober</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Computational Science and Computational Intelligence (CSCI)</title>
				<editor>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Echeagaray-Patron</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Miramontes</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2015">2015. 2015</date>
			<biblScope unit="page" from="843" to="844" />
		</imprint>
	</monogr>
	<note>S. l. : s. n.</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">3d face recognition based on matching of facial surfaces</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Echeagaray-Patron</surname></persName>
		</author>
		<idno>P. 95980V-95980V-8</idno>
	</analytic>
	<monogr>
		<title level="m">Beatriz A. Echeagaray-Patron</title>
				<imprint>
			<publisher>Vitaly Kober</publisher>
			<date type="published" when="2015">9598. 2015</date>
		</imprint>
	</monogr>
	<note>s. n.</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A robust hog-based descriptor for pattern recognition</title>
		<author>
			<persName><forename type="first">J</forename><surname>Diaz-Escobar</surname></persName>
		</author>
		<idno>P. 99712A-99712A-7</idno>
	</analytic>
	<monogr>
		<title level="j">l</title>
		<imprint>
			<date type="published" when="2016">9971. 2016</date>
			<publisher>Vitaly Kober</publisher>
		</imprint>
	</monogr>
	<note>s. n.</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Text Detection in Digital Images Captured with Low Resolution Under Nonuniform Illumination Conditions</title>
		<author>
			<persName><forename type="first">J</forename><surname>Diaz-Escobar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Julia Diaz-Escobar, Vitaly Kober // Pattern Recognition: 8th Mexican Conference, MCPR 2016</title>
				<editor>et al.</editor>
		<meeting><address><addrLine>Guanajuato, Mexico</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2016">June 22-25, 2016. 2016</date>
			<biblScope unit="page" from="3" to="12" />
		</imprint>
	</monogr>
	<note>Proceedings / Ed</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">An efficient algorithm for matching of slam video sequences</title>
		<idno>P. 99712Z-99712Z-10</idno>
	</analytic>
	<monogr>
		<title level="j">S. l</title>
		<editor>Jose A. Gonzalez-Fraga, Victor H. Diaz-Ramirez, Vitaly Kober</editor>
		<imprint>
			<date type="published" when="2016">9971. 2016</date>
		</imprint>
	</monogr>
	<note>s. n.</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Effective indexing for face recognition</title>
		<idno>P. 997124- 997124-9</idno>
	</analytic>
	<monogr>
		<title level="j">l</title>
		<editor>al.</editor>
		<imprint>
			<date type="published" when="2016">9971. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Face recognition based on a matching algorithm with recursive calculation of oriented gradient histograms</title>
		<author>
			<persName><forename type="first">V</forename><surname>Vokhmintcev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">V</forename><surname>Sochenkov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Kuznetsov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Tikhonkikh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Doklady Mathematics</title>
		<imprint>
			<biblScope unit="volume">93</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="37" to="41" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A modified iterative closest point algorithm for shape registration</title>
		<author>
			<persName><forename type="first">D</forename><surname>Tihonkih</surname></persName>
		</author>
		<idno>P. 99712D-99712D-8</idno>
	</analytic>
	<monogr>
		<title level="m">/ Dmitrii Tihonkih, Artyom Makovetskii, Vladislav Kuznetsov</title>
				<imprint>
			<date type="published" when="2016">9971. 2016</date>
		</imprint>
	</monogr>
	<note>s. n.</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A Robust Tracking Algorithm Based on HOGs Descriptor</title>
		<author>
			<persName><forename type="first">D</forename><surname>Miramontes-Jaramillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Daniel Miramontes-Jaramillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><surname>Kober</surname></persName>
		</author>
		<author>
			<persName><surname>Ram</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Image Analysis, Computer Vision, and Applications: 19th Iberoamerican Congress, CIARP 2014</title>
				<editor>
			<persName><forename type="first">Eduardo</forename><surname>Bayro-Corrochano</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Edwin</forename><surname>Hancock</surname></persName>
		</editor>
		<meeting><address><addrLine>Puerto Vallarta, Mexico</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2014">November 2-5, 2014. 2014</date>
			<biblScope unit="page" from="54" to="61" />
		</imprint>
	</monogr>
	<note>Proceedings / Ed.</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Multiple objects tracking with hogs matching in circular windows</title>
		<author>
			<persName><forename type="first">D</forename><surname>Miramontes-Jaramillo</surname></persName>
		</author>
		<idno>P. 92171N-92171N-8</idno>
	</analytic>
	<monogr>
		<title level="j">l</title>
		<editor>Daniel Miramontes-Jaramillo, Vitaly Kober</editor>
		<imprint>
			<date type="published" when="2014">9217. 2014</date>
			<publisher>Victor H. Diaz-Ramirez</publisher>
		</imprint>
	</monogr>
	<note>s. n.</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Robust illumination-invariant tracking algorithm based on hogs</title>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Miramontes-Jaramillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Víctor</forename><forename type="middle">Hugo</forename><surname>Díaz-Ramírez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of SPIE</title>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="volume">9599</biblScope>
			<biblScope unit="page">95991Q</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Real-time tracking based on rotation-invariant descriptors</title>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Miramontes-Jaramillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Kober</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Computational Science and Computational Intelligence (CSCI)</title>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="543" to="546" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Objects tracking with adaptive correlation filters and kalman filtering</title>
		<author>
			<persName><forename type="first">Sergio</forename><forename type="middle">E</forename><surname>Ontiveros-Gallardo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Kober</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of SPIE</title>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="volume">9598</biblScope>
			<biblScope unit="page">95980X</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Correlation-based tracking using tunable training and kalman prediction</title>
		<author>
			<persName><forename type="first">Sergio</forename><forename type="middle">E</forename><surname>Ontiveros-Gallardo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Kober</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">9971</biblScope>
			<biblScope unit="page">997129</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A correlation-based algorithm for recognition and tracking of partially occluded objects</title>
		<author>
			<persName><forename type="first">Alexey</forename><surname>Ruchay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Kober</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of SPIE</title>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">9971</biblScope>
			<biblScope unit="page">99712R</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Facial recognition using composite correlation filters designed with multiobjective combinatorial optimization</title>
		<author>
			<persName><forename type="first">Andres</forename><surname>Cuevas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Victor</forename><forename type="middle">H</forename><surname>Diaz-Ramirez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Kober</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Leonardo</forename><surname>Trujillo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of SPIE</title>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">9217</biblScope>
			<biblScope unit="page">921710</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Adaptive composite filters for pattern recognition in nonoverlapping scenes using noisy training images</title>
		<author>
			<persName><forename type="first">Pablo</forename><forename type="middle">M</forename><surname>Aguilar-González</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Kober</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Víctor</forename><forename type="middle">Hugo</forename><surname>Díaz-Ramírez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition Letters</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="page" from="83" to="92" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Object Tracking in Nonuniform Illumination Using Space-Variant Correlation Filters</title>
		<author>
			<persName><forename type="first">Víctor</forename><forename type="middle">Hugo</forename><surname>Díaz-Ramírez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kenia</forename><surname>Picos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Kober</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 18th Iberoamerican Congress, CIARP 2013</title>
		<meeting><address><addrLine>Havana, Cuba</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">November 20-23, 2013</date>
			<biblScope unit="page" from="455" to="462" />
		</imprint>
	</monogr>
	<note>Proceedings, Part II / Ed. by José Ruiz-Shulcloper, Gabriella Sanniti di Baja</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Real-time tracking of multiple objects using adaptive correlation filters with complex constraints</title>
		<author>
			<persName><forename type="first">Victor</forename><forename type="middle">H</forename><surname>Diaz-Ramirez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Viridiana</forename><surname>Contreras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Kober</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kenia</forename><surname>Picos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optics Communications</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">309</biblScope>
			<biblScope unit="page" from="265" to="278" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Target tracking in nonuniform illumination conditions using locally adaptive correlation filters</title>
		<author>
			<persName><forename type="first">Victor</forename><forename type="middle">H</forename><surname>Diaz-Ramirez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kenia</forename><surname>Picos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optics Communications</title>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">323</biblScope>
			<biblScope unit="page" from="32" to="43" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Robust Face Tracking with Locally-Adaptive Correlation Filtering</title>
		<author>
			<persName><forename type="first">Leopoldo</forename><forename type="middle">N</forename><surname>Gaxiola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Víctor</forename><forename type="middle">Hugo</forename><surname>Díaz-Ramírez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Juan</forename><forename type="middle">J</forename><surname>Tapia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Image Analysis, Computer Vision, and Applications: 19th Iberoamerican Congress, CIARP 2014</title>
				<editor>
			<persName><forename type="first">Eduardo</forename><surname>Bayro-Corrochano</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Edwin</forename><surname>Hancock</surname></persName>
		</editor>
		<meeting><address><addrLine>Puerto Vallarta, Mexico</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2014">November 2-5, 2014</date>
			<biblScope unit="page" from="925" to="932" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Target tracking with dynamically adaptive correlation</title>
		<author>
			<persName><forename type="first">Leopoldo</forename><forename type="middle">N</forename><surname>Gaxiola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Victor</forename><forename type="middle">H</forename><surname>Diaz-Ramirez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Juan</forename><forename type="middle">J</forename><surname>Tapia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pascuala</forename><surname>Garcia-Martinez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optics Communications</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">365</biblScope>
			<biblScope unit="page" from="140" to="149" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Adaptive composite filters for pattern recognition in linearly degraded and noisy scenes</title>
		<author>
			<persName><forename type="first">Erika</forename><forename type="middle">M</forename><surname>Ramos-Michel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaly</forename><surname>Kober</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optical Engineering</title>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page">047204</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Object tracking benchmark</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="1834" to="1848" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Object detection and recognition by using enhanced speeded up robust feature</title>
		<author>
			<persName><forename type="first">T</forename><surname>Al-Asadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Obaid</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Science and Network Security</title>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="66" to="71" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Structured output tracking with kernels</title>
		<author>
			<persName><forename type="first">Sam</forename><surname>Hare</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amir</forename><surname>Saffari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Philip</forename><forename type="middle">H S</forename><surname>Torr</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Computer Vision (ICCV 2011)</title>
				<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="263" to="270" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Robust object tracking via sparsity-based collaborative model</title>
		<author>
			<persName><forename type="first">W</forename><surname>Zhong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</title>
				<meeting>the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)<address><addrLine>Washington, DC, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1838" to="1845" />
		</imprint>
	</monogr>
	<note>CVPR &apos;12</note>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">3rd International Conference "Information Technology and Nanotechnology" (ITNT)</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">E</forename><surname>Chernoskulov</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
		<respStmt>
			<orgName>Geoinformation Technology and Information Security</orgName>
		</respStmt>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
