<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Dimension Reduction Methods for Iris Recognition</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Pavel</forename><surname>Moravec</surname></persName>
							<email>pavel.moravec@vsb.cz</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of Computer Science</orgName>
								<orgName type="department" key="dep2">FEECS</orgName>
								<orgName type="institution">VŠB - Technical University of Ostrava</orgName>
								<address>
									<addrLine>17. listopadu 15</addrLine>
									<postCode>708 33</postCode>
									<settlement>Ostrava-Poruba</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Václav</forename><surname>Snášel</surname></persName>
							<email>vaclav.snasel@vsb.cz</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of Computer Science</orgName>
								<orgName type="department" key="dep2">FEECS</orgName>
								<orgName type="institution">VŠB - Technical University of Ostrava</orgName>
								<address>
									<addrLine>17. listopadu 15</addrLine>
									<postCode>708 33</postCode>
									<settlement>Ostrava-Poruba</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Dimension Reduction Methods for Iris Recognition</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">95C98CA1063BC2D8C7588F9DA00E4E73</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T06:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>SVD</term>
					<term>FastMap</term>
					<term>information retrieval</term>
					<term>SDD</term>
					<term>iris recognition</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper, we compare the performance of several dimension reduction techniques, namely LSI, FastMap, and SDD, in iris recognition. We compare the quality of these methods by both their visual impact and the quality of the generated "eigenirises".</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Methods of human identification using biometric features like fingerprint, hand geometry, face, voice and iris are widely studied.</p><p>A human eye iris has a unique structure, given by pigmentation spots, furrows and other tiny features, which is stable throughout life. It is possible to scan an iris without physical contact, even when the subject wears contact lenses or eyeglasses. The iris is hard to forge, which makes it a suitable object for the identification of people. Iris recognition seems to be more reliable than other biometric techniques like face recognition <ref type="bibr" target="#b2">[3]</ref>. Iris biometric systems for both private and public use have been designed and deployed commercially by NCR, Oki, IriScan, BT, US Sandia Labs, and others.</p><p>In this paper, we use Pentland's approach to image retrieval: the feature vectors are the image vectors of complete images, of dimension width × height of the image <ref type="bibr" target="#b12">[13]</ref>.</p><p>To fight the high dimension of the image vectors, we can extract several features which represent the image and concatenate them into a feature vector. Feature extraction methods can use different aspects of images as the features, typically color features (histograms), shape features (moments, contours, templates), texture features and others (e.g. eigenvectors). Such methods either use heuristics based on the known properties of the image collection, or are fully automatic and may use the original image vectors as an input.</p><p>In this paper we concentrate on the latter category: feature extraction methods which use known dimension reduction techniques and clustering for automatic feature extraction.</p><p>Singular value decomposition (SVD) has already been successfully used for automatic feature extraction. 
In the case of a face collection (and analogously for our iris test data), the base vectors can be interpreted as images describing some common characteristics of several faces. These base vectors are often called eigenfaces. For a detailed description of eigenfaces, see <ref type="bibr" target="#b12">[13]</ref>.</p><p>K. Richta, J. Pokorný, V. Snášel (Eds.): Dateso 2009, pp. 80-89, ISBN 978-80-01-04323-3.</p><p>However, SVD is not suitable for huge collections and is computationally expensive, so other methods of dimension reduction were proposed. We test two of them: Semi-Discrete Decomposition (SDD) and FastMap.</p><p>Recently, Toeplitz matrix minimal eigenvalues have also been playing a role in image description and feature extraction <ref type="bibr" target="#b9">[10]</ref>. This approach presents a method for the reduction of image feature points, as it deals with the geometric relation between the points rather than their geometric position <ref type="bibr" target="#b10">[11]</ref>. This can reduce the number of characteristic or feature points from n to roughly n/10, decreasing the computation level and hence the task time.</p><p>The rest of this paper is organized as follows. The second section explains the dimension reduction methods used. In the third section, we briefly describe the qualitative measures used for the evaluation of our tests. In the next section, we supply results of tests of several methods on our iris collection. In the conclusions we give ideas for future research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Dimension Reduction Methods</head><p>We used three methods of dimension reduction for our comparison: Singular Value Decomposition, Semi-Discrete Decomposition and FastMap, which are briefly described below.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Singular Value Decomposition</head><p>SVD <ref type="bibr" target="#b1">[2]</ref> is an algebraic extension of the classical vector model. It is similar to Principal Component Analysis (PCA), which was originally used for the generation of eigenfaces. Informally, SVD discovers significant properties and represents the images as linear combinations of the base vectors. Moreover, the base vectors are ordered according to their significance for the reconstructed image, which allows us to consider only the first k base vectors as important (the remaining ones are interpreted as "noise" and discarded). Furthermore, SVD is often referred to as more successful in recall when compared to querying whole image vectors <ref type="bibr" target="#b1">[2]</ref>.</p><p>Formally, we decompose the matrix of images A by singular value decomposition (SVD), calculating the singular values and singular vectors of A.</p><p>We have a matrix A, which is an n × m rank-r matrix, and values σ 1 , . . . , σ r calculated from the eigenvalues (λ i ) of the matrix AA T as σ i = √ λ i . Based on them, we can calculate column-orthonormal matrices U = (u 1 , . . . , u r ) and V = (v 1 , . . . , v r ), where U T U = I n and V T V = I m , and a diagonal matrix Σ = diag(σ 1 , . . . , σ r ), where</p><formula xml:id="formula_0">σ i &gt; 0, σ i ≥ σ i+1</formula><p>The decomposition A = U ΣV T is called the singular value decomposition of the matrix A, and the numbers σ 1 , . . . , σ r are the singular values of the matrix A. Columns of U (resp. V ) are called left (resp. right) singular vectors of the matrix A. Now we have a decomposition of the original matrix of images A. We get r nonzero singular values, where r is the rank of the original matrix A. 
Because the singular values usually fall quickly, we can take only the k greatest singular values with the corresponding singular vector coordinates and create a k-reduced singular value decomposition of A.</p><p>Let us have k (0 &lt; k &lt; r) and the singular value decomposition of A:</p><formula xml:id="formula_1">A = U ΣV T = (U k U 0 ) diag(Σ k , Σ 0 ) (V k V 0 ) T ≈ A k = U k Σ k V T k</formula><p>We call A k = U k Σ k V T k the k-reduced singular value decomposition (rank-k SVD). Instead of the matrix A k , a matrix of image vectors in the reduced space, D k = Σ k V T k , is used in SVD as the representation of the image collection. The image vectors (columns in D k ) are now represented as points in k-dimensional space (the feature space). For an illustration of rank-k SVD see Figure <ref type="figure" target="#fig_0">1</ref>. Rank-k SVD is the best rank-k approximation of the original matrix A. This means that any other rank-k decomposition will increase the approximation error, calculated as the sum of squares (Frobenius norm) of the error matrix B = A − A k . However, it does not imply that we could not obtain better precision and recall values with a different approximation.</p><p>To execute a query Q in the reduced space, we create a reduced query vector q k = U T k q (another approach is to use a matrix</p><formula xml:id="formula_2">D ′ k = V T k instead of D k , and q ′ k = Σ −1 k U T k q</formula><p>). Instead of evaluating A against q, the matrix D k is evaluated against q k (or q ′ k ). Once computed, the SVD reflects only the decomposition of the original matrix of images. If several hundred images have to be added to an existing decomposition (folding-in), the decomposition may become inaccurate. Because the recalculation of the SVD is expensive, it is impractical to recalculate the SVD every time images are inserted. SVD-Updating <ref type="bibr" target="#b1">[2]</ref> is a partial solution, but the error slightly increases with the number of inserted images. 
If the updates happen frequently, the recalculation of the SVD may be needed sooner or later.</p></div>
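The rank-k SVD pipeline described above (decompose A, keep the k greatest singular values, represent images by D k = Σ k V T k , and project a query as q k = U T k q) can be sketched in Python with NumPy. This is a minimal illustration with toy matrix sizes, not the paper's actual collection:

```python
import numpy as np

# Hypothetical image matrix: each column is one flattened image.
rng = np.random.default_rng(0)
A = rng.random((1024, 40))  # 40 images, 1024 pixels each

k = 8
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

# Rank-k approximation A_k = U_k S_k V_k^T and the reduced
# representation D_k = S_k V_k^T used instead of A.
A_k = U_k @ S_k @ Vt_k
D_k = S_k @ Vt_k            # k x m matrix of image coordinates

# A query image q is projected into the feature space as q_k = U_k^T q.
q = A[:, 0]
q_k = U_k.T @ q

# The best match is the column of D_k closest to q_k (L2 distance).
best = int(np.argmin(np.linalg.norm(D_k - q_k[:, None], axis=0)))
```

Since the query here is column 0 of A itself, its projection coincides with column 0 of D k and the search returns that image.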
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">SDD Method</head><p>Semidiscrete decomposition (SDD) is another LSI method, proposed for text retrieval in <ref type="bibr" target="#b7">[8]</ref>. As mentioned earlier, the rank-k SVD method (called truncated SVD by the authors of semidiscrete decomposition) produces dense matrices U and V , so the resulting required storage may be even larger than the one needed by the original term-by-document matrix A (see Fig. <ref type="figure">2</ref>, "rank-k" SDD: X k and Y k contain values from {−1, 0, 1}, while D k contains nonnegative real values).</p><p>To improve the required storage size and query time, the semidiscrete decomposition was defined as</p><formula xml:id="formula_4">A ≈ A k = X k D k Y T k</formula><p>where each entry of X k and Y k is constrained to the set ϕ = {−1, 0, 1}, and the matrix D k is a diagonal matrix with positive coordinates.</p><p>The SDD does not reproduce A exactly, even if k = n, but it uses very little storage with respect to the observed accuracy of the approximation. A rank-k SDD (although from the mathematical standpoint it is a sum of rank-1 matrices) requires the storage of k(m + n) values from the set {−1, 0, 1} and k scalars. The scalars need only single precision because the algorithm is self-correcting. The SDD approximation is formed iteratively.</p><p>The optimal choice of the triplets (x i , d i , y i ) for a given k can be determined using a greedy algorithm, based on the residual R k = A − A k−1 (where A 0 is a zero matrix).</p></div>
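The greedy construction of the triplets (x i , d i , y i ) from the residual can be sketched as follows. This follows the alternating scheme of Kolda and O'Leary, but the helper names and the deterministic starting vector are our own choices, so treat it as an illustrative sketch rather than the reference algorithm:

```python
import numpy as np

def best_ternary(s):
    # Best x in {-1,0,1}^n maximizing (x.s)^2 / (x.x): take the signs of
    # the largest |s_i|; the optimal support is a prefix of the sorted
    # magnitudes, so only prefixes need testing.
    order = np.argsort(-np.abs(s))
    prefix = np.cumsum(np.abs(s)[order])
    j = int(np.argmax(prefix ** 2 / np.arange(1, len(s) + 1)))
    x = np.zeros_like(s, dtype=float)
    x[order[: j + 1]] = np.sign(s[order[: j + 1]])
    return x

def sdd(A, k, inner=10):
    # Greedy rank-k semidiscrete decomposition A ~ X diag(d) Y^T.
    m, n = A.shape
    X, Y, d = np.zeros((m, k)), np.zeros((n, k)), np.zeros(k)
    R = A.astype(float).copy()          # residual R_k = A - A_{k-1}
    for t in range(k):
        y = np.zeros(n)
        y[t % n] = 1.0                  # simple deterministic start
        x = best_ternary(R @ y)
        for _ in range(inner):          # alternate x- and y-updates
            y = best_ternary(R.T @ x)
            x = best_ternary(R @ y)
        denom = (x @ x) * (y @ y)
        if denom == 0:                  # degenerate residual; stop early
            break
        d[t] = (x @ R @ y) / denom      # optimal scalar for this x, y pair
        R -= d[t] * np.outer(x, y)
        X[:, t], Y[:, t] = x, y
    return X, d, Y

A = np.arange(12.0).reshape(3, 4)
X, d, Y = sdd(A, k=3)
A_k = X @ np.diag(d) @ Y.T              # approximate reconstruction
```

Each stored factor column holds only values from {−1, 0, 1}, matching the storage argument above.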
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">FastMap</head><p>FastMap <ref type="bibr" target="#b5">[6]</ref> is a pivot-based technique of dimension reduction, suitable for Euclidean spaces.</p><p>In the first step, it chooses two points (feature vectors) from the matrix A which should be the most distant pair for the reduced dimension being calculated. Because it would be expensive to calculate the distances between all points, it uses the following heuristic (all chosen points are image vectors from the matrix A):</p><p>1. A random point c 0 is chosen. 2. The point b i having maximal distance δ(c i , b i ) from c i is chosen, and based on it we select the point a i with maximal distance δ(b i , a i ). 3. We iteratively repeat step 2 with c i+1 = a i (the authors suggest 5 iterations). 4. The points a = a i and b = b i from the last iteration are the pivots for the next reduction step.</p><p>In the second step (having the two pivots a, b), we use the cosine law to calculate the position of each point on the line joining a and b. The coordinate x i of a point p i is calculated as</p><formula xml:id="formula_5">x i = (δ 2 (a, p i ) + δ 2 (a, b) − δ 2 (b, p i )) / (2δ(a, b))</formula><p>and the distance function for the next reduction step is modified to</p><formula xml:id="formula_6">δ ′2 (p ′ i , p ′ j ) = δ 2 (p i , p j ) − (x i − x j ) 2</formula><p>The pivots in the original and reduced space are recorded, and when we need to process a query, it is projected using only the second step of the projection algorithm. Once projected, we can again use the original distance function in the reduced space.</p></div>
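The two steps above (pivot heuristic, then the cosine-law coordinate with the modified distance δ′) can be sketched in Python. Function names are ours, and for reproducibility we fix the initial point instead of choosing it randomly:

```python
import numpy as np

def dist2(P, coords, i, j):
    # delta'^2: original squared L2 distance minus the squared differences
    # of the coordinates already assigned (clamped at 0 against rounding).
    d2 = np.sum((P[i] - P[j]) ** 2) - np.sum((coords[i] - coords[j]) ** 2)
    return max(float(d2), 0.0)

def choose_pivots(P, coords, iters=5):
    # Heuristic pivot search: repeatedly jump to the farthest point
    # (the authors suggest 5 iterations).
    c = 0                               # fixed start instead of random
    for _ in range(iters):
        b = max(range(len(P)), key=lambda j: dist2(P, coords, c, j))
        a = max(range(len(P)), key=lambda j: dist2(P, coords, b, j))
        c = a
    return a, b

def fastmap(P, k, iters=5):
    n = len(P)
    coords = np.zeros((n, k))
    for dim in range(k):
        a, b = choose_pivots(P, coords, iters)
        dab2 = dist2(P, coords, a, b)
        if dab2 == 0.0:                 # all points coincide; stop
            break
        dab = np.sqrt(dab2)
        # Cosine-law coordinate of each point on the line joining a and b.
        coords[:, dim] = [
            (dist2(P, coords, a, i) + dab2 - dist2(P, coords, b, i)) / (2 * dab)
            for i in range(n)
        ]
    return coords

rng = np.random.default_rng(0)
P = rng.random((20, 5))                 # 20 points in 5 dimensions
C = fastmap(P, k=2)                     # their 2-D FastMap image
```

In a Euclidean input space the assigned coordinates behave like projections onto successive orthogonal directions, so reduced distances never exceed the originals.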
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Qualitative Measures of Retrieval Methods</head><p>Since we need a universal evaluation of any retrieval method, we use some measures to determine the quality of such a method. In the case of Information Retrieval, we usually use two such measures: precision and recall. Both are calculated from the number of objects relevant to the query, Rel (determined by some other method, e.g. by manual annotation of the given collection), and the number of retrieved objects, Ret. Based on these numbers we define precision (P ) as the fraction of retrieved relevant objects in all retrieved objects and recall (R) as the fraction of retrieved relevant objects in all relevant objects. Formally:</p><formula xml:id="formula_7">P = |Rel ∩ Ret| / |Ret| and R = |Rel ∩ Ret| / |Rel|</formula><p>So we can say that recall and precision denote, respectively, the completeness of retrieval and the purity of retrieval. Unfortunately, it has been observed that as recall increases, precision usually decreases <ref type="bibr" target="#b11">[12]</ref>. This means that when it is necessary to retrieve more relevant objects, a higher percentage of irrelevant objects will probably be obtained, too.</p><p>For the overall comparison of precision and recall across different methods on a given collection, we usually use the technique of rank lists <ref type="bibr" target="#b0">[1]</ref>: we first sort the distances from smallest to greatest and then go down through the list and calculate the maximal precision for the recall closest to each of the 11 standard recall levels (0.0, 0.1, 0.2, . . . , 0.9, 1.0). If we are unable to calculate the precision on the i-th recall level, we take the maximal precision for the recalls between the (i − 1)-th and (i + 1)-th levels. From all levels, we calculate the mean average precision, which is a single-value characteristic of the overall precision-recall ratio.</p></div>
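These definitions translate directly into code. The sketch below computes precision/recall and an 11-level interpolated average precision for a single rank list; note it uses the common "maximal precision at or above each recall level" interpolation rather than the paper's closest-level variant, and the function names are ours:

```python
def precision_recall(retrieved, relevant):
    # P = |Rel ∩ Ret| / |Ret|,  R = |Rel ∩ Ret| / |Rel|
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

def interpolated_ap(ranked, relevant):
    # Walk down the rank list, record (recall, precision) at every hit,
    # then take the maximal precision at or beyond each of the 11
    # standard recall levels 0.0, 0.1, ..., 1.0 and average them.
    relevant = set(relevant)
    points = []                          # (recall, precision) pairs
    hits = 0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / rank))
    levels = [lvl / 10 for lvl in range(11)]
    interp = [
        max((p for r, p in points if r >= lvl), default=0.0)
        for lvl in levels
    ]
    return sum(interp) / len(interp)

p, r = precision_recall([1, 2], relevant={1, 3})      # (0.5, 0.5)
ap = interpolated_ap([1, 2, 3, 4], relevant={1, 3})   # 28/33 ~ 0.85
```

Averaging this per-query value over all queries gives the mean average precision used in the tables below.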
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experimental Results</head><p>For testing the different methods, we used an iris collection consisting of 384 irises. The irises were scanned by a TOPCON optical device connected to a CCD Sony camera. The acquired digitized image is an RGB image of size 576 × 768 pixels. Only the red (R) component of the RGB image has been used in our experiments, because recognition based on it appears to be more reliable than recognition based on the green or blue components or on irises converted to grayscale first. This is in accord with <ref type="bibr" target="#b3">[4]</ref>, where near-infrared wavelengths are used anyway. We have excluded one of the three irises for each eye for further querying (so that the query iris would not be included in the collection and skew the query results), which led to a collection of 256 irises of 64 people.</p><p>An example of several irises from the collection is shown in Figure <ref type="figure" target="#fig_1">3</ref>; the first 12 query vectors are shown in Figure <ref type="figure">8</ref>. We did not isolate the central part and eyelids, to provide results comparable with <ref type="bibr" target="#b8">[9]</ref>.</p></div>
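The preprocessing described above (keep only the R component of each 576 × 768 scan and flatten it into an image vector) might look like the following; the image here is a synthetic stand-in for a scanner output, and loading real scans would use e.g. PIL or imageio:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for one acquired RGB scan (576 x 768 pixels, 8-bit channels).
rgb = rng.integers(0, 256, size=(576, 768, 3), dtype=np.uint8)

red = rgb[:, :, 0]                        # keep only the R component
vector = red.astype(np.float64).ravel()   # one flattened image vector

# Stacking such vectors as columns yields the pixels-by-images matrix A
# (in the paper, 256 columns for the 256 collection irises).
scans = [rgb, rgb]                        # toy list of two scans
A = np.column_stack(
    [s[:, :, 0].astype(np.float64).ravel() for s in scans]
)
```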
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Generated "Eigenirises" and Reconstructed Images</head><p>Many of the tested methods were able to generate a set of base images which could be considered "eigenirises", as in the case of PCA, SVD and several other methods. We provide examples of both the factors (base vectors), i.e. the "eigenirises", and the reconstructed images, which can be obtained from the regenerated A k . We calculated results for all methods in several dimensions; for the demonstration images we use k = 64. We do not provide these images for FastMap, where it is not possible (we could have provided the images used as pivots in each step of the FastMap process).</p><p>With SVD, we obtain factors with different generality, the most general being among the first. The first few are shown in Figure <ref type="figure">4</ref>. The eigenirises with higher indices bring more details to the reconstructed images.</p><p>The reconstructed images for the rank-64 SVD method are somewhat blurred, but generally still recognizable, as can be observed in Figure <ref type="figure">5</ref>.</p><p>The SDD method differs slightly from the previous methods, since each factor contains only the values {−1, 0, 1}. Gray in the factors shown in Figure <ref type="figure">6</ref> represents 0; −1 and 1 are represented with black and white, respectively.</p><p>The images in Figure <ref type="figure">7</ref> are reconstructed least exactly of all the methods (although consistently), but this is to be expected due to the three-valued encoding of the base vectors. One may note a general loss of fine details, which is unfortunate, since it means that the query process would be highly affected and the retrieval results poor. First, we calculated the mean average precision (MAP) for all relevant images in the rank lists. 
The relative MAPs (against the original matrix A as 100%) are shown in Table <ref type="table" target="#tab_0">1</ref>.</p><p>One would suspect that querying in the original dimension would provide better results than any of the dimension reduction methods. In Table <ref type="table" target="#tab_1">2</ref> we show the number of queries (out of 128) where the first returned iris belonged to the same person.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>In this paper, we have compared several dimension reduction methods on real-life image data (using the L 2 metric). Whilst SVD is known to provide quality results, it is computationally expensive, and in case we only need to beat the "curse of dimensionality" by reducing the dimension, FastMap may suffice.</p><p>There are some other newly-proposed methods which may be interesting for future testing, e.g. SparseMap <ref type="bibr" target="#b6">[7]</ref>. Additionally, a faster pivot selection technique for FastMap may be considered. We may also benefit from the use of Toeplitz matrices and their minimal-eigenvalue relation. What we currently need is better iris segmentation, i.e. removing the central piece (in our case the light reflection) and the eyelids, and identifying the exact iris position, using methods such as those described in <ref type="bibr" target="#b4">[5]</ref>.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. rank-k SVD</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Several irises from the collection</figDesc><graphic coords="6,228.95,121.63,124.82,100.22" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .Fig. 5 .Fig. 6 .Fig. 7 .Fig. 8 .</head><label>45678</label><figDesc>Fig. 4. First 64 eigenirises (out of possible 256) for SVD method</figDesc><graphic coords="7,228.95,297.79,124.82,100.10" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Mean average precision of iris comparison (VSM: 49%)</figDesc><table><row><cell></cell><cell cols="2">Reduction method</cell><cell></cell></row><row><cell>k</cell><cell>FastMap</cell><cell>SVD</cell><cell>SDD</cell></row><row><cell>4</cell><cell>25%</cell><cell>14%</cell><cell>3%</cell></row><row><cell>8</cell><cell>33%</cell><cell>31%</cell><cell>3%</cell></row><row><cell>16</cell><cell>39%</cell><cell>44%</cell><cell>4%</cell></row><row><cell>32</cell><cell>42%</cell><cell>46%</cell><cell>4%</cell></row><row><cell>64</cell><cell>45%</cell><cell>48%</cell><cell>5%</cell></row><row><cell>128</cell><cell>47%</cell><cell>49%</cell><cell>5%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 .</head><label>2</label><figDesc>Number of queries, where the person was successfully identified (VSM: 83)</figDesc><table><row><cell></cell><cell cols="2">Reduction method</cell><cell></cell></row><row><cell>k</cell><cell>FastMap</cell><cell>SVD</cell><cell>SDD</cell></row><row><cell>4</cell><cell>32</cell><cell>37</cell><cell>2</cell></row><row><cell>8</cell><cell>47</cell><cell>56</cell><cell>3</cell></row><row><cell>16</cell><cell>54</cell><cell>75</cell><cell>3</cell></row><row><cell>32</cell><cell>60</cell><cell>81</cell><cell>5</cell></row><row><cell>64</cell><cell>64</cell><cell>82</cell><cell>4</cell></row><row><cell>128</cell><cell>96</cell><cell>84</cell><cell>5</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Modern Information Retrieval</title>
		<author>
			<persName><forename type="first">R</forename><surname>Baeza-Yates</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ribeiro-Neto</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Addison Wesley</publisher>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Computational Methods for Intelligent Information Access</title>
		<author>
			<persName><forename type="first">M</forename><surname>Berry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dumais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Letsche</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1995 ACM/IEEE Supercomputing Conference</title>
				<meeting>the 1995 ACM/IEEE Supercomputing Conference<address><addrLine>San Diego, California, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Statistical richness of visual phase information: Update on recognizing persons by iris patterns</title>
		<author>
			<persName><forename type="first">J</forename><surname>Daugman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="25" to="38" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The importance of being random: statistical principles of iris recognition</title>
		<author>
			<persName><forename type="first">J</forename><surname>Daugman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="279" to="291" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">New methods in iris recognition</title>
		<author>
			<persName><forename type="first">J</forename><surname>Daugman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man, and Cybernetics, Part B</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1167" to="1175" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">FastMap: A Fast Algorithm for Indexing, Data-Mining and Visualization of Traditional and Multimedia Datasets</title>
		<author>
			<persName><forename type="first">C</forename><surname>Faloutsos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM SIGMOD Record</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="163" to="174" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Properties of embedding methods for similarity searching in metric spaces</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">R</forename><surname>Hjaltason</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Samet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="530" to="549" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Computation and uses of the semidiscrete matrix decomposition</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">G</forename><surname>Kolda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>O'Leary</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Information Processing</title>
		<imprint>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Iris Recognition Using the SVD-Free Latent Semantic Indexing. In MDM/KDD2004 -Fifth International Workshop on Multimedia Data Mining &quot;Mining Integrated Media and Complex Data</title>
		<author>
			<persName><forename type="first">P</forename><surname>Praks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Machala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Snášel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">conjunction with KDD&apos;2004 -The 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Section 2. Multimedia Data Mining: Techniques and Applications</title>
				<meeting><address><addrLine>Seattle, WA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="67" to="71" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Image Analysis for Object Recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>Saeed</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Bialystok Technical University Press</publisher>
			<pubPlace>Bialystok, Poland</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A speech-and-speaker identification system: Feature extraction, description, and classification of speech-signal image</title>
		<author>
			<persName><forename type="first">K</forename><surname>Saeed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Nammous</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. on Industrial Electronics</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="887" to="897" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Introduction to Modern Information Retrieval</title>
		<author>
			<persName><forename type="first">G</forename><surname>Salton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>McGill</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1983">1983</date>
			<publisher>McGraw-Hill</publisher>
			<pubPlace>New York, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Eigenfaces for recognition</title>
		<author>
			<persName><forename type="first">M</forename><surname>Turk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pentland</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Cognitive Neuroscience</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="71" to="86" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
