<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A fair ranking method for image database retrieval</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">L</forename><surname>Costantini</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Fondazione Ugo Bordoni</orgName>
								<address>
									<settlement>Roma</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Applied Electronics Department</orgName>
								<orgName type="institution">University of Roma TRE</orgName>
								<address>
									<settlement>Roma</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">L</forename><surname>Capodiferro</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Fondazione Ugo Bordoni</orgName>
								<address>
									<settlement>Roma</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">M</forename><surname>Carli</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Applied Electronics Department</orgName>
								<orgName type="institution">University of Roma TRE</orgName>
								<address>
									<settlement>Roma</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">A</forename><surname>Neri</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Applied Electronics Department</orgName>
								<orgName type="institution">University of Roma TRE</orgName>
								<address>
									<settlement>Roma</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A fair ranking method for image database retrieval</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">9D012AE7BC642846249C405AB434F775</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-19T16:19+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Fair ranking</term>
					<term>database extraction</term>
					<term>texture analysis</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This work aims at organizing the results of a query-by-example image database management system so that the less relevant objects, which still represent a minority, are also presented to the user. Usually, during image retrieval, the objects most similar to the query are ranked in the first positions and shown to the user, whereas objects that slightly differ from the target image, and that are less numerous, are hardly ever shown. The proposed method increases the chances of a fair retrieval of multimedia data from digital databases.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Unsupervised or human-guided object retrieval is one of the basic functionalities of content-centric Internet services. The first generation of digital database retrieval systems was built on the use of metadata describing the semantic content of a multimedia database, usually extracted by manual procedures. However, Future Internet content-aware services require more efficient functionalities for the inspection, crawling, recognition, categorization, and indexing of multimedia content with minimal human intervention.</p><p>The research activity on Content Based Image Retrieval (CBIR) has focused on the analysis of local and global image features for refining the coarse annotation extracted from the web pages containing the image, and for re-ranking the results obtained by searching those annotations <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b4">[5]</ref>. In fact, up to the present, search has traditionally been based on keywords rather than on the direct use of image content, because of the difficulty for the user in providing at least a sketch of the desired picture. Nevertheless, the widespread diffusion of cameraphones seems to open new opportunities for CBIR, given the ease of providing samples taken from the real world when browsing an image archive by means of a cell phone <ref type="bibr" target="#b1">[2]</ref>.</p><p>Current CBIR techniques are based on comparisons of global features, like dominant colors, object shapes, and textures, as well as of local features relating to the most salient points, like corners. They evaluate a similarity index computed on the basis of the features extracted from the reference template and those of the candidate image. The similarity index can either directly employ these features, as when the maximum likelihood or the belief functional is adopted as index, or it can be based on the comparison of the statistical distributions of the features, as when the Kullback-Leibler divergence is used. On the other hand, irrespective of the set of local or global features employed for comparing image and video contents, current search engines produce a list of candidates usually sorted with respect to the similarity index. The consequence is that, in the presence of several clusters of candidates, the search engine will report the elements of the cluster of most similar images and, if the number of elements belonging to that cluster is large enough, the other clusters may be completely cluttered by the dominant one.</p><p>The aim of this contribution is to propose a technique for sorting the list of potential candidates in such a way that all the relevant clusters are represented in the list displayed to the user, thus preserving the diversity of the possible solutions.</p><p>More specifically, a pre-selection of candidates is performed by considering a similarity index accounting only for a subset of the whole feature set. This subset can either be predetermined by referring, for instance, to those features, like morphology, that the user regards as more relevant, or automatically selected as the one maximizing a partial similarity index, as described in the next section. Then, the candidate set is clustered by extracting the local maxima of the similarity index evaluated on the full set of features. Finally, for each local maximum, including the absolute one, a representative set of images to be presented as query results is selected. The cardinality of each set may be proportional to the similarity index.</p><p>With respect to the evaluation of the partial similarity on the basis of a predefined feature subset, we observe that the relative importance of each feature strictly depends on the user needs. Therefore, although default values for different applications and contexts can be specified, the user should have the possibility to provide this information to the search engine.</p><p>In the examples reported in this contribution, it is assumed that morphology is the most discriminating global feature for the user, while color-based features act as elements discriminating the various clusters. In order to verify the performance of the proposed method in highlighting the differences among the search outcomes, we simplified the set of features employed in the comparison. Thus, with respect to the MPEG-7 ensemble of global and local features, see <ref type="bibr" target="#b0">[1]</ref>, we focused our attention on the texture morphology, as described by the statistics of the outputs of a set of steerable filters, and on the dominant colors. The simulation results confirm that clustering the search results driven by the relative maxima of the similarity index is a powerful technique for preserving web diversity.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">The fair similarity ranking</head><p>Let us model the global feature space V as an M -dimensional vector space, partitioned into L subspaces W j , each one corresponding to a particular feature set, so that V can be written as the direct sum of these subspaces:</p><formula xml:id="formula_0">V = W 1 ⊕ W 2 ⊕ • • • ⊕ W L .</formula><p>Given two elements x, y ∈ V , their similarity s(x, y) can be computed as:</p><formula xml:id="formula_1">s(x, y) = L i=1 s i (x i , y i ),<label>(1)</label></formula><p>where s i (x i , y i ) is the similarity index corresponding to the i-th feature set. Let us consider, without loss of generality, the subspace consisting of the first N subspaces, corresponding to the first N features. Let C k,N be the set of k-combinations of the N -element index set; then, for any</p><formula xml:id="formula_2">k-combination i = (i 1 , i 2 , . . . , i k−1 , i k ) let us denote with W i the subspace obtained as the direct sum of W i1 , W i2 , . . . , W i k−1 , W i k , i.e., W i = W i1 ⊕ W i2 ⊕ . . . ⊕ W i k−1 ⊕ W i k .</formula><p>Then, given two elements x, y ∈ V , we define the partial similarity sk,N (x, y) based on the best k features out of N as</p><formula xml:id="formula_3">sk,N (x, y) = max i ∈ C k,N k h=1 s i h (x i h , y i h ).<label>(2)</label></formula><p>We observe that the concept of similarity can easily be replaced by the concept of dissimilarity. Consequently, we define the partial dissimilarity dk,N (x, y) based on the best k features out of N as</p><formula xml:id="formula_5">dk,N (x, y) = min i ∈ C k,N k h=1 d i h (x i h , y i h ).<label>(3)</label></formula><p>The use of dissimilarity in place of similarity is suggested by the fact that some effective indicators widely used in statistics, like the Kullback-Leibler divergence, do not satisfy the triangle inequality. Thus, for a given template x, the set A(x) of candidates can now be computed by thresholding the partial dissimilarity, namely:</p><formula xml:id="formula_6">A(x) = {y : dk,N (x, y) ≤ γ}.<label>(4)</label></formula><p>Finally, clustering of the candidate set A(x) on the basis of the whole feature space V is performed. The dissimilarity threshold γ V employed in the cluster formation is, in general, lower than the threshold γ used in the candidate selection phase.</p></div>
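The best-k-of-N selection of Eq. (3) and the thresholding of Eq. (4) can be sketched in code. The following is a minimal Python illustration, not the authors' implementation; the per-feature dissimilarity functions passed in `d_funcs` are placeholders for, e.g., per-feature Kullback-Leibler divergences:

```python
from itertools import combinations

def partial_dissimilarity(x, y, d_funcs, k):
    """Best-k-of-N partial dissimilarity (Eq. 3): the minimum, over all
    k-combinations of the N feature subspaces, of the summed
    per-feature dissimilarities."""
    n = len(d_funcs)
    return min(
        sum(d_funcs[i](x[i], y[i]) for i in combo)
        for combo in combinations(range(n), k)
    )

def candidate_set(template, database, d_funcs, k, gamma):
    """Candidate set A(x) of Eq. 4: every item whose partial
    dissimilarity from the template does not exceed the threshold."""
    return [y for y in database
            if partial_dissimilarity(template, y, d_funcs, k) <= gamma]
```

With three scalar features and absolute difference as each d_i, the best-2-of-3 dissimilarity of (0, 0, 0) and (0, 1, 5) is 1 (features 1 and 2 are picked, feature 3 is discarded), illustrating how the worst-matching features are excluded from the pre-selection.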
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Laguerre Gauss decomposition and texture classification</head><p>Texture segmentation and classification have been intensively studied, and many texture description algorithms have been proposed, including statistical <ref type="bibr" target="#b5">[6]</ref> and feature-based methods <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b7">[8]</ref>. In many applications, segmentation and classification need to be performed without taking object orientations into account, so rotation-invariant descriptions of texture patterns are employed.</p><p>In this context, a relevant role has been played by the Circular Harmonic Functions (CHF), initially introduced in the field of optical processing for rotation-invariant pattern recognition, and recently cast in a wavelet decomposition scheme <ref type="bibr" target="#b9">[10]</ref>, <ref type="bibr" target="#b10">[11]</ref>. Since CHFs are intrinsically tuned to image features like edges, lines, equiangular forks, and orthogonal crosses without regard to their orientations, they are good candidates for such applications. A detailed description of the Laguerre-Gauss wavelet decomposition can be found in <ref type="bibr" target="#b9">[10]</ref>.</p><p>In this paper, we use for segmentation the pixel classification algorithm employed by Randen and Husoy in their feature set comparisons <ref type="bibr" target="#b8">[9]</ref>. Then we utilize a multiscale segmentation method <ref type="bibr" target="#b9">[10]</ref>, starting from CHF moments. These moments are obtained by the decomposition of a template on complex CHFs that form a complete orthogonal basis on a unit disc. Due to a general property of the CHFs, a pattern can easily be steered by multiplying the expansion coefficients by complex exponential factors whose phase is proportional to the rotation angle <ref type="bibr" target="#b11">[12]</ref>, <ref type="bibr" target="#b12">[13]</ref>. As a consequence, rotation invariants can easily be obtained by considering the magnitudes of the expansion coefficients.</p><p>In our method, the texture classification is performed by considering both the structural (morphological) and the color components. To represent the morphology of a texture pattern, the expansion coefficients of the Laguerre-Gauss transform of the luminance component Y corresponding to a finite set of N K order pairs {(n, k) | n = 1, ..., N, k = 0, ..., K − 1} are employed. In particular, n = 1, ..., 3 and k = 0, ..., 3 at 3 different resolutions have been employed in the examples reported in the following. Since the magnitude of the transform coefficients is rotation invariant, we use their statistics. In particular, as already stated in <ref type="bibr" target="#b9">[10]</ref>, it is possible to characterize the marginal density of a wavelet decomposition by using a generalized Gaussian function. This distribution is characterized by two parameters, α and β, directly related to the mean and the variance of the distribution. Thus, mean and variance are sufficient to describe the statistical properties of the Laguerre-Gauss transform coefficient magnitudes of a portion of an image. As a consequence, for each point of a given region of interest (ROI) we first compute the KN Laguerre-Gauss transform coefficients. Then, we evaluate the mean and the variance of their magnitudes inside the ROI, so that a morphology feature vector of length 2N K is associated to each pattern.</p><p>The partial dissimilarity index dY (x, z) associated to the structure of two textures x, z is then evaluated as the Kullback-Leibler distance (KLD) between the generalized Gaussian probability density functions modeling the statistical behavior of the wavelet coefficient magnitudes:</p><formula xml:id="formula_7">dY (x, z) = KLD W L (n) k Y x (b, 0, a) , W L (n) k Y z (b, 0, a) ,<label>(5)</label></formula><p>where</p><formula xml:id="formula_9">W L (n) k Y x (b, ϕ, a)</formula><p>represents the Laguerre-Gauss transform at location b, rotation ϕ, and scale a.</p><p>In order to evaluate the dissimilarity index related to the chromatic components Cb, Cr, we characterize them by their mean and their centered moments of the second and third order <ref type="bibr" target="#b1">[2]</ref>, computed as follows:</p><formula xml:id="formula_10">µ = 1 N r N c Nr i=1 Nc j=1 p (i, j),<label>(6)</label></formula><formula xml:id="formula_11">σ = 1 N r N c Nr i=1 Nc j=1 (p (i, j) − µ) 2 1/2 ,<label>(7)</label></formula><formula xml:id="formula_12">t = 1 N r N c Nr i=1 Nc j=1 (p (i, j) − µ) 3 1/3 ,<label>(8)</label></formula><p>where p represents the pixel chromatic component at location (i, j), and N r and N c respectively represent the height and the width of the image. The color information is therefore represented by a feature vector of six elements. In this case, the color matching is based on the Euclidean distance between vectors. The partial dissimilarity indices based on the chromatic components are therefore computed as follows:</p><formula xml:id="formula_13">dCr (x, z) = [µ Cr (x) − µ Cr (z)] 2 + [σ Cr (x) − σ Cr (z)] 2 + [t Cr (x) − t Cr (z)] 2 1/2 ,<label>(9)</label></formula><formula xml:id="formula_14">dCb (x, z) = [µ Cb (x) − µ Cb (z)] 2 + [σ Cb (x) − σ Cb (z)] 2 + [t Cb (x) − t Cb (z)] 2 1/2 .<label>(10)</label></formula><p>The overall dissimilarity Dtot between two textures is finally computed as follows:</p><formula xml:id="formula_15">Dtot = dY (x, z) + α dCr (x, z) 2 + dCb (x, z) 2 1/2 ,<label>(11)</label></formula><p>where α is a factor used to select the relative importance of the structural and chromatic components.</p></div>
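The color moments of Eqs. (6)-(8), the chromatic distances of Eqs. (9)-(10), and the combination of Eq. (11) translate directly into code. Below is a minimal Python sketch under these definitions; the function names are illustrative and not taken from the paper:

```python
import math

def color_moments(channel):
    """Mean, standard deviation, and cube-root third centred moment
    (Eqs. 6-8) of one chromatic component, given as a 2-D list of
    pixel values."""
    pixels = [p for row in channel for p in row]
    n = len(pixels)
    mu = sum(pixels) / n
    sigma = (sum((p - mu) ** 2 for p in pixels) / n) ** 0.5
    m3 = sum((p - mu) ** 3 for p in pixels) / n
    t = math.copysign(abs(m3) ** (1 / 3), m3)  # real cube root, sign kept
    return (mu, sigma, t)

def chroma_dissimilarity(moments_x, moments_z):
    """Euclidean distance between two three-element moment vectors
    (Eqs. 9-10)."""
    return sum((a - b) ** 2 for a, b in zip(moments_x, moments_z)) ** 0.5

def total_dissimilarity(d_y, d_cr, d_cb, alpha):
    """Overall dissimilarity D_tot of Eq. 11: the structural term plus
    the alpha-weighted Euclidean combination of the chromatic terms."""
    return d_y + alpha * (d_cr ** 2 + d_cb ** 2) ** 0.5
```

For a uniform 2x2 channel of value 1, the moments are (1.0, 0.0, 0.0); with d_y = 1.0, d_cr = 3, d_cb = 4 and alpha = 0.5, D_tot is 1.0 + 0.5 * 5 = 3.5.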
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Clustering algorithm</head><p>The clustering algorithm proposed in the following ranks the data starting from a query-by-example image: the images are grouped together in clusters on the basis of a similarity index, while the different relevant clusters are preserved and displayed to the user, thus representing at the same time the similarity and the diversity within the data set. The algorithm consists of four main steps:</p><p>1. In the first step, the images in the database are ranked according to their similarity to the query image, on the basis of Eq. 11. Both luminance and color components are used in this phase. The first 32 images are considered relevant, and they will be used in the following steps (see Table <ref type="table" target="#tab_0">1</ref>). 2. A new ranking is performed by computing the similarity between the 32 previously selected images and the image ranked in the last position at the previous step. The comparison is made by considering both luminance and color components. A new cluster is created with the images presenting a distance value, with respect to this last-ranked image, lower than a fixed threshold. These images are removed from the initial cluster. This step is repeated until the original cluster is empty. For each cluster, a feature vector is computed by averaging the features of the images belonging to it. 3. The operations performed on the images in the previous step are repeated on the clusters, by operating on their feature vectors. This step is iterated as long as at least one new cluster is created. If the algorithm does not create any new cluster, the threshold is increased and the step is repeated. This step ends when the threshold becomes higher than a fixed maximum value or the number of clusters falls below a minimum value (6 in our experiments). 4. By rearranging the clusters created at the previous step according to the number of their elements, the final classification is obtained (Table <ref type="table" target="#tab_0">1</ref>).</p><p>The image which represents a cluster can be either the first image in the cluster or the image whose feature vector is the most similar to the representative vector of the cluster.</p><p>In our experiments, the starting value of the threshold in step 3 is the same as the threshold used in step 2. The optimal value is automatically determined during the process by histogram evaluation.</p></div>
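Step 2 above (seeding a new cluster with the last-ranked image and absorbing every image within the threshold, until the initial set is empty) can be sketched as follows. This is a minimal Python illustration, not the authors' code; `dist` stands in for the dissimilarity of Eq. 11 applied to feature vectors:

```python
def split_into_clusters(items, dist, threshold):
    """Repeatedly seed a new cluster with the last-ranked remaining
    item and move into it every item closer than the threshold, until
    the initial set is exhausted. `items` must be sorted by similarity
    to the query (best first); `dist` is any dissimilarity function."""
    remaining = list(items)
    clusters = []
    while remaining:
        seed = remaining.pop()            # last-ranked item seeds a cluster
        cluster = [seed]
        kept = []
        for it in remaining:
            (cluster if dist(it, seed) <= threshold else kept).append(it)
        remaining = kept
        clusters.append(cluster)
    return clusters

def cluster_centroid(cluster):
    """Per-cluster feature vector, obtained by averaging the features
    of the images belonging to the cluster (used when step 3 repeats
    the splitting on the clusters themselves)."""
    dim = len(cluster[0])
    return [sum(v[i] for v in cluster) / len(cluster) for i in range(dim)]
```

Step 3 would then call `split_into_clusters` again on the centroids, raising the threshold whenever no new cluster appears, until the stopping conditions of the algorithm are met.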
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Experimental results</head><p>The performance of the proposed method is evaluated by using a database composed of 640 images. The database contains 40 classes of images, each class composed of 16 images. We analyze the results obtained by using each image in the database as the query image. The experimental results are shown in Table <ref type="table" target="#tab_1">2</ref>. For a better understanding of Table <ref type="table" target="#tab_1">2</ref>, it is important to clarify the meaning of outsider and impure cluster. An outsider is an image that the algorithm puts in the wrong cluster, while an impure cluster is a cluster containing at least one outsider. The results show that the algorithm creates an appropriate number of clusters: the average number of clusters created is 5.46, against a true average number of 4.67. Furthermore, the percentage of impure clusters is very low, only 9.12%.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusions</head><p>In this contribution, a novel method for a fair organization of the results of a query-by-example retrieval system is presented. The system is able to show, </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1.</head><label>1</label><figDesc>Ranking of the first 32 images starting from the query image Bark.0000.00 of the "Vision Texture" database used for the tests.</figDesc><table><row><cell cols="2">Ranking</cell><cell>Image</cell><cell>Ranking</cell><cell>Image</cell></row><row><cell>1</cell><cell></cell><cell>Bark.0000.00</cell><cell>17</cell><cell>Bark.0000.02</cell></row><row><cell>2</cell><cell></cell><cell>Bark.0000.08</cell><cell>18</cell><cell>Brick.0005.14</cell></row><row><cell>3</cell><cell></cell><cell>Bark.0000.09</cell><cell>19</cell><cell>Bark.0000.11</cell></row><row><cell>4</cell><cell></cell><cell>Bark.0000.05</cell><cell>20</cell><cell>Wood.0001.01</cell></row><row><cell>5</cell><cell></cell><cell>Bark.0000.12</cell><cell>21</cell><cell>Wood.0001.05</cell></row><row><cell>6</cell><cell></cell><cell>Bark.0000.01</cell><cell>22</cell><cell>Flowers.0005.13</cell></row><row><cell>7</cell><cell></cell><cell>Bark.0000.04</cell><cell>23</cell><cell>Brick.0005.02</cell></row><row><cell>8</cell><cell></cell><cell>Bark.0000.13</cell><cell>24</cell><cell>Leaves.0016.15</cell></row><row><cell>9</cell><cell></cell><cell>Bark.0000.06</cell><cell>25</cell><cell>Flowers.0005.12</cell></row><row><cell>10</cell><cell></cell><cell>Wood.0001.15</cell><cell>26</cell><cell>Wood.0001.09</cell></row><row><cell>11</cell><cell></cell><cell>Bark.0000.10</cell><cell>27</cell><cell>Leaves.0016.09</cell></row><row><cell>12</cell><cell></cell><cell>Wood.0001.13</cell><cell>28</cell><cell>Brick.0005.01</cell></row><row><cell>13</cell><cell></cell><cell>Wood.0001.02</cell><cell>29</cell><cell>Flowers.0005.09</cell></row><row><cell>14</cell><cell></cell><cell>Bark.0000.07</cell><cell>30</cell><cell>Leaves.0016.14</cell></row><row><cell>15</cell><cell></cell><cell>Wood.0001.12</cell><cell>31</cell><cell>Leaves.0016.13</cell></row><row><cell>16</cell><cell></cell><cell>Wood.0001.08</cell><cell>32</cell><cell>Brick.0005.00</cell></row><row><cell cols="5">Ranking Representative of cluster Ranking Representative of cluster</cell></row><row><cell>1</cell><cell cols="2">Bark.0000.05</cell><cell>5</cell><cell>Leaves.0016.13</cell></row><row><cell>2</cell><cell cols="2">Wood.0001.01</cell><cell>6</cell><cell>Flowers.0005.09</cell></row><row><cell>3</cell><cell cols="2">Bark.0000.11</cell><cell>7</cell><cell>Wood.0001.13</cell></row><row><cell>4</cell><cell cols="2">Brick.0005.14</cell><cell>8</cell><cell>Wood.0001.12</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2.</head><label>2</label><figDesc>Results of the clustering algorithm obtained by using each image in the database as the query image.</figDesc><table><row><cell>Total number of clusters created</cell><cell>3496</cell></row><row><cell>Total number of impure clusters created</cell><cell>319</cell></row><row><cell>Total number of outsiders</cell><cell>974</cell></row><row><cell>Average percentage of impure clusters</cell><cell>9.12%</cell></row><row><cell cols="2">Average number of outsiders in each impure cluster 3.05</cell></row><row><cell>Average number of clusters created</cell><cell>5.46</cell></row><row><cell>True average number of clusters</cell><cell>4.67</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3 .</head><label>3</label><figDesc>Performance results.</figDesc><table /></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>together with the most similar match, also some less similar image clusters. In this way, not only the objects most similar to the query are shown to the user: objects that slightly differ from the target image, and that are less numerous, are also presented. The proposed method increases the chances that all the relevant clusters are represented in the list displayed to the user, thus preserving the diversity of the possible solutions.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A Framework of Web Image Search Engine</title>
		<author>
			<persName><forename type="first">Weiguang</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yafei</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianjiang</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ran</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhenghui</forename><surname>Xie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int.Joint Conf. on Artificial Intelligence</title>
				<imprint>
			<date type="published" when="2009">2009. 2009</date>
			<biblScope unit="page" from="522" to="525" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Garment Image Retrieval on the Web with Ubiquitous Camera-Phone</title>
		<author>
			<persName><forename type="first">Ruhan</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kaiming</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Naixue</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yong</forename><surname>Zhu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Asia-Pacific Services Computing Conf</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="1584" to="1589" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Color Descriptors for Web Image Retrieval: A Comparative Study</title>
		<author>
			<persName><forename type="first">Ottavio</forename><surname>Augusto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bizetto</forename><surname>Penatti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ricardo</forename><surname>Da</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Silva</forename><surname>Torres</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">XXI Brazilian Symp. on Computer Graphics and Image Processing</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="163" to="170" />
		</imprint>
	</monogr>
	<note>SIBGRAPI</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A Web Image Retrieval Re-ranking Scheme with Cross-Modal Association Rules</title>
		<author>
			<persName><forename type="first">Yong</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Naixue</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jong</forename><forename type="middle">Hyuk</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ruhan</forename><surname>He</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int. Symp. on Ubiquitous Multimedia Computing</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="83" to="86" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A Unified System for Web Personal Image Retrieval</title>
		<author>
			<persName><forename type="first">Lin</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yao</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhenfeng</forename><surname>Zhu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int. Conf. on Intelligent Information Hiding and Multimedia Signal Processing</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="787" to="790" />
		</imprint>
	</monogr>
	<note>IIH-MSP</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Cortina: a system for largescale, content-based web image retrieval</title>
		<author>
			<persName><forename type="first">T</forename><surname>Quack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Mönich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Thiele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">S</forename><surname>Manjunath</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th annual ACM Int. Conf. on Multimedia</title>
				<meeting>the 12th annual ACM Int. Conf. on Multimedia<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">04</biblScope>
			<biblScope unit="page" from="508" to="511" />
		</imprint>
	</monogr>
	<note>MULTIMEDIA &apos;</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Comparative evaluation of Web image search engines for multimedia applications</title>
		<author>
			<persName><forename type="first">K</forename><surname>Stevenson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Leung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICME, IEEE Int. Conf. on Multimedia and Expo</title>
				<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Wavelet-Based Texture Retrieval Using Generalized Gaussian Density and Kullback-Leibler Distance</title>
		<author>
			<persName><forename type="first">Minh</forename><forename type="middle">N</forename><surname>Do</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martin</forename><surname>Vetterli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. on Image Processing</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">2</biblScope>
			<date type="published" when="2002-02">February 2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Brief Descriptions of Visual Features for Baseline TRECVID Concept Detectors</title>
		<author>
			<persName><forename type="first">Akira</forename><surname>Yanagawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Winston</forename><surname>Hsu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shih-Fu</forename><surname>Chang</surname></persName>
		</author>
		<idno>219- 2006-5</idno>
		<imprint>
			<date type="published" when="2006-07">July 2006</date>
		</imprint>
		<respStmt>
			<orgName>Columbia University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">ADVENT Technical Report</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
	<title level="a" type="main">Maximum Likelihood Localization of 2-D Patterns in the Gauss-Laguerre Transform Domain: Theoretic Framework and Preliminary Results</title>
		<author>
			<persName><forename type="first">A</forename><surname>Neri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Jacovitti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. on Image Processing</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2004-01">January 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Multiresolution Circular Harmonic Decomposition</title>
		<author>
			<persName><forename type="first">G</forename><surname>Jacovitti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Neri</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Trans. on Image Processing</title>
				<imprint>
			<date type="published" when="2000-11">November 2000</date>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="3242" to="3247" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Circular Harmonic Phase Filters for Efficient Rotation -Invariant Pattern Recognition</title>
		<author>
			<persName><forename type="first">J</forename><surname>Rosen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shamir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Optics</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">14</biblScope>
			<biblScope unit="page" from="2895" to="2899" />
			<date type="published" when="1988-07">July 1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Generalized Maximum Likelihood Test for Rotation Invariant Pattern Recognition</title>
		<author>
			<persName><forename type="first">M</forename><surname>Carli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Jacovitti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Neri</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SPIE Conf. Photonics East</title>
				<meeting><address><addrLine>Boston</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2000-11">November 2000</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
