<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Image Hunter at ImageCLEF 2012 Personal Photo Retrieval Task</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roberto Tronci</string-name>
          <email>roberto.tronci@diee.unica.it</email>
          <email>roberto.tronci@sardegnaricerche.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Piras</string-name>
          <email>luca.piras@diee.unica.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gabriele Murgia</string-name>
          <email>gabriele.murgia@sardegnaricerche.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giorgio Giacinto</string-name>
          <email>giacinto@diee.unica.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AmILAB - Laboratorio Intelligenza d'Ambiente</institution>
          ,
          <addr-line>Sardegna Ricerche</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>DIEE - Department of Electrical and Electronic Engineering, University of Cagliari</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2012</year>
      </pub-date>
      <abstract>
<p>This paper presents the participation of the Pattern Recognition and Application Group (PRA Group) and the Ambient Intelligence Lab (AmILAB) in the ImageCLEF 2012 Personal Photo Retrieval Pilot Task. This pilot task aims to provide a test bed for QBE-based retrieval scenarios in the scope of personal information retrieval, based on a collection of 5,555 personal images plus rich meta-data. For this challenge we used Image Hunter, a content based image retrieval tool with relevance feedback that we previously developed. The results are good considering that we used visual data only; moreover, we were the only participants who used relevance feedback.</p>
      </abstract>
      <kwd-group>
        <kwd>photo retrieval</kwd>
        <kwd>content based image retrieval</kwd>
        <kwd>relevance feedback</kwd>
        <kwd>SVM</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        The personal photo retrieval task is a pilot task introduced in the ImageCLEF 2012
competition. This pilot task provides a test bed for query-by-example (QBE)
image retrieval scenarios in the scope of personal information retrieval. In fact,
instead of using images downloaded from Flickr or other similar web resources,
the proposed dataset reflects an amalgamated personal image collection taken
by 19 photographers. The aim of this pilot task is to create an image
retrieval scenario where an ordinary person (i.e., not an expert in image retrieval tasks)
searches his or her personal photo collection for "relevant" images, i.e., similar
images or images depicting a similar event, e.g., a rock concert. This
pilot task is divided into two tasks: retrieval of visual concepts and retrieval of
events. A detailed overview of the dataset and the task can be found in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        We took part only in Task 1 of this competition. In this task we are given a
set of visual concepts, each with five associated QBE images to be used in the retrieval
process to retrieve relevant images. To perform the task we used Image Hunter, a content
based image retrieval tool with relevance feedback previously developed at the
AmILAB [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>Image Hunter: a brief description</title>
      <p>With the aim of building a practical application to show the potential of
Content Based Image Retrieval tools with Relevance Feedback, we developed
Image Hunter. Image Hunter is a prototype that shows the capabilities of an
Image Retrieval engine where the search is started from an image provided by
the user, and the system returns the most visually similar images. The system
is enriched by Relevance Feedback capabilities that let the user specify, through an
easy user interface, which results match the desired concepts.</p>
      <p>
        <bold>Architectural description.</bold>
This tool is entirely written in Java, so it is machine independent.
For its development, we partially took inspiration from the LIRE library [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]
(which is essentially a feature extraction library). In addition, we chose Apache Lucene
for building the index of the extracted data.
      </p>
      <p>The main core of Image Hunter is a fully independent module, thus allowing
the development of a personalized user interface. A schema of the whole system
can be seen in Figure 1.</p>
      <p>The core of the system is subdivided into three main parts:
- Indexing and Lucene interface for data storing;
- Feature extraction interface;
- Image Retrieval and Relevance Feedback.</p>
      <p>The Indexing part has the role of extracting the visual features and other
information from the images. The visual features and other descriptors of the
images are then stored in a particular structure defined inside Image Hunter.
The tool can index different types of image formats, and the index can be built
incrementally. Lucene turned out to be well suited for the storage needs of
Image Hunter. The core of Lucene's logical architecture is a series of documents
containing text fields, so we associated the different features (fields) to each
image (document).</p>
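      <p>This document/field organization can be pictured with a plain-Java sketch
(a dependency-free stand-in for the Lucene model; the class and method names
below are ours, not Lucene's):</p>

```java
// Illustrative stand-in for Lucene's document/field model: each image
// becomes a "document" whose "fields" hold one feature vector per
// visual descriptor. No Lucene dependency; names are illustrative.
public class ImageDocument {
    private final String imageId;
    private final String[] fieldNames;
    private final float[][] fieldVectors;
    private int count = 0;

    public ImageDocument(String imageId, int maxFields) {
        this.imageId = imageId;
        this.fieldNames = new String[maxFields];
        this.fieldVectors = new float[maxFields][];
    }

    // Analogous to adding one field to a Lucene document.
    public void addFeature(String name, float[] vector) {
        fieldNames[count] = name;
        fieldVectors[count] = vector;
        count++;
    }

    // Look up the stored vector for a given descriptor name.
    public float[] getFeature(String name) {
        for (int i = 0; i < count; i++)
            if (fieldNames[i].equals(name)) return fieldVectors[i];
        return null;
    }

    public String getImageId() { return imageId; }
}
```

      <p>In the actual tool the same information is persisted through Lucene rather
than kept in memory.</p>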
      <p>
        As we said before, for the feature extraction we took inspiration from the
LIRE library, which is used as an external feature extraction library. We
expanded and modified its functionalities by implementing or re-implementing
some extractors in Image Hunter. The Feature extraction interface allows
extracting different visual features based on different characteristics: color, texture
and shape. They are:
- Scalable Color [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], a color histogram extracted from the HSV color space;
- Color Layout [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which characterizes the spatial distribution of colors;
- RGB-Histogram and HSV-Histogram [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], based on the RGB and HSV
components of the image, respectively;
- Fuzzy Color [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which considers the color similarity between the pixels of the
image;
- JPEG Histogram [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], a JPEG coefficient histogram;
- Edge Histogram [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which captures the spatial distribution of edges;
- Tamura [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], which captures different characteristics of the images such as
coarseness, contrast, directionality, regularity and roughness;
- Gabor [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], which allows edge detection;
- CEDD (Color and Edge Directivity Descriptor) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ];
- FCTH (Fuzzy Color and Texture Histogram) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
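      <p>To give a flavor of the simplest of these descriptors, an RGB histogram can
be computed by quantizing each channel into a few bins (a minimal sketch,
assuming 4 bins per channel and packed ARGB pixels; the actual implementation
in LIRE may differ):</p>

```java
// Minimal RGB histogram sketch: quantize each channel into 4 bins
// (64 bins total) and normalize by the number of pixels. Assumed
// input: packed 32-bit ARGB pixel values.
public class RgbHistogram {
    public static float[] extract(int[] argbPixels) {
        int bins = 4;                         // bins per channel
        float[] h = new float[bins * bins * bins];
        for (int p : argbPixels) {
            int r = ((p >> 16) & 0xFF) * bins / 256;
            int g = ((p >> 8) & 0xFF) * bins / 256;
            int b = (p & 0xFF) * bins / 256;
            h[(r * bins + g) * bins + b] += 1f;
        }
        // Normalize so the histogram sums to 1.
        for (int i = 0; i < h.length; i++) h[i] /= argbPixels.length;
        return h;
    }
}
```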
      <p>One of Image Hunter's greatest strengths is its flexibility: its structure
was built so that it is possible to add any other image descriptor. The
choice of the above-mentioned set is due to the "real time" nature of the system
with large databases. In fact, even if some local features such as SIFT or SURF
could improve the retrieval performance for some particular kinds of searches,
they are more computationally expensive when evaluating the similarity
between images.</p>
      <p>
        The core adopts three relevance feedback techniques [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Two of them are
based on the nearest-neighbor (NN) paradigm, while one is based on
Support Vector Machines (SVM). The use of the nearest-neighbor paradigm has
been driven by its use in a number of different pattern recognition fields, where it
is difficult to produce a high-level generalization of a class of objects, but where
neighborhood information is available [
        <xref ref-type="bibr" rid="ref1 ref8">1, 8</xref>
        ]. In particular, nearest-neighbor
approaches have proven to be effective in outlier detection and one-class
classification tasks [
        <xref ref-type="bibr" rid="ref13 ref2">2, 13</xref>
        ]. Support Vector Machines are used because they are one of
the most popular learning algorithms when dealing with high dimensional spaces,
as in the case of CBIR [
        <xref ref-type="bibr" rid="ref14 ref6">6, 14</xref>
        ].
      </p>
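      <p>The basic nearest-neighbor relevance score underlying these techniques can
be sketched as follows (a simplified stand-in assuming Euclidean distance and a
single feature space; the method names are ours):</p>

```java
// Sketch of a nearest-neighbor relevance score: an image is scored by
// its distance to the nearest relevant image and to the nearest
// non-relevant image. Assumed: Euclidean distance, one feature space.
public class NnRelevance {
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    // Minimum distance from x to any vector in the set.
    static double minDist(double[] x, double[][] set) {
        double m = Double.POSITIVE_INFINITY;
        for (double[] y : set) m = Math.min(m, dist(x, y));
        return m;
    }

    // relNN(I) = d(I, NN_nonrel) / (d(I, NN_rel) + d(I, NN_nonrel))
    public static double relNN(double[] img, double[][] rel, double[][] nonRel) {
        double dr = minDist(img, rel);
        double dn = minDist(img, nonRel);
        return dn / (dr + dn);
    }
}
```

      <p>An image close to its nearest relevant neighbor and far from its nearest
non-relevant one gets a score near 1.</p>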
      <p>The user interface is structured to provide just the functionalities that are
strictly related to the user interaction (e.g., the list of relevant images found
by the user).</p>
      <p>Image Hunter employs a web-based interface that can be viewed and
tried at the address http://prag.diee.unica.it/amilab/WIH. This version is
a web application built for the Apache Tomcat web container using a mixture
of JSP and Java Servlets. The graphical interface is based on the jQuery framework,
and has been tested on the Mozilla Firefox and Google Chrome browsers. The
Image Hunter homepage lets the user choose the picture from which to start the
search. The picture can be chosen either among those of the proposed galleries or
among the images on the user's hard disk. To make the features offered by the
application intuitive and easy to use, the graphical interface has been designed
around the Drag and Drop approach. From the result page, the user can drag
the images they deem relevant to their search into a special box-cart, and then
submit the feedback. The feedback is then processed by the system, and a new
set of images is proposed to the user. The user can iterate the feedback process
as many times as they want. Figure 2 summarizes the typical user interaction
with Image Hunter.</p>
      <p>[Figure 2: diagram of the retrieval loop. The user chooses an example image
to perform a search in a digital library; the system computes the similarity
between the query image and the images in the database and outputs the results;
the user drags the relevant images into the relevant box for the relevance
feedback; after each of the (at least 3) iterations a new query is computed using
the user's hints, keeping the relevant images found at the previous steps.]</p>
      <p>
        In the following we briefly describe the two relevance feedback techniques
implemented in the core that we used in this competition.
      </p>
      <p>
        <bold>k-NN Relevance Feedback.</bold> In this work we resort to a technique proposed by
some of the authors in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], where a score is assigned to each image of the database
according to its distance from the nearest image belonging to the target class,
and its distance from the nearest image belonging to a different class. This
score is then combined with a score related to the distance of the image from
the region of relevant images. The combined score is computed as

rel(I) = [(n/t) / (1 + n/t)] relBQS(I) + [1 / (1 + n/t)] relNN(I)   (1)

where n and t are the number of non-relevant images and the whole number of
images retrieved after the latest iteration, respectively. The two terms relNN and
relBQS are computed as follows:

relNN(I) = ||I - NN_nr(I)|| / (||I - NN_r(I)|| + ||I - NN_nr(I)||)   (2)

where NN_r(I) and NN_nr(I) denote the relevant and the non-relevant nearest
neighbor of I, respectively, and ||.|| is the metric defined in the feature space at
hand, and

relBQS(I) = (1 - e^(1 - dBQS(I) / max_i dBQS(I_i))) / (1 - e)   (3)

where e is Euler's number, i is the index over all images in the database, and
dBQS is the distance of image I from a reference vector computed according to
Bayes decision theory (Bayes Query Shifting, BQS) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The aim of the BQS
approach is to "move" the query along the visual spaces, taking into account the
images marked as relevant and non-relevant within the searched visual concept,
in order to look for new images. The BQS query is computed as

QBQS = mR + (sigma / ||mR - mN||) (1 - (kR - kN) / max{kR, kN}) (mR - mN)   (4)

where mR and mN are the mean vectors of relevant and non-relevant images
respectively, sigma is the standard deviation of the images belonging to the
neighborhood of the original query, and kR and kN are the number of relevant and
non-relevant images, respectively.
      </p>
      <p>
        If we are using F feature spaces, we have a different score rel_f(I) for each
feature space f. Thus the following combination is performed to obtain a single
score:

rel(I) = sum_{f=1..F} w_f rel_f(I)   (5)

where w_f is the weight associated with the f-th feature space:

w_f = sum_{i in R} dmin_f(I_i, R) / [sum_{i in R} dmin_f(I_i, R) + sum_{i in R} dmin_f(I_i, N)]   (6)

where R and N are the sets of relevant and non-relevant images, and dmin_f(I_i, X)
denotes the minimum distance of I_i from the set X in the feature space f.
      </p>
      <p>
        <bold>SVM-based Relevance Feedback.</bold> Support Vector Machines are used to find
a decision boundary in each of the F feature spaces. The SVM is very handy
for this kind of task because, in the case of image retrieval, we deal with high
dimensional feature spaces and two "classes" (i.e., relevant and non-relevant). For
each feature space f, an SVM is trained using the feedback given by the user. The
results of the SVMs, in terms of distances from the separating hyperplane, are
then combined into a relevance score through the Mean rule:

relSVM(I) = (1/F) sum_{f=1..F} relSVM_f(I)   (7)
      </p>
      <sec id="sec-2-4">
        <title>Image Hunter at ImageCLEF</title>
        <p>
          For our participation in the ImageCLEF competition we mainly used Image
Hunter as it is. This means that as visual features we used only those listed in
the previous section, which partially overlap with those provided for the
competition [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
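      <p>The Mean-rule combination of the per-feature-space scores used by the
SVM-based feedback can be sketched as follows (an illustrative fragment; the
class name is ours):</p>

```java
// Sketch of the Mean rule: the relevance score of an image is the
// plain average of its per-feature-space scores (e.g., the F SVM
// outputs). Class and method names are illustrative.
public class MeanRule {
    public static double combine(double[] perSpaceScores) {
        double sum = 0;
        for (double s : perSpaceScores) sum += s;
        return sum / perSpaceScores.length;
    }
}
```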
        <p>We took part only in Task 1 (retrieval of visual concepts). In this task
different visual concepts are provided, and five QBE images are associated to each
concept. However, Image Hunter is designed to trigger the image retrieval process
with a single QBE. Thus, instead of performing five different runs for each concept,
each starting from a different QBE, and averaging the results, we slightly modified
Image Hunter at the first interaction step (i.e., the first content based image
retrieval before the relevance feedback steps) to take the five QBEs into account.
We adopted two different techniques: the "mean" of the QBEs and a mixed
multi-query approach. In the first case, the query used to trigger Image Hunter
is the mean vector of the five QBEs in each visual feature space:

Qmean = mQBE   (8)

which is the case of BQS presented in Equation (4) when there are only relevant
images. In the second case we performed one content based image retrieval for
each QBE, and then we mixed the results by assigning to each retrieved image
the minimum distance from the five query images. After this first automatic
interaction, the tool is ready to interact with real users through one of the
relevance feedback methodologies described above.</p>
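        <p>The two first-step strategies just described can be sketched as follows (an
illustrative fragment assuming plain Euclidean distances in one feature space;
the class and method names are ours):</p>

```java
// Sketch of the two first-step strategies for the five QBE images:
// (1) trigger the search with the component-wise mean query vector;
// (2) score each database image by its minimum distance to any QBE.
// Names are illustrative; distances are assumed Euclidean.
public class MultiQbe {
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    // Strategy 1: mean of the QBE vectors (Qmean = mQBE).
    public static double[] meanQuery(double[][] qbes) {
        double[] m = new double[qbes[0].length];
        for (double[] q : qbes)
            for (int i = 0; i < m.length; i++) m[i] += q[i] / qbes.length;
        return m;
    }

    // Strategy 2: score of a database image = min distance to any QBE.
    public static double minDistScore(double[] image, double[][] qbes) {
        double m = Double.POSITIVE_INFINITY;
        for (double[] q : qbes) m = Math.min(m, dist(image, q));
        return m;
    }
}
```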
        <p>The interaction with real users was performed by 10 different people. The
only constraint we gave each person was a minimum number of interactions
(i.e., 3), leaving them free to choose when to stop the retrieval process. In
Figure 3 some snapshots of the tool in action are reported.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Results and Discussion</title>
      <p>We submitted four runs to the competition, combining the two methodologies
for the first step with the relevance feedback methods used: Qmean + kNN (Run11
in the tables), multi-query + kNN (Run12), Qmean + SVM (Run31), multi-query
+ SVM (Run32). Unfortunately, Run31 was not evaluated due to some duplicates
in the final file. In each run 100 retrieved documents are reported. In our case
they are sorted by the relevance score obtained at the last step; this means
that in some cases the first entry is not the first relevant image retrieved. This
fact derives from the methodologies used for relevance feedback. In particular
the kNN, by means of the BQS, "moves" the query in the visual spaces, and
with respect to the last BQS query the first relevant images retrieved may be
farther away than other images.</p>
      <p>In Table 1 the methodologies used by the participants in the competition are
presented. Among all the participants, we were the only group that used visual
features alone for all the runs, and the only one that used relevance feedback
with real user interaction.</p>
      <p>In Table 2 the precision after N retrieved documents is reported. REGIM
obtained the best results up to 20 retrieved documents; beyond that, the best
results are obtained by the run IBMA0 of KIDS. Our results are generally good
if we consider the kNN retrieval (i.e., Run11 and Run12), and are mostly better
than the only other method based solely on visual features.</p>
      <p>In Table 3 the normalized discounted cumulative gain and the mean average
precision after N retrieved documents are reported. For these measures our run
Run11 obtained better results than those from REGIM (which were the best in
the previous table), and we can claim that it is the second best run if we look at
the overall set of measures presented in this table.</p>
      <table-wrap id="tab2">
        <label>Table 2</label>
        <caption>
          <p>Precision after N retrieved documents.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Group</th><th>Run ID</th><th>P5</th><th>P10</th><th>P15</th><th>P20</th><th>P30</th><th>P100</th></tr>
          </thead>
          <tbody>
            <tr><td>KIDS</td><td>IBMA0</td><td>0.8333</td><td>0.7833</td><td>0.7222</td><td>0.6896</td><td>0.6347</td><td>0.4379</td></tr>
            <tr><td>KIDS</td><td>OBOA0</td><td>0.8000</td><td>0.7292</td><td>0.6667</td><td>0.6354</td><td>0.6083</td><td>0.4117</td></tr>
            <tr><td>KIDS</td><td>IOMA0</td><td>0.7667</td><td>0.6583</td><td>0.6222</td><td>0.6104</td><td>0.5639</td><td>0.3925</td></tr>
            <tr><td>KIDS</td><td>OBMA0</td><td>0.6500</td><td>0.6500</td><td>0.6083</td><td>0.5771</td><td>0.5611</td><td>0.3925</td></tr>
            <tr><td>REGIM</td><td>run4</td><td>0.9000</td><td>0.8375</td><td>0.7917</td><td>0.7333</td><td>0.6292</td><td>0.3992</td></tr>
            <tr><td>REGIM</td><td>run2</td><td>0.9000</td><td>0.8417</td><td>0.7917</td><td>0.7292</td><td>0.6278</td><td>0.3975</td></tr>
            <tr><td>REGIM</td><td>run1</td><td>0.9000</td><td>0.8417</td><td>0.7889</td><td>0.7292</td><td>0.6278</td><td>0.3967</td></tr>
            <tr><td>REGIM</td><td>run5</td><td>0.9000</td><td>0.8458</td><td>0.7889</td><td>0.7292</td><td>0.6278</td><td>0.3971</td></tr>
            <tr><td>REGIM</td><td>run3</td><td>0.9000</td><td>0.8458</td><td>0.7889</td><td>0.7292</td><td>0.6278</td><td>0.3975</td></tr>
            <tr><td>Image Hunter - Lpiras</td><td>Run12</td><td>0.7917</td><td>0.7667</td><td>0.7361</td><td>0.6938</td><td>0.6083</td><td>0.3417</td></tr>
            <tr><td>KIDS</td><td>IOOA4</td><td>0.6750</td><td>0.6125</td><td>0.5778</td><td>0.5354</td><td>0.4486</td><td>0.3054</td></tr>
            <tr><td>Image Hunter - Lpiras</td><td>Run11</td><td>0.8000</td><td>0.7083</td><td>0.6222</td><td>0.5646</td><td>0.4903</td><td>0.2825</td></tr>
            <tr><td>Image Hunter - Lpiras</td><td>Run32</td><td>0.6583</td><td>0.5667</td><td>0.4667</td><td>0.3958</td><td>0.2972</td><td>0.1425</td></tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>In our participation in the personal photo retrieval pilot task of ImageCLEF,
we tested the effectiveness of our previously developed tool, Image Hunter. As our
intention was to benchmark this tool on the task, the only modification we made
concerned the use of five QBEs instead of one. The results obtained are
encouraging, especially considering that they were obtained using visual features
only. Future improvements of the tool will focus on combining the meta-data
features with the visual ones, and on improving our ranking system, which is not
currently designed for scientific evaluation.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Aha</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kibler</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Albert</surname>
            ,
            <given-names>M.K.</given-names>
          </string-name>
          :
          <article-title>Instance-based learning algorithms</article-title>
          .
          <source>Machine Learning</source>
          <volume>6</volume>
          (
          <issue>1</issue>
          ),
          <volume>37</volume>
          –
          <fpage>66</fpage>
          (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Breunig</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kriegel</surname>
            ,
            <given-names>H.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>R.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sander</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          : LOF:
          <article-title>Identifying density-based local outliers</article-title>
          . In: Chen,
          <string-name>
            <given-names>W.</given-names>
            ,
            <surname>Naughton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.F.</given-names>
            ,
            <surname>Bernstein</surname>
          </string-name>
          , P.A. (eds.) SIGMOD Conference. pp.
          <volume>93</volume>
          –
          <fpage>104</fpage>
          .
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <issue>3</issue>
          .
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>S.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sikora</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Puri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Overview of the mpeg-7 standard</article-title>
          .
          <source>IEEE Trans. Circuits Syst. Video Techn.</source>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Chatzichristofis</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boutalis</surname>
            ,
            <given-names>Y.S.</given-names>
          </string-name>
          :
          <article-title>CEDD: Color and edge directivity descriptor: A compact descriptor for image indexing and retrieval</article-title>
          . In: Gasteratos,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Vincze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Tsotsos</surname>
          </string-name>
          ,
          <string-name>
            <surname>J.K</surname>
          </string-name>
          . (eds.)
          <source>ICVS. Lecture Notes in Computer Science</source>
          , vol.
          <volume>5008</volume>
          , pp.
          <volume>312</volume>
          –
          <fpage>322</fpage>
          . Springer (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Chatzichristofis</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boutalis</surname>
            ,
            <given-names>Y.S.</given-names>
          </string-name>
          : FCTH:
          <article-title>Fuzzy color and texture histogram - a low level feature for accurate image retrieval</article-title>
          .
          <source>In: Image Analysis for Multimedia Interactive Services</source>
          . pp.
          <volume>191</volume>
          –
          <fpage>196</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Cristianini</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shawe-Taylor</surname>
          </string-name>
          , J.:
          <article-title>An Introduction to Support Vector Machines and Other Kernel-based Learning Methods</article-title>
          . Cambridge University Press (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Deselaers</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keysers</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ney</surname>
          </string-name>
          , H.:
          <article-title>Features for image retrieval: an experimental comparison</article-title>
          .
          <source>Inf. Retr</source>
          .
          <volume>11</volume>
          (
          <issue>2</issue>
          ),
          <volume>77</volume>
          –
          <fpage>107</fpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Duda</surname>
            ,
            <given-names>R.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hart</surname>
            ,
            <given-names>P.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stork</surname>
            ,
            <given-names>D.G.</given-names>
          </string-name>
          :
          <article-title>Pattern Classification</article-title>
          . John Wiley and Sons, Inc., New York (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Giacinto</surname>
          </string-name>
          , G.:
          <article-title>A nearest-neighbor approach to relevance feedback in content based image retrieval</article-title>
          .
          <source>In: CIVR '07: Proceedings of the 6th ACM international conference on Image and video retrieval</source>
          . pp.
          <volume>456</volume>
          –
          <fpage>463</fpage>
          .
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Giacinto</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roli</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Bayesian relevance feedback for content-based image retrieval</article-title>
          .
          <source>Pattern Recognition</source>
          <volume>37</volume>
          (
          <issue>7</issue>
          ),
          <volume>1499</volume>
          –
          <fpage>1508</fpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chatzichristofis</surname>
          </string-name>
          , S.A.:
          <article-title>Lire: lucene image retrieval: an extensible java cbir library</article-title>
          .
          <source>In: MM '08: Proceeding of the 16th ACM international conference on Multimedia</source>
          . pp.
          <volume>1085</volume>
          –
          <fpage>1088</fpage>
          .
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Tamura</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mori</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yamawaki</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Textural features corresponding to visual perception</article-title>
          .
          <source>IEEE Trans. Systems, Man and Cybernetics</source>
          <volume>8</volume>
          (
          <issue>6</issue>
          ),
          <volume>460</volume>
          –473 (
          <year>June 1978</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Tax</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          :
          <article-title>One-class classification</article-title>
          .
          <source>Ph.D. thesis</source>
          , Delft University of Technology, Delft, The Netherlands (
          <year>June 2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Tong</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Support vector machine active learning for image retrieval</article-title>
          .
          <source>In: Proc. of the 9th ACM Intl Conf. on Multimedia</source>
          . pp.
          <volume>107</volume>
          –
          <issue>118</issue>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Tronci</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Murgia</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pili</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giacinto</surname>
          </string-name>
          , G.:
          <article-title>ImageHunter: a novel tool for relevance feedback in content based image retrieval</article-title>
          .
          <source>In: 5th Int. Workshop on New Challenges in Distributed Information Filtering and Retrieval - DART 2011</source>
          . CEUR (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16. Zellhofer, D.:
          <article-title>Overview of the personal photo retrieval pilot task at ImageCLEF 2012</article-title>
          .
          <source>Tech. rep., CLEF 2012 working notes</source>
          , Rome, Italy (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>