<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Multi-modal relevance feedback for medical image retrieval</article-title>
      </title-group>
      <contrib-group>
<contrib contrib-type="author">
          <string-name>Dimitrios Markonis</string-name>
          <email>dimitrios.markonis@hevs.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roger Schaer</string-name>
          <email>roger.schaer@hevs.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Henning Müller</string-name>
          <email>henning.mueller@hevs.ch</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>HES-SO</institution>
          ,
          <addr-line>TechnoPole 3, Sierre</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>HES-SO</institution>
          ,
          <addr-line>TechnoPole 3, Sierre</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <fpage>20</fpage>
      <lpage>23</lpage>
      <abstract>
<p>Medical image retrieval can assist physicians in finding information supporting their diagnosis. Systems that allow searching for medical images need to provide tools for quick and easy navigation and query refinement, as the time for information search is often short. Relevance feedback is a powerful tool in information retrieval. This study evaluates relevance feedback techniques with regard to the content they use. A novel relevance feedback technique that uses both the text and the visual information of the results is proposed. Results show the potential of relevance feedback techniques in medical image retrieval and the superiority of the proposed algorithm over commonly used approaches. Future steps include integrating semantics into relevance feedback techniques to benefit from the structured knowledge of ontologies, and experimenting with the fusion of text and visual information.</p>
      </abstract>
      <kwd-group>
        <kwd>relevance feedback</kwd>
        <kwd>content-based image retrieval</kwd>
        <kwd>medical image retrieval</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
<p>Searching for images is a daily task for many medical
professionals, especially in image–oriented fields such as
radiology. However, the huge amount of visual data in hospitals
and the medical literature is not always easily accessible, and
physicians generally have little time for information search
as they are charged with many tasks.</p>
      <p>
        Therefore, medical image retrieval systems need to return
information adjusted to the knowledge level and expertise of
the user in a quick and precise fashion. A well known
technique trying to improve search results by user interaction is
relevance feedback [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Relevance feedback allows the user
to mark results returned in a previous search step as relevant
or irrelevant to refine the initial query. The concept behind
relevance feedback is that although users may have difficulty
formulating a precise query for a specific task, they
generally see quickly whether a returned result is relevant to their
information need or not. This technique found use in
image retrieval particularly with the emergence of content–based
image retrieval (CBIR) systems [
        <xref ref-type="bibr" rid="ref18 ref19 ref20">18, 19, 20</xref>
        ]. Following the
CBIR mentality, the visual content of the marked results is
used to refine the initial image query. With the result
images represented as a grid of thumbnails, relevance feedback
can be applied quickly to speed up the search iterations and
refine results. Recent user–tests with radiologists on a
medical image search system also showed that this method is
intuitive and straightforward to learn [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Depending on whether the user manually provides the
feedback to the system (e.g. by marking results) or the
system obtains this information automatically (e.g. by log
analysis) relevance feedback can be categorized as explicit or
implicit. Moreover, the information obtained by relevance
feedback can be used to affect the general behaviour of the
system (long–term learning). In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] a market basket
analysis algorithm is applied to image retrieval for long–term
learning. A recent review of short–term and long–term learning
relevance feedback techniques in CBIR can be found in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
An extensive survey of relevance feedback in text–based
retrieval systems is presented in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and for CBIR in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>
        In the medical informatics field, [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] applies CBIR with
relevance feedback on mammography retrieval. In [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], an
image retrieval framework that uses relevance feedback and
support vector machines to compute the refined queries is
evaluated on a dataset of 5000 medical images.
      </p>
      <p>In this paper we evaluate different explicit, short–term
relevance feedback techniques using visual content or text
for medical image retrieval. We propose a technique that
combines visual and text–based relevance feedback and show
that it achieves performance competitive with state–of–the–art
approaches.</p>
    </sec>
    <sec id="sec-2">
      <title>METHODS</title>
    </sec>
    <sec id="sec-3">
      <title>Rocchio algorithm</title>
      <p>qm = α qo + β (1/|Dr|) Σ_{dj ∈ Dr} dj − γ (1/|Dnr|) Σ_{dj ∈ Dnr} dj   (1)</p>
      <p>where qm is the modified query,
qo is the original query,
Dr is the set of relevant images,
Dnr is the set of non–relevant images and
α, β and γ are weights.</p>
      <p>Typical values for the weights are α = 1, β = 0.8 and
γ = 0.2. Rocchio’s algorithm is typically used in vector
models and also for CBIR. Intuitively, the original query
vector is moved towards the relevant vectors and away from
the irrelevant ones. Weighting the positive and negative
parts separately avoids a known problem of CBIR: when there
is more negative than positive feedback, many relevant
images can disappear from the result set.</p>
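<p>As an illustration, Eq. (1) can be sketched in a few lines of Python; the function name and the plain-list vector representation are ours, not part of the evaluated system.</p>

```python
def rocchio(q_orig, relevant, non_relevant, alpha=1.0, beta=0.8, gamma=0.2):
    """Rocchio update (Eq. 1): move the query vector towards the mean of
    the relevant vectors and away from the mean of the non-relevant ones."""
    q_mod = [alpha * x for x in q_orig]
    for d in relevant:
        for i, x in enumerate(d):
            q_mod[i] += beta * x / len(relevant)
    for d in non_relevant:
        for i, x in enumerate(d):
            q_mod[i] -= gamma * x / len(non_relevant)
    return q_mod
```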
    </sec>
    <sec id="sec-4">
      <title>Late fusion</title>
      <p>
        Another technique that showed potential in image retrieval [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]
is late fusion. Late fusion [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is used in information retrieval
to combine result lists. It can be applied for fusing multiple
features, multiple queries and in multi–modal techniques.
The concept behind this method is to merge the result lists
into a single list while boosting common occurrences using
a fusion rule.
      </p>
      <p>
        For example, the fusion rule of the score–based late fusion
method CombMNZ [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] is defined as:
      </p>
      <p>ScombMNZ(i) = F(i) ∗ ScombSUM(i)
where F(i) is the number of times image i is present in the
retrieved lists with a non–zero score. CombSUM is given by</p>
      <p>ScombSUM(i) = Σ_{j=1..N} Sj(i)
where Sj(i) is the score assigned to image i in retrieved list
j and N is the number of retrieved lists.</p>
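<p>A minimal sketch of the two fusion rules on score dictionaries (image id → score); the data layout and function names are illustrative, not taken from the evaluated system.</p>

```python
def comb_sum(result_lists):
    """CombSUM: sum an image's scores over all retrieved lists."""
    fused = {}
    for scores in result_lists:
        for image_id, s in scores.items():
            fused[image_id] = fused.get(image_id, 0.0) + s
    return fused

def comb_mnz(result_lists):
    """CombMNZ: CombSUM multiplied by the number of lists in which
    the image appears with a non-zero score."""
    fused = comb_sum(result_lists)
    freq = {i: sum(1 for scores in result_lists if scores.get(i, 0) != 0)
            for i in fused}
    return {i: freq[i] * fused[i] for i in fused}
```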
    </sec>
    <sec id="sec-5">
      <title>Multi–modal relevance feedback</title>
      <p>Most of the techniques use vectors either from the text
or the visual models. However, it has been shown that
approaches that use both text and visual information can
outperform single–modal ones in image retrieval. We propose
the use of multi–modal information for relevance feedback
to enhance the retrieval performance. This is, to the extent
of our knowledge, the first time that such a technique is
proposed in image retrieval. As late fusion is applied on result
lists, it is straightforward to use for combining results from
visual and text queries.</p>
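<p>One multi-modal feedback iteration can be sketched as below; the two search callables are hypothetical placeholders for the text and visual retrieval back-ends, and the CombSUM rule stands in for any late-fusion rule.</p>

```python
def fuse_comb_sum(result_lists):
    # score-based late fusion: sum each image's scores across the lists
    fused = {}
    for scores in result_lists:
        for image_id, s in scores.items():
            fused[image_id] = fused.get(image_id, 0.0) + s
    return fused

def multimodal_feedback(search_text, search_visual, relevant_images):
    """One multi-modal relevance feedback step (sketch): refine the text
    and the visual query with the marked relevant images, then late-fuse
    the two ranked lists into a single result list."""
    text_scores = search_text(relevant_images)      # placeholder back-end
    visual_scores = search_visual(relevant_images)  # placeholder back-end
    ranked = fuse_comb_sum([text_scores, visual_scores])
    return sorted(ranked, key=ranked.get, reverse=True)
```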
    </sec>
    <sec id="sec-6">
      <title>Experimental setup</title>
      <p>To evaluate the relevance feedback techniques, the
following experimental setup was used: the n search
iterations are initiated with a text query in iteration 0. The
relevant results among the top k results of iteration i were
used in the relevance feedback formula of iteration i + 1,
for i = 0 ... n − 2.</p>
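<p>The iteration protocol can be sketched as follows; search, refine and the set of relevant ids are hypothetical stand-ins for the retrieval engine, the feedback formula and the relevance judgements.</p>

```python
def run_iterations(text_query, search, refine, relevant_ids, n=3, k=20):
    """Simulate n search iterations: iteration 0 is a plain text query;
    the relevant images among the top-k results of iteration i feed the
    feedback formula for iteration i + 1 (i = 0 .. n-2)."""
    query = text_query
    results = search(query)
    for _ in range(n - 1):
        feedback = [img for img in results[:k] if img in relevant_ids]
        query = refine(query, feedback)
        results = search(query)
    return results
```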
      <p>
        The image dataset, topics and ground truth of
ImageCLEF 2012 medical image retrieval task [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] were used in
this evaluation. The dataset contains more than 300’000
images from the medical open access literature.
      </p>
      <p>
        The image captions were used by the text–based runs
and indexed with the Lucene text search engine. The vector
space model was used along with tokenization, stopword
removal, stemming and term frequency–inverse document
frequency (TF–IDF) weighting. The bag–of–visual–words model
described in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and the bag–of–colors model appearing in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] were used for the visual runs.
      </p>
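<p>The text pipeline can be approximated in pure Python as a rough sketch (not Lucene itself); the tiny stopword list and the caption corpus are illustrative, and stemming is omitted for brevity.</p>

```python
import math
from collections import Counter

STOPWORDS = {"the", "a", "of", "in"}  # illustrative, not Lucene's list

def tokenize(text):
    # lowercase, split on whitespace, drop stopwords
    return [t for t in text.lower().split() if t not in STOPWORDS]

def tf_idf_vectors(captions):
    """Weight each caption's terms by TF-IDF over the caption corpus."""
    docs = [Counter(tokenize(c)) for c in captions]
    n = len(docs)
    df = Counter(term for doc in docs for term in doc)
    return [{term: tf * math.log(n / df[term]) for term, tf in doc.items()}
            for doc in docs]
```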
    </sec>
    <sec id="sec-7">
      <title>DISCUSSION</title>
      <p>All of the evaluated techniques improve retrieval after the
initial search iteration. This demonstrates the potential of
relevance feedback for refining medical image search queries.</p>
      <p>Relevance feedback using only visual appearance models,
even though it improved retrieval performance after the
first iteration, performed worse than the text–based runs in
most cases. Visual features still suffer from the semantic
gap between their expressiveness and human
interpretation. Still, this shows their usefulness in
image datasets where little or no text meta–data are
available. Moreover, when combined with the text information
in the proposed method, they improve on the text–only
baseline.</p>
      <p>The proposed multi–modal runs provide the best results
in all the cases except for case k = 5. Surprisingly, the
visual runs perform slightly better than the text and the
multi–modal approaches for this case. However, assuming
independent and normally distributed average precision
values, the significance tests show that the difference is not
statistically significant.</p>
      <p>We consider the case k = 20 as the most realistic scenario
since users do not often inspect more than 2 pages of
results. Especially for grid–like result interface views, where
each page can contain 20 to 50 results, we consider k = 20
more realistic than k = 5. In this case the proposed
methods achieve the best performance with 0.2606 and 0.2635
respectively. Again, the significance tests do not find any
significant difference between the three best approaches.
However, applying different fusion rules for combining
visual and text information (such as linear–weighting) could
further improve the results of the mixed approaches.</p>
      <p>It can be noted that as k increases, the performance
improvement also increases, highlighting the added value of
relevance feedback. Larger values of k were not explored as
these scenarios were judged unrealistic.</p>
      <p>
        In the visual runs, using Rocchio to combine the visual
queries performs worse than late fusion. This is in
accordance with the findings in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The reason behind this
could be that the large visual diversity of relevant images in
medicine and the curse of dimensionality cause the modified
vector to behave as an outlier in the high dimensional visual
feature space. In the mixed runs the difference between
the two methods is not statistically significant with Rocchio
performing slightly better than the late fusion.
      </p>
      <p>
        Irrelevant results were ignored, as they often have little or
no impact on the retrieval performance [
        <xref ref-type="bibr" rid="ref10 ref16">10, 16</xref>
        ]. More
importantly, the ground truth of the dataset used contains a
much larger portion of annotated irrelevant results than
relevant ones. Using all of them was considered to simulate an
unrealistic scenario, as users do not usually mark many
results as negative examples. Having too many negative
examples could also cause the modified vector to behave as an
outlier. Preliminary results confirmed this hypothesis:
using negative results for relevance feedback can
decrease performance after the first iteration.
      </p>
      <p>
        It should be noted that this is an automated relevance
feedback experiment with positive–only feedback and that in
interactive relevance feedback situations the retrieval
performance is expected to be even better. A larger number
of steps could be investigated, but this might be unrealistic,
given the fact that physicians have little time and stop after
a few minutes of search [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Often users will only test a few
steps of relevance feedback at the most.
      </p>
    </sec>
    <sec id="sec-8">
      <title>CONCLUSIONS</title>
      <p>This paper proposes the use of multi–modal information
when applying relevance feedback to medical image retrieval.
An experiment was set up to simulate the relevance feedback
of a user on a number of medicine–related topics from
ImageCLEF 2012.</p>
      <p>In general, all the techniques evaluated in this study
improve the performance, which shows the added value of
relevance feedback. Text–based relevance feedback showed
consistently good results. Visual–based techniques showed
competitive performance for small shortlist sizes,
underperforming in the rest of the cases. The proposed multi–modal
approaches showed promising results slightly outperforming
the text–based one but without statistical significance.</p>
      <p>More fusion techniques are going to be evaluated in the
future. Comparison to manual query refinement by users is
considered in future plans, to assess relevance feedback as a
concept in medical image retrieval. The addition of semantic
search is also of interest, to take advantage of the structured
knowledge of the medical ontologies such as RadLex
(Radiology Lexicon) and MeSH (Medical Subject Headings).</p>
    </sec>
    <sec id="sec-9">
      <title>ACKNOWLEDGEMENTS</title>
      <p>This work was supported by the EU 7th Framework
Program in the context of the Khresmoi project (grant 257528).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.-C.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.-J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Gwo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Wei</surname>
          </string-name>
          .
          <article-title>Mammogram retrieval: Image selection strategy of relevance feedback for locating similar lesions</article-title>
          .
          <source>International Journal of Digital Library Systems (IJDLS)</source>
          ,
          <volume>2</volume>
          (
          <issue>4</issue>
          ):
          <fpage>45</fpage>
          -
          <lpage>53</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Depeursinge</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          .
          <article-title>Fusion techniques for combining textual and visual information retrieval</article-title>
          . In H. Müller, P. Clough,
          <string-name>
            <given-names>T.</given-names>
            <surname>Deselaers</surname>
          </string-name>
          , and B. Caputo, editors,
          <source>ImageCLEF</source>
          , volume
          <volume>32</volume>
          of The Springer International Series On Information Retrieval, pages
          <fpage>95</fpage>
          -
          <lpage>114</lpage>
          . Springer Berlin Heidelberg,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Markonis</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Eggel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          .
          <article-title>The medGIFT group in ImageCLEFmed 2012</article-title>
          .
          <source>In Working Notes of CLEF 2012</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Markonis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          .
          <article-title>Bag of colors for biomedical document image classification</article-title>
          . In H. Greenspan and H. Müller, editors,
          <source>Medical Content-based Retrieval for Clinical Decision Support, MCBR-CDS 2012</source>
          , pages
          <fpage>110</fpage>
          -
          <lpage>121</lpage>
          . Lecture Notes in
          <source>Computer Sciences (LNCS)</source>
          , Oct.
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Markonis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schaer</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Eggel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          .
          <article-title>The medGIFT group in ImageCLEFmed 2013</article-title>
          .
          <source>In Working Notes of CLEF 2013 (Cross Language Evaluation Forum)</source>
          ,
          <year>September 2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          and
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Allinson</surname>
          </string-name>
          .
          <article-title>Relevance feedback in content-based image retrieval: a survey</article-title>
          .
          <source>In Handbook on Neural Information Processing</source>
          , pages
          <fpage>433</fpage>
          -
          <lpage>469</lpage>
          . Springer,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Markonis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Baroz</surname>
          </string-name>
          , R. L. Ruiz de Castaneda,
          <string-name>
            <given-names>C.</given-names>
            <surname>Boyer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          .
          <article-title>User tests for assessing a medical image retrieval system: A pilot study</article-title>
          .
          <source>In MEDINFO 2013</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Markonis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Holzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dungs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vargas</surname>
          </string-name>
          , G. Langs,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kriewel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          .
          <article-title>A survey on visual information search behavior and requirements of radiologists</article-title>
          .
          <source>Methods of Information in Medicine</source>
          ,
          <volume>51</volume>
          (
          <issue>6</issue>
          ):
          <fpage>539</fpage>
          -
          <lpage>548</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kalpathy-Cramer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Demner-Fushman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Antani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Eggel</surname>
          </string-name>
          .
          <article-title>Overview of the ImageCLEF 2012 medical image retrieval and classification tasks</article-title>
          .
          <source>In Working Notes of CLEF 2012 (Cross Language Evaluation Forum)</source>
          ,
          <year>September 2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          , W. Müller,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Squire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marchand-Maillet</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Pun</surname>
          </string-name>
          .
          <article-title>Strategies for positive and negative relevance feedback in image retrieval</article-title>
          .
          <source>Technical Report 00.01</source>
          , Computer Vision Group, Computing Centre, University of Geneva, rue Général Dufour
          <volume>24</volume>
          , CH-1211 Genève, Switzerland, Jan.
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Squire</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Pun</surname>
          </string-name>
          .
          <article-title>Learning from user behavior in image retrieval: Application of the market basket analysis</article-title>
          .
          <source>International Journal of Computer Vision</source>
          ,
          <volume>56</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>65</fpage>
          -
          <lpage>77</lpage>
          ,
          <year>2004</year>
          .
          <article-title>(Special Issue on Content-Based Image Retrieval)</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Rahman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Desai</surname>
          </string-name>
          .
          <article-title>A framework for medical image retrieval using machine learning and statistical similarity matching techniques with relevance feedback</article-title>
          .
          <source>Information Technology in Biomedicine, IEEE Transactions on</source>
          ,
          <volume>11</volume>
          (
          <issue>1</issue>
          ):
          <fpage>58</fpage>
          -
          <lpage>69</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Rocchio</surname>
          </string-name>
          .
          <article-title>Relevance feedback in information retrieval</article-title>
          .
          <source>In The SMART Retrieval System, Experiments in Automatic Document Processing</source>
          , pages
          <fpage>313</fpage>
          -
          <lpage>323</lpage>
          . Prentice Hall, Englewood Cliffs, New Jersey, USA,
          <year>1971</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Huang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Mehrotra</surname>
          </string-name>
          .
          <article-title>Relevance feedback techniques in interactive content-based image retrieval</article-title>
          . In I. K. Sethi and R. C. Jain, editors,
          <source>Storage and Retrieval for Image and Video Databases VI</source>
          , volume
          <volume>3312</volume>
          <source>of SPIEProc</source>
          , pages
          <fpage>25</fpage>
          -
          <lpage>36</lpage>
          , Dec.
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>I.</given-names>
            <surname>Ruthven</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Lalmas</surname>
          </string-name>
          .
          <article-title>A survey on the use of relevance feedback for information access systems</article-title>
          .
          <source>The Knowledge Engineering Review</source>
          ,
          <volume>18</volume>
          (
          <issue>02</issue>
          ):
          <fpage>95</fpage>
          -
          <lpage>145</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>G.</given-names>
            <surname>Salton</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Buckley</surname>
          </string-name>
          .
          <article-title>Improving retrieval performance by relevance feedback</article-title>
          .
          <source>Readings in information retrieval</source>
          ,
          <volume>24</volume>
          :
          <fpage>5</fpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Shaw</surname>
          </string-name>
          and
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Fox</surname>
          </string-name>
          .
          <article-title>Combination of multiple searches</article-title>
          .
          <source>In TREC-2: The Second Text REtrieval Conference</source>
          , pages
          <fpage>243</fpage>
          -
          <lpage>252</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>D. M. Squire</surname>
            , W. Müller, H. Müller, and
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Pun</surname>
          </string-name>
          .
          <article-title>Content-based query of image databases: inspirations from text retrieval</article-title>
          .
          <source>Pattern Recognition Letters (Selected Papers from The 11th Scandinavian Conference on Image Analysis SCIA '99)</source>
          ,
          <volume>21</volume>
          (
          <fpage>13</fpage>
          -14):
          <fpage>1193</fpage>
          -
          <lpage>1198</lpage>
          ,
          <year>2000</year>
          .
          <string-name>
            <given-names>B.K.</given-names>
            <surname>Ersboll</surname>
          </string-name>
          , P. Johansen, Eds.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>L.</given-names>
            <surname>Taycher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Cascia</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Sclaroff</surname>
          </string-name>
          .
          <article-title>Image digestion and relevance feedback in the ImageRover WWW search engine</article-title>
          . pages
          <fpage>85</fpage>
          -
          <lpage>94</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Wood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. W.</given-names>
            <surname>Campbell</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B. T.</given-names>
            <surname>Thomas</surname>
          </string-name>
          .
          <article-title>Iterative refinement by relevance feedback in content-based digital image retrieval</article-title>
          . pages
          <fpage>13</fpage>
          -
          <lpage>20</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>