<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Using medGIFT and easyIR for the ImageCLEF 2005 evaluation tasks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Henning Muller</string-name>
          <email>henning.mueller@sim.hcuge.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antoine Geissbuhler</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Johan Marty</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christian Lovis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Patrick Ruch</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>24 Rue Micheli-du-Crest</institution>
          ,
          <addr-line>CH-1211 Geneva 4</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University and University Hospitals of Geneva, Service of Medical Informatics</institution>
        </aff>
      </contrib-group>
      <abstract />
      <kwd-group>
        <kwd>Image retrieval</kwd>
        <kwd>evaluation</kwd>
        <kwd>visual retrieval</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Abstract</title>
      <p>This article describes the use of the medGIFT retrieval system for three of the four
ImageCLEF 2005 retrieval tasks. We participated in the ad-hoc retrieval task, which
was similar to the 2004 ad-hoc task; the new medical retrieval task, which required much
more semantic analysis of the textual annotation than in 2004; and the new automatic
annotation task. The techniques used in 2005 are fairly similar to the 2004 techniques
for the two retrieval tasks. For the automatic annotation task, scripts were optimised
to allow classification with a retrieval system. Unfortunately, an error in the text
retrieval system corrupted part of our runs and led to relatively bad results for all runs
including text. This error should be fixed before the final proceedings are printed, so
correct figures are expected there.</p>
      <p>All retrieval results rely heavily on two retrieval systems: for visual retrieval we use
the GNU Image Finding Tool (GIFT), and for textual retrieval the easyIR retrieval
system. For the ad-hoc retrieval task, two runs were submitted with different configurations
of grey levels and Gabor filters. No textual retrieval was attempted, only
purely visual retrieval, resulting in generally lower scores than text retrieval. For
the medical retrieval task, visual retrieval was performed with several configurations
of Gabor filters and grey-level and color quantisations, as well as several variations of
combining text and visual features. Unfortunately, all these runs are broken, as the
textual retrieval results are almost random. Due to a lack of resources no relevance
feedback runs were submitted, which is where medGIFT performed best in 2004. For
the classification task, a retrieval with the image to classify was performed, and the first
N = 1, 5, 10 resulting images were used to calculate scores for the classes by simply
adding up the scores of the N images for each class. No machine learning was performed
on the data of the known classes, so the results are surprisingly good; they were only
topped by systems with sophisticated learning strategies optimised for the data set
used.</p>
    </sec>
    <sec id="sec-2">
      <title>Categories and Subject Descriptors</title>
      <p>H.3 [Information Storage and Retrieval]: H.3.1 Content Analysis and Indexing; H.3.3 Information Search and Retrieval; H.3.4 Systems and Software; H.3.7 Digital Libraries.</p>
      <sec id="sec-2-1">
        <title>Introduction</title>
        <p>
          Image retrieval is an increasingly important domain in the field of information retrieval. Compared
to text retrieval, little is known about how to search for images, although it has been an extremely
active domain both in computer vision and in information retrieval [
          <xref ref-type="bibr" rid="ref8 ref12 ref13 ref16">8, 12, 13, 16</xref>
          ].
Benchmarks such as ImageCLEF [
          <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
          ] allow us to actually evaluate our algorithms against
other systems and give us insight into the techniques that perform well and those that do
not perform as well. Thus, new developments can be directed towards these goals, and techniques
of other well-performing systems can be adapted to our needs.
        </p>
        <p>In 2005, the ad-hoc retrieval task created topics that were better adapted to visual systems,
using the same database as in 2004. The topics made available contained three images each, and thus more
visual information. We submitted two configurations of our system to this task, using visual
information only.</p>
        <p>
          The medical retrieval task was performed on a much larger database than in 2004, containing
a total of more than 50,000 images [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. The annotation was also more varied, ranging from a
few words in a very structured form to completely unstructured paragraphs. This made it hard
to preprocess any of the information, so finally only free-text retrieval, including all XML tags,
was used for our results submission. The tasks were also much harder and mainly semantic query
tasks, which made retrieval by visual means more difficult. Due to a lack of resources we
could only submit partial results that did not include any relevance feedback or automatic query
expansion.
        </p>
        <p>
          The automatic annotation task was very interesting and challenging at the same time [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. We
did not take into account any of the training data and simply used the retrieval system GIFT
and a nearest-neighbour technique to classify the images. Still, the results were surprisingly good
(6th best submission, 2nd best group), and when taking into account the learning data using an
approach as described in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], these results are expected to improve.
        </p>
        <p>ImageCLEF gave us the opportunity to compare our system with other techniques, which is
invaluable and will provide us with directions for future research.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Basic Technologies Used</title>
        <p>For our ImageCLEF participation, we aim at combining content-based retrieval of images with
cross-language retrieval applied to the textual annotation of the images. Based on the results
from the previous year (2004), we used parameters that were expected to lead to good results, plus some
new combinations.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Image Retrieval</title>
      <p>
        The technology used for the content-based retrieval of images is mainly taken from the Viper
project of the University of Geneva (http://viper.unige.ch/). Much information about this system is available [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. The outcome
of the Viper project is the GNU Image Finding Tool, GIFT (http://www.gnu.org/software/gift/). This software tool is open source
and can consequently also be used by other participants of ImageCLEF. A ranked list of visually
similar images for every query topic was made available to participants and will serve as a baseline
to measure the quality of submissions. Demonstration versions of GIFT with a web-accessible interface
were also made available, so that participants could query visually and with relevance feedback in an
interactive way, as not everybody can be expected to install an entire Linux tool to use GIFT for such
a benchmark. The feature sets used by GIFT are:
      </p>
      <list list-type="bullet">
        <list-item><p>local color features at different scales, obtained by partitioning the images successively into four equally sized regions (four times) and taking the mode color of each region as a descriptor;</p></list-item>
        <list-item><p>global color features in the form of a color histogram, compared by a simple histogram intersection;</p></list-item>
        <list-item><p>local texture features, obtained by partitioning the image and applying Gabor filters in various scales and directions; Gabor responses are quantised into 10 strengths;</p></list-item>
        <list-item><p>global texture features represented as a simple histogram of responses of the local Gabor filters in various directions and scales.</p></list-item>
      </list>
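      <p>As an illustration of the first feature set, the following is a minimal sketch of the successive four-way partitioning with mode colors, assuming an image already quantised to palette indices; the function names and parameters are illustrative, not GIFT's actual code.</p>
      <preformat>
# A minimal sketch, not GIFT's implementation: the image is assumed to be
# a 2D array of quantised palette indices (GIFT quantises HSV first).
import numpy as np

def mode_color(block):
    """Most frequent palette index in a block."""
    values, counts = np.unique(block, return_counts=True)
    return int(values[np.argmax(counts)])

def local_color_features(img, depth=4):
    """Partition the image into four equal regions, successively, `depth`
    times, and record (scale, row, col, mode color) for each region."""
    features = []
    for level in range(1, depth + 1):
        n = 2 ** level  # an n x n grid corresponds to 4**level regions
        h, w = img.shape[0] // n, img.shape[1] // n
        for r in range(n):
            for c in range(n):
                block = img[r * h:(r + 1) * h, c * w:(c + 1) * w]
                features.append((level, r, c, mode_color(block)))
    return features

img = np.random.randint(0, 64, size=(256, 256))  # toy quantised image
print(len(local_color_features(img)))            # 4 + 16 + 64 + 256 = 340
      </preformat>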
      <p>
        A particularity of GIFT is that it uses many techniques well-known from text retrieval. Visual
features are quantised, and the feature space is very similar to the distribution of words in texts,
corresponding roughly to a Zipf distribution. A simple tf/idf weighting is used, and the query
weights are normalised by the results of the query itself. The histogram features are compared
based on a histogram intersection [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
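      <p>A minimal sketch of the histogram intersection used for the global features, on toy normalised histograms (the values are illustrative):</p>
      <preformat>
# Histogram intersection of two equal-length, normalised histograms:
# the sum of the bin-wise minima; identical histograms score 1.0.
def histogram_intersection(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))

q = [0.50, 0.25, 0.25]   # query image histogram (toy values)
d = [0.40, 0.40, 0.20]   # database image histogram
print(histogram_intersection(q, d))  # 0.40 + 0.25 + 0.20 = 0.85
      </preformat>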
      <p>
        The medical version of GIFT is called medGIFT (http://www.sim.hcuge.ch/medgift/) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. It is also available as open source,
and the adaptations concern mainly the visual features and the user interface, which shows the diagnosis on
screen and is linked with a radiologic teaching file, so the MD can not only browse images but also
get the textual data and other images of the same case. Grey levels play a more important role
for medical images, and their number is raised, especially for relevance feedback (RF) queries.
The number of Gabor filter responses also has an impact on the performance, and these are
changed with respect to directions and scales. We used in total 4, 8 and 16 grey levels, and for the
Gabor filters we used 4 and 8 directions. Other techniques in medGIFT, such as a pre-treatment
of images [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], were not used for this competition due to a lack of resources.
      </p>
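      <p>To make the texture features concrete, here is a minimal sketch of a Gabor filter bank with a configurable number of directions, in the spirit of the 4- and 8-direction settings above; the wavelengths, sigma and kernel size are illustrative assumptions, not medGIFT's actual values.</p>
      <preformat>
# A sketch of a Gabor filter bank; convolving an image with each kernel and
# quantising the response strengths yields local texture features.
import numpy as np

def gabor_kernel(theta, wavelength, sigma=2.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_bank(directions=8, wavelengths=(2.0, 4.0, 8.0)):
    return [gabor_kernel(k * np.pi / directions, wl)
            for k in range(directions) for wl in wavelengths]

bank = gabor_bank(directions=8)
print(len(bank))  # 8 directions x 3 scales = 24 filters
      </preformat>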
    </sec>
    <sec id="sec-4">
      <title>Textual Search</title>
      <p>The basic granularity of the Casimage and MIR collections is the case. A case gathers a textual
report and a set of images. For the PathoPic and Peir databases, annotation exists for every image.
The queries contain one to three images and text in three languages. We used all languages as
a single query and also indexed all documents in a single index. Case-based annotation was
expanded to all images of the case after the retrieval step, so for us the final unit of retrieval is
the image.</p>
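      <p>A minimal sketch of this expansion step, with hypothetical case and image identifiers:</p>
      <preformat>
# Expand case-level text-retrieval scores to the images of each case, so the
# final unit of retrieval is the image; the dictionaries are toy assumptions.
case_images = {"case-17": ["img-17-1", "img-17-2"], "case-23": ["img-23-1"]}

def expand_to_images(case_scores):
    image_scores = {}
    for case_id, score in case_scores.items():
        for image_id in case_images.get(case_id, []):
            image_scores[image_id] = score
    return image_scores

print(expand_to_images({"case-17": 0.83, "case-23": 0.41}))
      </preformat>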
      <sec id="sec-4-1">
        <title>Indexes</title>
        <p>
          Textual experiments were conducted with the easyIR engine
(http://lithwww.epfl.ch/~ruch/softs/softs.html). As a single report can contain
parts written in several languages, it would have been necessary to detect the boundaries of
each language segment. Ideally, French, German and English textual segments would be stored in
different indexes. Each index could have been translated into the other languages using a general
translation method, or, more appropriately, a domain-adapted method [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. However, such a
complex architecture would require storing different segments of the same document in separate
indexes. Considering the lack of data to tune the system, we decided to index all collections in
a single index using an English stemmer. For simplicity, the XML tags were also indexed
and not treated separately.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Weighting Schema</title>
        <p>We chose a generally good weighting schema of the term frequency / inverse document frequency
family. Following the weighting conventions of the SMART engine (cf. Table 1), we used atc-ltn
parameters, with α = β = 0.5 in the augmented term frequency.</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption><p>SMART weighting letters.</p></caption>
          <table>
            <thead>
              <tr><th>Letter</th><th>Code</th><th>Weight</th></tr>
            </thead>
            <tbody>
              <tr><td>First: term frequency f(tf)</td><td>n (natural)</td><td>tf</td></tr>
              <tr><td /><td>l (logarithmic)</td><td>1 + log(tf)</td></tr>
              <tr><td /><td>a (augmented)</td><td>α + β (tf / max(tf)), where α + β = 1 and 0 &lt; α, β &lt; 1</td></tr>
              <tr><td>Second: inverse document frequency f(1/df)</td><td>n (no)</td><td>1</td></tr>
              <tr><td /><td>t (full)</td><td>log(N / df)</td></tr>
              <tr><td>Third: normalisation f(length)</td><td>n (no)</td><td>1</td></tr>
              <tr><td /><td>c (cosine)</td><td>1 / sqrt(Σ_i w_{i,j}²) for a document j and 1 / sqrt(Σ_j w_{j,q}²) for a query q</td></tr>
            </tbody>
          </table>
        </table-wrap>
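        <p>A minimal sketch of the atc (document) and ltn (query) weights from Table 1, with α = β = 0.5, on a toy two-document collection (all names and data here are illustrative assumptions):</p>
        <preformat>
# atc: augmented tf, full idf, cosine normalisation (documents).
# ltn: logarithmic tf, full idf, no normalisation (queries).
import math
from collections import Counter

docs = {"d1": "lung ct embolism ct", "d2": "hand xray fracture"}
N = len(docs)
df = Counter(t for text in docs.values() for t in set(text.split()))

def atc_weights(text, alpha=0.5, beta=0.5):
    tf = Counter(text.split())
    max_tf = max(tf.values())
    w = {t: (alpha + beta * f / max_tf) * math.log10(N / df[t])
         for t, f in tf.items()}
    norm = math.sqrt(sum(v * v for v in w.values()))
    return {t: v / norm for t, v in w.items()} if norm else w

def ltn_weights(text):
    tf = Counter(text.split())
    return {t: (1 + math.log10(f)) * math.log10(N / df[t])
            for t, f in tf.items() if t in df}

print(atc_weights(docs["d1"]))
print(ltn_weights("ct embolism"))
        </preformat>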
        <p>
          Combinations of visual and textual features for retrieval are rather scarce in the literature [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], so
many of the mechanisms and the fine-tuning of the combinations will still need more work, especially
when the optimisation is based on the actual query. For the visual query we used all images that
are present for a query, including one query containing negative feedback. For the text part, the
text of all three languages was used as a combined query together with the combined index that
includes the documents in all languages. Result lists of the first 1000 documents were taken into
account for both the visual and the textual search. Both result lists were normalised to deliver
results within the range [0, 1]. The visual result is normalised by the result of the query itself,
whereas the text was normalised by the document with the highest score. This leads to visual
results that are usually slightly lower than the textual results.
        </p>
        <p>To combine the two lists, two different methods were chosen. The first one simply combines
the lists with different percentages for the visual and textual results (textual = 50, 33, 25, 10%). In a
second form of combination, the list of the first 1000 visual results was taken, and then all those
that were also in the first 200 textual documents were multiplied with N times the value of the textual
results.</p>
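        <p>A minimal sketch of both combination methods on toy, already-normalised score lists (the dictionaries and weights are illustrative assumptions):</p>
        <preformat>
def linear_mix(visual, textual, text_weight=0.10):
    """Method 1: weighted sum of normalised visual and textual scores."""
    keys = set(visual) | set(textual)
    return {k: (1 - text_weight) * visual.get(k, 0.0)
               + text_weight * textual.get(k, 0.0) for k in keys}

def boost_by_text(visual_top1000, textual_top200, n=2):
    """Method 2: keep the visual list and multiply the scores of images that
    also appear in the first 200 textual results by n times their text score."""
    return {k: v * n * textual_top200[k] if k in textual_top200 else v
            for k, v in visual_top1000.items()}

visual = {"img1": 0.9, "img2": 0.4}
textual = {"img1": 0.7, "img3": 0.6}
print(linear_mix(visual, textual, text_weight=0.10))
print(boost_by_text(visual, {"img1": 0.7}, n=2))
        </preformat>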
        <sec id="sec-4-2-1">
          <title>The ad hoc retrieval task</title>
          <p>For the ad-hoc retrieval task we submitted results using fairly similar techniques to those of
2004. The 2005 topics were actually better adapted to the possibilities of visual retrieval systems,
as more visual attributes were taken into account for the topic creation. Still, textual retrieval
remains very necessary for good results. It is not so much a problem of the queries but rather a
problem of the database, which contains mostly grey- or brown-scale images of varying quality, where
automatic treatment such as color indexing is difficult. This should change in 2006 with a new
database using mostly consumer pictures of vacation destinations. Such a database could be better
analysed automatically using the available color information.</p>
          <p>We used the GIFT system in two configurations: first, the normal GIFT engine with 4 grey
levels and the full HSV space, using the Gabor filter responses in four directions and at three scales.
The second configuration took into account 8 grey levels, as the 2004 results for 16 grey levels were
actually much worse than expected. We also raised the number of directions of the Gabor filters
to 8 instead of four. The results of the basic GIFT system were made available to all participants
and used by several. Surprisingly, the results of the basic GIFT system remain the best in the
test with a MAP of 0.0829, being at the same time the best purely visual system participating.
The system with eight grey levels and eight directions for the Gabor filters performed slightly
worse, reaching a MAP of 0.0819. Other visual systems performed slightly lower. The best
mono-lingual text systems performed at a MAP of 0.41. Several text retrieval systems performed
worse than the visual system for a variety of languages.</p>
        </sec>
        <sec id="sec-4-2-2">
          <title>The automatic annotation task</title>
          <p>We were new to the automatic annotation task, like almost everyone else, and had so far mainly
used our system for retrieval. Due to a lack of resources, no optimisation using the available training
data was performed. Still, the tf/idf weighting automatically weights rare features higher, which
leads to a discriminative analysis.</p>
          <p>As our technique, we performed a query with each of the 1000 images to classify and took into
account the first N = 1, 5, 10 retrieval results. For each of these result images from the training
set, the correct class was determined, and this class was augmented with the similarity score
of the image. The class with the highest final score automatically became the final class selected
for the image. For retrieval we used three different settings of the features, with 4, 8, and 16
grey levels. The runs with 8 and 16 grey levels also used eight directions of the Gabor filters for
indexation. The best results obtained in the competition were from the Aachen group (best run at a
12.6% error rate), which has been working on very similar data for several years now.</p>
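          <p>A minimal sketch of this voting scheme; the ranked list of (training class, similarity) pairs is an illustrative stand-in for the actual GIFT output:</p>
          <preformat>
from collections import defaultdict

def classify(retrieved, n=5):
    """Sum the similarity scores of the first n retrieved training images
    per class and pick the class with the highest total."""
    class_scores = defaultdict(float)
    for label, score in retrieved[:n]:
        class_scores[label] += score
    return max(class_scores, key=class_scores.get)

ranked = [("chest-xray", 0.91), ("chest-xray", 0.88), ("hand-xray", 0.87),
          ("chest-xray", 0.80), ("skull-ct", 0.78)]
print(classify(ranked, n=5))  # -> chest-xray
          </preformat>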
          <p>The best results for our system were obtained when using 5-NN and eight grey levels (error rate
20.6%), and the next best results using 5-NN and 16 grey levels (20.9%). Interestingly, the worst
results were obtained with 5-NN and 4 grey levels (22.1%). Using 10-NN led to slightly worse results
(21.3%), and 1-NN was rather in the middle (4 grey levels: 21.8%; 8 grey levels: 21.1%; 16 grey levels:
21.7%).</p>
          <p>As a result we can say that all results are extremely close together (20.6-22.1%), so the differences
do not seem statistically significant. 5-NN seems to be the best, but this might also be linked to
the fact that some classes have a very small population, and 10-NN would simply retrieve too many
images of other classes to be competitive. 8 grey levels and 8 directions of the Gabor filters seem
to perform best, but the differences are still very small.</p>
          <p>
            In the future it is planned to train the system with the available training data, using the
algorithm described in [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ]. This technique is similar to the market basket analysis [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ]. A proper
strategy for the training needs to be developed, especially to help smaller classes be well classified.
These classes normally cause most of the classification problems.
          </p>
        </sec>
        <sec id="sec-4-2-3">
          <title>The medical retrieval task</title>
          <p>Unfortunately, the textual retrieval results we submitted contained an indexation error, and so the
textual results were almost random. We have not identified the error yet, but hope to have it
found before the final proceedings are printed. Thus, the only textual run that we submitted had
a MAP of only 0.0226, whereas the best textual retrieval system was at 0.2084 (IPAL/I2R).
Due to a limitation of resources, we were not able to submit relevance feedback runs, which is the
discipline where GIFT usually is strongest. The best feedback system was OHSU, with a MAP of
0.2116 for purely textual retrieval.</p>
          <p>The best visual system is I2R with a MAP of 0.1455. Our GIFT retrieval system was made
available to participants and was widely used. Again, the basic GIFT system obtained the best
results among our various combinations in feature space (MAP 0.0941), with only I2R actually having
better results. The second indexation, using 8 grey levels and eight directions of the Gabor filters,
performs slightly worse at 0.0872.</p>
          <p>For mixed textual/visual retrieval, the best results were obtained by IPAL/I2R with a MAP of
0.2821. Our best result in this category uses a 10% textual part and a 90% visual part and obtains
0.0981. These results should be much better when using a properly indexed text base. The
following results were obtained for the other combinations: 20% textual: 0.0934, 25%: 0.0929, 33%:
0.0834, 50%: 0.044. When using eight grey levels and 8 Gabor directions: 10% textual: 0.0891, 20%:
0.084, 33%: 0.075, 50%: 0.0407. The results could lead to the assumption that visual retrieval
is better than textual retrieval in our case, but this only holds true because of our indexation
error. We will try to fix the error and deliver proper results as soon as possible to allow a correct
comparison with the other groups.</p>
          <p>A second combination technique that we applied used the results from textual retrieval as a
basis and then added the visual retrieval results, multiplied by a factor N = 2, 3, 4, to the
first 1000 results of textual retrieval. This strategy proved fruitful in 2004 the other way around, by
taking the visual results first and then augmenting only the first 1000 results. The results for
the main GIFT system were: 3 times visual: 0.0471, 4 times visual: 0.0458, 2 times visual: 0.0358.
For the system with 8 grey levels, the respective results are: 3 times visual: 0.0436, 4 times visual:
0.0431, 2 times visual: 0.0237. A reverse order, taking the visual results first and then augmenting
the textually similar ones, would have led to better results in this case, but this will need to be
verified once correct results are available for textual as well as visual retrieval.</p>
          <p>We cannot really deduce much from our current submission, as several errors prevented
better results.</p>
        </sec>
        <sec id="sec-4-2-4">
          <title>Conclusions</title>
          <p>Although we did not have the resources for an optimised submission, we still learned from the
2005 tasks that the GIFT system delivers a good baseline for image retrieval and that it is widely
usable for a large number of tasks and different images.</p>
          <p>More detailed results show that the ad-hoc task is hard for visual retrieval, even with a more
visually-friendly set of queries, as the image set does not contain enough color information or clear
objects, which is crucial for fully visual information retrieval.</p>
          <p>The automatic annotation or classification task proved that our system delivers good results
even without learning, and shows that information retrieval techniques can also be used well for
document classification. When taking into account the available training data, these results will
surely improve significantly.</p>
          <p>From the medical retrieval task, not much can be deduced for now, as we need to work on our
textual indexation and retrieval to find the error responsible for the mediocre results. Still, we can
say that GIFT is well suited to general visual retrieval and among the best systems for it. It will need
to be analysed which features were used by other systems, especially by the few runs that performed
better.</p>
          <p>For next year we will definitely have to take into account the available training data, and we
also hope to use more complex algorithms, for example to extract objects from the medical
images and limit retrieval to these objects. Another strong point of GIFT is its good relevance
feedback, which can surely improve results significantly as well. Having a similar database for
two years in a row would already help, as such large databases take a long time to index and
also require human resources for optimisation.</p>
        </sec>
        <sec id="sec-4-2-5">
          <title>Acknowledgements</title>
          <p>Part of this research was supported by the Swiss National Science Foundation with grant 632066041.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Rakesh</given-names>
            <surname>Agrawal</surname>
          </string-name>
          and
          <string-name>
            <given-names>Ramakrishnan</given-names>
            <surname>Srikant</surname>
          </string-name>
          .
          <article-title>Fast algorithms for mining association rules</article-title>
          .
          <source>In Proceedings of the 20th VLDB Conference</source>
          , pages
          <volume>487</volume>
          {
          <fpage>499</fpage>
          ,
          <string-name>
            <surname>Santiago</surname>
          </string-name>
          , Chile,
          <source>September</source>
          <volume>12</volume>
          {
          <fpage>15</fpage>
          1994.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Paul</given-names>
            <surname>Clough</surname>
          </string-name>
          , Henning Muller, and
          <string-name>
            <given-names>Mark</given-names>
            <surname>Sanderson</surname>
          </string-name>
          .
          <article-title>Overview of the CLEF cross{language image retrieval track (ImageCLEF) 2004</article-title>
          . In Carol Peters,
          <string-name>
            <given-names>Paul D.</given-names>
            <surname>Clough</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Gareth J. F.</given-names>
            <surname>Jones</surname>
          </string-name>
          , Julio Gonzalo,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kluck</surname>
          </string-name>
          , and B. Magnini, editors,
          <source>Multilingual Information Access for Text</source>
          ,
          <article-title>Speech and Images: Result of the fth CLEF evaluation campaign</article-title>
          , Lecture Notes in Computer Science, Bath, England,
          <year>2005</year>
          . Springer{Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Paul</given-names>
            <surname>Clough</surname>
          </string-name>
          , Mark Sanderson, and
          <article-title>Henning Muller. A proposal for the CLEF cross language image retrieval track (ImageCLEF) 2004</article-title>
          .
          <article-title>In The Challenge of Image and Video Retrieval (CIVR</article-title>
          <year>2004</year>
          ), Dublin, Ireland,
          <year>July 2004</year>
          . Springer LNCS 3115.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>William</given-names>
            <surname>Hersh</surname>
          </string-name>
          , Henning Muller, Paul Gorman, and
          <article-title>Je ery Jensen</article-title>
          .
          <article-title>Task analysis for evaluating image retrieval systems in the ImageCLEF biomedical image retrieval task</article-title>
          .
          <source>In Slice of Life conference on Multimedia in Medical Education (SOL</source>
          <year>2005</year>
          ), Portland,
          <string-name>
            <surname>OR</surname>
          </string-name>
          , USA,
          <year>June 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Marco</given-names>
            <surname>La</surname>
          </string-name>
          <string-name>
            <surname>Cascia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Saratendu</given-names>
            <surname>Sethi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Stan</given-names>
            <surname>Sclaro</surname>
          </string-name>
          .
          <article-title>Combining textual and visual cues for content{based image retrieval on the world wide web</article-title>
          .
          <source>In IEEE Workshop on Content{based Access of Image and Video Libraries (CBAIVL'98)</source>
          , Santa Barbara, CA, USA, June 21
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Thomas</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Lehmann</surname>
          </string-name>
          , Mark O. Guld, Thomas Deselaers, Henning Schubert, Klaus Spitzer, Hermann Ney, and
          <string-name>
            <surname>Berthold</surname>
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Wein</surname>
          </string-name>
          .
          <article-title>Automatic categorization of medical images for content{ based retrieval and data mining</article-title>
          .
          <source>Computerized Medical Imaging and Graphics</source>
          ,
          <volume>29</volume>
          :
          <fpage>143</fpage>
          {
          <fpage>155</fpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Henning</given-names>
            <surname>Mu</surname>
          </string-name>
          <article-title>ller, Joris Heuberger, and Antoine Geissbuhler. Logo and text removal for medical image retrieval</article-title>
          .
          <source>In Springer Informatik aktuell: Proceedings of the Workshop Bildverarbeitung fur die Medizin</source>
          , Heidelberg, Germany,
          <year>March 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Henning</given-names>
            <surname>Mu</surname>
          </string-name>
          ller, Nicolas Michoux, David Bandon,
          <string-name>
            <given-names>and Antoine</given-names>
            <surname>Geissbuhler</surname>
          </string-name>
          .
          <article-title>A review of content{based image retrieval systems in medicine { clinical bene ts and future directions</article-title>
          .
          <source>International Journal of Medical Informatics</source>
          ,
          <volume>73</volume>
          :1{
          <fpage>23</fpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Henning</given-names>
            <surname>Mu</surname>
          </string-name>
          ller, Antoine Rosset,
          <string-name>
            <surname>Jean-Paul Vallee</surname>
            , and
            <given-names>Antoine</given-names>
          </string-name>
          <string-name>
            <surname>Geissbuhler</surname>
          </string-name>
          .
          <article-title>Integrating content{based visual access methods into a medical case database</article-title>
          .
          <source>In Proceedings of the Medical Informatics Europe Conference (MIE</source>
          <year>2003</year>
          ), St. Malo, France, May
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Henning</given-names>
            <surname>Mu</surname>
          </string-name>
          <article-title>ller, David McG</article-title>
          . Squire, and
          <string-name>
            <given-names>Thierry</given-names>
            <surname>Pun</surname>
          </string-name>
          .
          <article-title>Learning from user behavior in image retrieval: Application of the market basket analysis</article-title>
          .
          <source>International Journal of Computer Vision</source>
          ,
          <volume>56</volume>
          (
          <issue>1</issue>
          {2):
          <volume>65</volume>
          {
          <fpage>77</fpage>
          ,
          <year>2004</year>
          .
          <article-title>(Special Issue on Content{Based Image Retrieval)</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Patrick</given-names>
            <surname>Ruch</surname>
          </string-name>
          .
          <article-title>Query translation by text categorization</article-title>
          .
          <source>In Proceedings of the conference on Computational Linguistics (COLING</source>
          <year>2004</year>
          ), Geneva, Switzerland,
          <year>August 2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Yong</surname>
            <given-names>Rui</given-names>
          </string-name>
          , Thomas S. Huang,
          <string-name>
            <given-names>Michael</given-names>
            <surname>Ortega</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Sharad</given-names>
            <surname>Mehrotra</surname>
          </string-name>
          .
          <article-title>Relevance feedback: A power tool for interactive content{based image retrieval</article-title>
          .
          <source>IEEE Transactions on Circuits and Systems for Video Technology</source>
          ,
          <volume>8</volume>
          (
          <issue>5</issue>
          ):
          <volume>644</volume>
          {
          <fpage>655</fpage>
          ,
          <year>September 1998</year>
          .
          <article-title>(Special Issue on Segmentation, Description, and Retrieval of Video Content)</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Arnold W. M. Smeulders</surname>
            , Marcel Worring, Simone Santini, Armarnath Gupta, and
            <given-names>Ramesh</given-names>
          </string-name>
          <string-name>
            <surname>Jain</surname>
          </string-name>
          .
          <article-title>Content{based image retrieval at the end of the early years</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>22</volume>
          No 12:
          <fpage>1349</fpage>
          {
          <fpage>1380</fpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>David</given-names>
            <surname>McG. Squire</surname>
          </string-name>
          , Wolfgang Muller, Henning Muller, and Thierry Pun.
          <article-title>Content{based query of image databases: inspirations from text retrieval</article-title>
          .
          <source>Pattern Recognition Letters (Selected Papers from The 11th Scandinavian Conference on Image Analysis SCIA '99)</source>
          ,
          <volume>21</volume>
          (
          <fpage>13</fpage>
          - 14):
          <volume>1193</volume>
          {
          <fpage>1198</fpage>
          ,
          <year>2000</year>
          .
          <string-name>
            <given-names>B.K.</given-names>
            <surname>Ersboll</surname>
          </string-name>
          , P. Johansen, Eds.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Michael</surname>
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Swain</surname>
          </string-name>
          and
          <string-name>
            <surname>Dana H. Ballard</surname>
          </string-name>
          . Color indexing.
          <source>International Journal of Computer Vision</source>
          ,
          <volume>7</volume>
          (
          <issue>1</issue>
          ):
          <volume>11</volume>
          {
          <fpage>32</fpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>