<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Multimodal Medical Image Retrieval: Improving Precision at ImageCLEF 2009</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Saïd Radhouani</string-name>
          <email>radhouan@ohsu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jayashree Kalpathy-Cramer</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Steven Bedrick</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Brian Bakke</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>William Hersh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Medical Informatics &amp; Clinical Epidemiology, Oregon Health and Science University (OHSU)</institution>
          ,
          <addr-line>Portland, OR</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We present results from Oregon Health &amp; Science University's participation in the medical retrieval task of ImageCLEF 2009. This year, we focused on improving retrieval performance, especially early precision, in the task of solving medical multimodal queries. These queries contain visual data, given as a set of image-examples, and textual data, provided as a set of words belonging to three dimensions: Anatomy, Pathology, and Modality. To solve these queries, we use both textual and visual data in order to better interpret the queries' semantic content. Using the textual data associated with an image, it is relatively easy to extract the anatomy and the pathology, but challenging to extract the modality, since it is not always explicitly described in the text. To overcome this problem, we utilized the visual data. We combined text-based and visual-based search techniques to provide a single ranked list of relevant documents for each query. Our approach outperformed our baseline by 43% in MAP and by 71% in precision at the top 5 documents, owing to the use of domain dimensions and the combination of visual-based and text-based search techniques.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Advances in digital imaging technologies and the increasing prevalence of Picture
Archival and Communication Systems (PACS) have led to a substantial growth
in the number of digital images stored in hospitals and medical systems in recent
years. Medical images can form an essential component of a patient’s health
records, and the ability to retrieve them can be useful in several tasks, including
diagnosis, education, and research.</p>
      <p>
        Image retrieval systems (IRS) do not currently perform as well as their text
counterparts [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Medical and other IRS have historically relied on indexing
annotations or captions associated with the images. The last few decades, however,
have seen advancements in the area of content-based image retrieval (CBIR)
[
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. Although CBIR systems have demonstrated success in fairly constrained
medical domains including pathology, dermatology, chest radiology, and
mammography, they have demonstrated poor performance when applied to databases
with a wide spectrum of imaging modalities, anatomies, and pathologies [
        <xref ref-type="bibr" rid="ref1 ref4 ref5 ref6">1, 4–6</xref>
        ].
      </p>
      <p>
        In this paper, we address the problem of solving medical multimodal queries,
by focusing especially on improving early precision (i.e., precision at top 5
documents). The queries we deal with in ImageCLEF 2009 contain visual data,
given as a set of image-examples, and textual data, provided by a set of words
belonging to the categories Anatomy, Pathology, and Modality. Using only a
visual-based search technique, it is relatively straightforward to identify the
modality of a medical image, but very challenging to extract the anatomy or
the pathology (e.g., a slight fracture of a bone). Conversely, using a text-based
search technique, it is relatively easy to extract the anatomy and the pathology
from text, but the modality is harder to identify: it is not always explicitly
described in the text, since the writer might use general words, such as “this
image ...,” in the medical report. To overcome these problems, retrieval
performance, especially early precision, can be demonstrably improved by merging
the results of textual and visual search techniques [
        <xref ref-type="bibr" rid="ref7">7–9</xref>
        ].
      </p>
      <p>In the rest of this paper, we first present a brief description of our system
(Section 2). Sections 3 and 4 describe our visual-based and text-based search
techniques, respectively. We describe our official runs in Section 5 and the
corresponding results in Section 6. Finally, we conclude and outline some
perspectives (Section 7).</p>
    </sec>
    <sec id="sec-2">
      <title>System Description of Our Adaptive Medical Image Retrieval System</title>
      <p>Starting in 2007, we created and have continued to evolve a multimodal image
retrieval system based on an open-source framework that allows the incorporation
of user search preferences. We designed a flexible database schema that allows
us to easily incorporate new collections while facilitating retrieval using both
text and visual techniques. The 2009 ImageCLEF collection consists of 74,902
medical images and their associated annotations [10]. The collection contains
images and captions from Radiology and RadioGraphics, two Radiological
Society of North America (RSNA) journals.</p>
      <sec id="sec-3-1">
        <title>Database and Web Application</title>
        <p>We used the Ruby programming language (http://www.ruby-lang.org/) with
the open-source Ruby on Rails (http://www.rubyonrails.org/) web application
framework. The PostgreSQL (http://www.postgresql.org/) relational database was
used to store the mappings between each image and its various associated fields.
The title, full caption, and precise caption, as provided in the data distribution,
were indexed. The captions and titles in the collection are currently indexed, and
we continue to add indexable fields for incorporating visual information. The
data distribution included an XML file with each image's ID, its captions, the
title of the journal article in which it had appeared, and that article's PubMed
ID. In addition, a compressed file containing the approximately 74,900 images
was provided.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Query Parser and Search Engine</title>
        <p>Our system presents a variety of search options to the user including Boolean
OR, AND, and “exact match.” There are also options to perform fuzzy searches,
as well as a custom query parser. A critical aspect of our system is the query
parser, written in Ruby. Ferret, a Ruby port of the popular Lucene system,
was used in our system as the underlying search engine. The custom query
parser performs stop-word removal using a specially-constructed list of
stopwords. The custom query parser is highly customizable, and the user has several
configuration options from which to choose. The first such option is modality
limitation. If the user selects this option, the query is parsed to extract the
desired modality, if available. Using the modality fields described in the previous
section, only those images that are of the desired modality are returned. This is
expected to improve the precision, as only images of the desired modality would
be included within the result set. However, there could be a loss in recall if the
process of modality extraction and classification is inaccurate.</p>
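        <p>As an illustration, the modality-limitation option described above can be
sketched as follows. This is a minimal sketch in Ruby, the language of our
parser; the method names and the modality list are illustrative assumptions,
not the actual implementation.</p>

```ruby
# Illustrative sketch of the modality-limitation option: pull a known
# modality keyword out of the query, then keep only images whose indexed
# modality field matches it. The modality list is an assumption.
MODALITIES = %w[x-ray ct mri ultrasound].freeze

# Returns the first known modality mentioned in the query, or nil.
def extract_modality(query)
  MODALITIES.find { |m| query.downcase.include?(m) }
end

# Each image is a Hash with a :modality field, mirroring our database schema.
def limit_by_modality(images, query)
  modality = extract_modality(query)
  return images unless modality # no modality requested: keep everything
  images.select { |img| img[:modality] == modality }
end
```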
        <p>The system is linked to the UMLS Metathesaurus; the user may choose to
perform manual or automatic query expansion using synonyms from the
Metathesarus. In the manual mode, a list of synonyms is presented to the user, which
the user can choose to add to the query. In the automatic mode, all synonyms
of the UMLS preferred term are added to the query. Another configuration
option is the “stem and star” option, in which all the terms in the query are first
stemmed. A wildcard (*) is then appended to the word to allow the search of
words containing the desired root. The last option allows the user to only send
unique terms to the search engine. This can be useful when using the UMLS
option, as many of the synonyms overlap heavily in their preferred terms.</p>
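        <p>The “stem and star” option can likewise be sketched. The crude suffix
stemmer below is only a stand-in for a real stemmer (e.g., Porter's), used to
keep the sketch self-contained; the suffix list is our assumption.</p>

```ruby
# Illustrative sketch of the "stem and star" option: crudely stem each
# query term, append a wildcard for the search engine, and drop duplicates.
SUFFIXES = %w[ing tion ions ed es s].freeze

# Strip the first matching suffix, keeping at least three characters of root.
def crude_stem(word)
  SUFFIXES.each do |suf|
    return word[0...-suf.length] if word.end_with?(suf) && word.length > suf.length + 2
  end
  word
end

# Stem every term, append "*", and send only unique terms to the engine.
def stem_and_star(query)
  query.downcase.split.map { |w| crude_stem(w) + "*" }.uniq.join(" ")
end
```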
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Modality Classification and Annotation</title>
      <p>Medical images often begin life with rich metadata in the form of DICOM headers
describing their imaging modality or anatomy. However, since most teaching or
on-line image collections are made up of compressed standalone JPEG files, it
is very common for medical images to exist without metadata. In previous work
[8], we described a modality classifier that could identify the imaging modality
of medical images using supervised machine learning. We extended that work
to the new dataset used for ImageCLEF 2009. We created additional tables
in the database to store image information computed using a variety of image
processing techniques in MATLAB (http://www.mathworks.com/). These include
color and intensity histograms, as well as texture measures based on gray-level
co-occurrence matrices and discrete cosine transforms. These features can be
used to find images that are visually similar to the query image.</p>
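      <p>As an illustration of the simplest of these features, an intensity
histogram over grayscale pixel values can be computed as follows. This is a
Ruby sketch rather than the MATLAB code actually used, and the bin count is
arbitrary.</p>

```ruby
# Illustrative sketch of an intensity-histogram feature: count grayscale
# pixel values (0-255) into a fixed number of equal-width bins.
def intensity_histogram(pixels, bins: 8)
  width = 256 / bins
  hist = Array.new(bins, 0)
  # Clamp into the last bin so a value of 255 stays in range.
  pixels.each { |p| hist[[p / width, bins - 1].min] += 1 }
  hist
end
```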
      <p>One of the biggest challenges in creating such a modality classifier is creating
a labeled training dataset of sufficient size and quality. Our system, as previously
described [8], relied on an external training set of modality-labeled images for its
supervised learning. In 2009, as in 2008, we did not use any external databases
for training the modality classifier. Instead, we wrote a Ruby text parser that
extracts the modality from the image captions for all images in the collection,
using regular expressions as well as a simple Bayesian classifier.</p>
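      <p>The caption-based labeling step can be sketched as follows. The regular
expressions here are illustrative, not the actual parser's rules; as described
below, only captions matching exactly one modality are used for training.</p>

```ruby
# Illustrative sketch of regex-based modality extraction from a caption.
# The pattern set is an assumption, not the parser's actual rules.
MODALITY_PATTERNS = {
  "ct"         => /\b(ct|computed tomography)\b/i,
  "mri"        => /\b(mri?|magnetic resonance)\b/i,
  "x-ray"      => /\b(x-?ray|radiograph)\b/i,
  "ultrasound" => /\b(ultrasound|sonogra\w+)\b/i,
}.freeze

# Returns the modality only when exactly one pattern matches; ambiguous
# or unlabeled captions are left for the visual classifier instead.
def caption_modality(caption)
  hits = MODALITY_PATTERNS.select { |_, re| caption =~ re }.keys
  hits.length == 1 ? hits.first : nil
end
```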
      <p>Note that the quality and accuracy of these labels are not as good as in
the case of the external training set used in previous experiments. Images for
which a unique modality could be identified from the caption were used to train
the modality classifier. Grayscale images were classified into a set of modalities
including x-ray, CT, MRI, ultrasound, and nuclear medicine; color image classes
included gross pathology, microscopy, and endoscopy. The rest of the dataset
(i.e., images for which zero or more than one modality was parsed) was classified
using this classifier. We created two fields in the database for the modality,
both indexed by our search engine: the first contained the modality extracted
by the text parser, and the second the modality resulting from classification
using visual features.</p>
    </sec>
    <sec id="sec-5">
      <title>Text Processing and Analysis</title>
      <p>Our text processing module is very simple: after applying a stop-word list,
we index documents and queries using the Vector Space Model (VSM). During
retrieval, documents are ranked according to their relevance to the
corresponding query.</p>
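      <p>The VSM step can be sketched as follows. This minimal Ruby sketch uses
raw term frequencies and cosine similarity, omitting IDF weighting and the
stop-word step for brevity; our actual engine is Ferret.</p>

```ruby
# Illustrative VSM sketch: term-frequency vectors and cosine similarity.
def tf_vector(text)
  text.downcase.scan(/[a-z]+/).tally
end

def cosine(a, b)
  dot = a.sum { |term, w| w * b.fetch(term, 0) }
  norm = ->(v) { Math.sqrt(v.values.sum { |w| w * w }) }
  na, nb = norm.call(a), norm.call(b)
  na.zero? || nb.zero? ? 0.0 : dot / (na * nb)
end

# Rank documents (id => text) against a query, best first.
def rank(docs, query)
  q = tf_vector(query)
  docs.sort_by { |_, text| -cosine(tf_vector(text), q) }.map(&:first)
end
```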
      <p>While no specific treatment was applied to documents, we theorized that
it would be useful to use external knowledge to interpret the queries' semantic
content. Each query contains a precise description of a user need, materialized
by a set of words belonging to three semantic categories: modality of the image
(e.g., MRI, x-ray, etc.), anatomy (e.g., leg, head, etc.), and pathology (e.g.,
cancer, fracture, etc.). We call these categories “domain dimensions,” and
define them as follows: “A dimension of a domain is a concept used to express
the themes in this domain” [11]. The idea behind our approach is that, in a
given domain, a theme can be developed with reference to a set of dimensions
of this domain. For instance, a physician wishing to write a report about a
medical image first focuses on a domain (Medicine), next refers to specific
dimensions of this domain (e.g., Anatomy), then chooses words from this
dimension (e.g., femur), and finally writes the report.</p>
      <p>In order to resolve CLEF queries, we proposed using the domain
dimensions to interpret their semantic content. To do so, we first need to define
the dimensions. For this purpose, we use external resources, such as ontologies
or thesauri, to define each dimension by a hierarchy of concepts. Every concept
is denoted by a set of words. Thereafter, to identify dimensions in a query,
we extract the query's words according to the dimension hierarchy they belong
to. Once dimensions are extracted from each query, we use them to search for
relevant documents. In particular, we apply Boolean operators to the query's
dimensions in order to reformulate the initial text of the query and better
represent its semantic content. For instance, if we assume that a relevant
document must contain all the dimensions belonging to the query, we should use
the operator AND between the query's words that represent these dimensions in
order to query the document collection.</p>
      <p>Our querying process mainly consists of two steps. The first step uses the
initial query text to search for documents based on the VSM; the result is a
ranked list of documents called D. The second step selects, from D, those
documents that satisfy the Boolean expression formulated from the domain
dimensions.</p>
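      <p>The two steps can be sketched as follows. The VSM step is stubbed with a
trivial shared-term ranker so the sketch is self-contained; the field names
and the AND semantics over dimension words are illustrative assumptions.</p>

```ruby
# Stub for step 1 of the sketch: rank documents by number of shared terms.
# A real system would use a full VSM engine (Ferret, in our case).
def vsm_search(collection, query_text)
  terms = query_text.downcase.split
  collection.sort_by { |doc| -terms.count { |t| doc[:text].downcase.include?(t) } }
end

# Step 1 produces the ranked list D; step 2 filters D with a Boolean
# expression over the query's dimension words (here: AND over all of them).
def two_step_query(collection, query_text, dimension_words)
  d = vsm_search(collection, query_text)
  d.select do |doc|
    dimension_words.all? { |w| doc[:text].downcase.include?(w) }
  end
end
```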
    </sec>
    <sec id="sec-6">
      <title>Runs Submitted</title>
      <p>We submitted a total of 9 official runs. The search options for the different
runs are provided in Table 1. All our runs are automatic; 2 of them are based
on textual data only and 7 are based on both textual and visual data (mixed).</p>
      <p>The “ohsu_j_no_mod” run is based on the VSM, where each document/query
is represented by a vector of words. The result of this run, considered the
baseline, will be compared to those obtained by the other runs, which are based
on domain dimensions and/or both textual and visual data.</p>
      <sec id="sec-6-1">
        <title>Modality Extraction-Based Runs</title>
        <p>We submitted two mixed runs that used the automatically extracted modality
to filter results. The custom query parser first extracted the desired modality
from the query, if it existed. The “ohsu_j_mod1” run used the custom parser to
remove stop-words from the query and limit the results to the desired modality.
This run was expected to have high precision but potentially lower recall, as it
did not use any term expansion. Also, if the modality classifier was not accurate
or the modality extraction from the textual query was too strict, the results
could be limited. In order to try to increase the recall, we also submitted a run
labeled “ohsu_j_umls” in which term expansion based on the UMLS Metathesaurus
was used.</p>
        <p>With a view to defining the domain dimensions, we utilized the UMLS
Metathesaurus. The domain dimensions were defined using the UMLS semantic
types as follows:</p>
        <p>Anatomy: “Body Part, Organ, or Organ Component,” “Body Space or
Junction,” “Body Location or Region,” and “Cell”
Pathology: “Sign or Symptom,” “Finding,” “Pathologic Function,” “Injury
or Poisoning,” “Disease or Syndrome,” “Neoplastic Process,” “Neoplasms,”
“Anatomical Abnormality,” “Congenital Abnormality,” and “Acquired
Abnormality”
Modality: “Manufactured Object” and “Diagnostic Procedure”</p>
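        <p>The mapping from UMLS semantic types to dimensions can be sketched as
follows. The semantic-type lookup for each term is assumed to come from a
Metathesaurus query and is stubbed here; the abbreviated type table is an
illustrative subset of the full lists above.</p>

```ruby
# Illustrative mapping from a subset of UMLS semantic types to our three
# domain dimensions. The full mapping covers all the types listed above.
DIMENSION_OF = {
  "Body Part, Organ, or Organ Component" => :anatomy,
  "Body Location or Region"              => :anatomy,
  "Injury or Poisoning"                  => :pathology,
  "Disease or Syndrome"                  => :pathology,
  "Neoplastic Process"                   => :pathology,
  "Diagnostic Procedure"                 => :modality,
}.freeze

# semantic_types: term => UMLS semantic type (e.g., from a Metathesaurus
# lookup). Groups the query's terms by dimension; unmapped terms are dropped.
def extract_dimensions(query_terms, semantic_types)
  query_terms.each_with_object({}) do |term, dims|
    dim = DIMENSION_OF[semantic_types[term]]
    (dims[dim] ||= []) << term if dim
  end
end
```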
        <p>After automatically extracting the dimensions from each query, we used them
to perform the following runs.</p>
        <p>OHSU_SR1: We use dimensions to rank the retrieved documents for a given
query. Documents that contain all three dimensions belonging to the query are
considered most relevant and are therefore ranked highest. They are followed by
documents that are missing only the modality; indeed, this dimension is not
always explicitly described in the text, since the writer might use general
words, such as “this image ...,” in the medical report. Finally, in the third
rank, we find documents that contain at least one of the query dimensions.</p>
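        <p>The three-tier ranking described above can be sketched as follows. The
field names are illustrative; each document carries the set of query dimensions
it was found to contain.</p>

```ruby
# Illustrative sketch of the three-tier ranking: all dimensions first,
# then documents missing only the modality, then those with at least one.
def tiered_rank(docs, dims)
  tiers = docs.group_by do |doc|
    hits = dims.count { |d| doc[:dimensions].include?(d) }
    if hits == dims.size
      0 # all dimensions present: most relevant
    elsif (dims - doc[:dimensions]) == [:modality]
      1 # missing only the modality
    elsif hits >= 1
      2 # at least one dimension
    else
      3 # no dimension at all: last
    end
  end
  (0..3).flat_map { |t| tiers.fetch(t, []) }
end
```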
        <p>Since the modality is not always explicitly described in the text, we use
the visual data to extract it from the images (Section 3). We then use it to
re-rank the document list obtained using the textual data: documents whose
modality was extracted from the image are ranked at the top. In all the
following runs, we apply this technique using the result of the “ohsu_j_mod1”
run. For brevity, we call this process the “checking modality technique.”</p>
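        <p>The “checking modality technique” amounts to a stable partition of the
ranked list: documents whose visually extracted modality matches the query's
desired modality move to the top, and relative order is otherwise preserved.
Field names in this sketch are illustrative.</p>

```ruby
# Illustrative sketch of the "checking modality technique": promote
# documents whose visual modality matches the query's desired modality.
# Array#partition is stable, so relative order is preserved in each group.
def check_modality_rerank(ranked_docs, query_modality)
  confirmed, rest = ranked_docs.partition { |d| d[:visual_modality] == query_modality }
  confirmed + rest
end
```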
        <p>OHSU_SR2: Re-rank the document list obtained during run OHSU_SR1 by
applying the “checking modality technique.”</p>
        <p>OHSU_SR3: Re-rank the document list obtained during run ohsu_j_no_mod
by applying the “checking modality technique.”</p>
        <p>OHSU_SR4: We select, for each query, those documents that contain the
Anatomy and the Pathology belonging to this query. The obtained result is
re-ranked by applying the “checking modality technique.”</p>
        <p>OHSU_SR5: From the result of the “OHSU_SR2” run, we randomly select
one image from each article. In CLEF documents, a textual article might contain
more than one image; in this run, if an article is retrieved by the textual
approach, we randomly select one of its images (instead of keeping all images,
as in the other runs).</p>
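        <p>The per-article image selection used in this run can be sketched as
follows; the field names are illustrative, and a seedable random source is
passed in so the behavior can be reproduced.</p>

```ruby
# Illustrative sketch of keeping one randomly chosen image per article
# instead of all of an article's images.
def one_image_per_article(results, rng: Random.new(42))
  results.group_by { |r| r[:article_id] }
         .map { |_, imgs| imgs.sample(random: rng) }
end
```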
        <p>OHSU_SR6: We select from the result of the “OHSU_SR1” run only those
documents whose modality was extracted during the “ohsu_j_mod1” run.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Results and Discussion</title>
      <p>For each query, results are measured by mean average precision over the top
1000 documents (MAP), precision at 10 documents (p@10), and precision at 5
documents (p@5). The baseline run achieves 0.1223 (MAP), 0.416 (p@10), and
0.38 (p@5). The results are presented in Table 2, where each row corresponds
to a run. We also include data from the best runs (based on MAP and p@5) in
the ImageCLEF 2009 campaign.</p>
      <p>As described above, our system has been designed to improve precision,
perhaps at the expense of recall. Since we do not use any advanced natural
language processing and we filter images based on purported modality, we were
expecting a relatively low recall. Consequently, as MAP is highly dependent on
and limited by recall, we believe that it makes more sense to compare our
results to those obtained by the other ImageCLEF 2009 participants at an early
precision level (p@5 or p@10). Our “ohsu_j_umls” run achieved a high early
precision (p@5 = 0.712). This is the second best result obtained in the
ImageCLEF 2009 campaign, the first (0.744) being obtained by an interactive
run; it was the highest p@5 among automatic runs in ImageCLEF 2009.</p>
      <p>Independently of the other ImageCLEF 2009 participants, most of our runs
outperform our baseline, with improvements of up to 43% in MAP (OHSU_SR1)
and 71% in p@5 (ohsu_j_umls). From the result of the “OHSU_SR1” run, we
conclude that the use of domain dimensions is of great interest in solving
medical multimodal queries. Indeed, by using domain dimensions, we highlight
the “relevant words” that describe the queries' semantic content. Using these
words, the system can retrieve only documents that contain the anatomy, the
modality, and the pathology described in the query text.</p>
      <p>
We notice that at p@5 and p@10, all our mixed runs outperform our baseline.
This is not surprising, because the first-ranked documents are those that have
been retrieved by both the text-based and the visual-based search techniques.
This supports our previous conclusion that retrieval performance can be
demonstrably improved by merging the results of textual and visual search
techniques [
        <xref ref-type="bibr" rid="ref7">7–9</xref>
        ].
      </p>
      <p>Compared to the baseline, our results decreased in terms of MAP in two
runs. The first is the “OHSU_SR4” run, where the modality dimension was
ignored during the querying process. This decrease might be explained by the
fact that the modality is described in some documents, and its use is thought
to be beneficial; this is notably apparent in the “OHSU_SR1” run, where we
used all three dimensions and obtained the highest performance. The second is
the “OHSU_SR5” run, where only one image was randomly selected from each
article. An increase in performance would have been surprising, since this
technique is not accurate, and there is a high risk that the selected image is
irrelevant. It is better to keep all images of each article: if one of them is
relevant to the corresponding query, it will be retrieved.</p>
    </sec>
    <sec id="sec-8">
      <title>Conclusions and Future Work</title>
      <p>In order to improve early precision in the task of solving medical
multimodal queries, we combined a text-based search technique with a
visual-based one. The first technique consists in using domain dimensions to
highlight relevant words that describe the queries' semantic content. While
anatomy and pathology are relatively easy to identify in textual documents,
identifying the modality dimension is quite challenging. To overcome this
problem, we used a visual-based search technique that automatically extracts
the modality from images. The obtained results in terms of p@5 and p@10 are
very encouraging and outperform our baseline.</p>
      <p>Among the ImageCLEF 2009 participants, we obtained the second best
overall and the best automatic result in terms of p@5. However, in terms of
MAP, even though our results outperform our baseline, they are significantly
below the best performance obtained in ImageCLEF 2009. This was expected,
since we focused only on improving early precision and did not use any
advanced natural language processing to improve recall. Our future work will
focus on this issue. We believe that our current text-based search technique
has room for improvement: we plan to apply further textual processing, such
as term expansion and pseudo-relevance feedback, in order to improve our
recall, and hope to approach the best ImageCLEF 2009 performance.</p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgements</title>
      <p>We acknowledge the support of NLM Training Grant 2T15LM007088, NSF
Grant ITR-0325160, and Swiss National Science Foundation grant
PBGE22121204.</p>
      <p>8. Kalpathy-Cramer, J., Hersh, W.: Automatic image modality based
classification and annotation to improve medical image retrieval. Studies in
Health Technology and Informatics 129(Pt 2) (2007) 1334–1338. PMID: 17911931.
9. Radhouani, S., Lim, J.H., Chevallet, J.P., Falquet, G.: Combining textual
and visual ontologies to solve medical multimodal queries. In: IEEE
International Conference on Multimedia and Expo (2006) 1853–1856.
10. Müller, H., Kalpathy-Cramer, J., Eggel, I., Bedrick, S., Radhouani, S.,
Bakke, B., Kahn Jr., C.E., Hersh, W.: Overview of the medical retrieval task
at ImageCLEF 2009. In: Working Notes of the CLEF 2009 Workshop, Corfu,
Greece (2009).
11. Radhouani, S.: Un modèle de recherche d'information orienté précision
fondé sur les dimensions de domaine. PhD thesis, University of Geneva,
Switzerland, and University of Grenoble, France (2008).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Hersh</surname>
            ,
            <given-names>W.R.</given-names>
          </string-name>
          , Müller, H.,
          <string-name>
            <surname>Jensen</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gorman</surname>
            ,
            <given-names>P.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruch</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Advancing biomedical image retrieval: Development and analysis of a test collection</article-title>
          .
          <source>J Am Med Inform Assoc</source>
          (June
          <year>2006</year>
          ) M2082
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Smeulders</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Worring</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Santini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jain</surname>
          </string-name>
          , R.:
          <article-title>Content-based image retrieval at the end of the early years</article-title>
          .
          <source>Pattern Analysis and Machine Intelligence</source>
          , IEEE Transactions on
          <volume>22</volume>
          (
          <issue>12</issue>
          ) (
          <year>2000</year>
          )
          <fpage>1349</fpage>
          -
          <lpage>1380</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Tagare</surname>
            ,
            <given-names>H.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jaffe</surname>
            ,
            <given-names>C.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Duncan</surname>
          </string-name>
          , J.:
          <article-title>Medical image databases: A content-based retrieval approach</article-title>
          .
          <source>J Am Med Inform Assoc</source>
          <volume>4</volume>
          (
          <issue>3</issue>
          ) (May
          <year>1997</year>
          )
          <fpage>184</fpage>
          -
          <lpage>198</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Aisen</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Broderick</surname>
            ,
            <given-names>L.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Winer-Muram</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brodley</surname>
            ,
            <given-names>C.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kak</surname>
            ,
            <given-names>A.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pavlopoulou</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shyu</surname>
            ,
            <given-names>C.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marchiori</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Automated storage and retrieval of thin-section ct images to assist diagnosis: System description and preliminary assessment</article-title>
          .
          <source>Radiology</source>
          <volume>228</volume>
          (
          <issue>1</issue>
          )
          <issue>(</issue>
          <year>July 2003</year>
          )
          <fpage>265</fpage>
          -
          <lpage>270</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Schmid-Saugeona</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guillodb</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thirana</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          :
          <article-title>Towards a computer-aided diagnosis system for pigmented skin lesions</article-title>
          .
          <source>Computerized Medical Imaging and Graphics: The Official Journal of the Computerized Medical Imaging Society</source>
          <volume>27</volume>
          (
          <issue>1</issue>
          ) (
          <year>2003</year>
          )
          <fpage>65</fpage>
          -
          <lpage>78</lpage>
          PMID:
          <fpage>12573891</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. Müller, H.,
          <string-name>
            <surname>Michoux</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bandon</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geissbuhler</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A review of content-based image retrieval systems in medical applications-clinical benefits and future directions</article-title>
          .
          <source>International Journal of Medical Informatics</source>
          <volume>73</volume>
          (
          <issue>1</issue>
          ) (
          <year>February 2004</year>
          )
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Hersh</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalpathy-Cramer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jensen</surname>
          </string-name>
          , J.:
          <article-title>Medical image retrieval and automated annotation: Ohsu at imageclef 2006</article-title>
          . In Peters,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Clough</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Gey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.C.</given-names>
            ,
            <surname>Karlgren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Magnini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            ,
            <surname>Oard</surname>
          </string-name>
          , D.W., de Rijke,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Stempfhuber</surname>
          </string-name>
          , M., eds.
          <source>: CLEF</source>
          . Volume
          <volume>4730</volume>
          of Lecture Notes in Computer Science., Springer (
          <year>2006</year>
          )
          <fpage>660</fpage>
          -
          <lpage>669</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>