<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>MIRACLE's Combination of Visual and Textual Queries for Medical Images Retrieval</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Julio Villena-Román</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>José Carlos González-Cristóbal</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>José Miguel Goñi-Menoyo</string-name>
          <email>josemiguel.goni@upm.es</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>José Luís Martínez-Fernandez</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juan José Fernández</string-name>
        </contrib>
        <aff>Universidad Carlos III de Madrid</aff>
        <aff>Universidad Politécnica de Madrid</aff>
        <aff>DAEDALUS - Data, Decisions and Language <email>jmartinez@daedalus.es</email> <email>jvillena@daedalus.es</email></aff>
      </contrib-group>
      <abstract>
        <p>This paper presents the MIRACLE team's participation in the ImageCLEFmed task of ImageCLEF 2005. The task requires image retrieval techniques and is therefore mainly aimed at image analysis research groups. Although our areas of expertise do not include image analysis research, we decided to make the effort to participate in this task to promote and encourage multidisciplinary participation in all aspects of information retrieval, whether text-based or content-based. We resorted to a publicly available image retrieval system (GIFT [1]) where needed.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The MIRACLE team is made up of three university research groups located in Madrid (UPM, UC3M and UAM),
along with DAEDALUS, a company founded in 1998 as a spin-off of two of these groups. DAEDALUS is a
leading company in linguistic technologies in Spain and is the coordinator of the MIRACLE team. This is our
third participation in CLEF, after those of 2003 and 2004 [
        <xref ref-type="bibr" rid="ref15">14</xref>
        ], [10], [9], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Besides the bilingual, monolingual
and cross-lingual tasks, the team has participated in the ImageCLEF, Q&amp;A, WebCLEF and GeoCLEF tracks.
This paper describes our participation in the ImageCLEFmed task of ImageCLEF 2005. The task requires
image retrieval techniques and is therefore mainly aimed at image analysis research groups.
Although our areas of expertise do not include image analysis research, we decided to make the effort to
participate in this task to promote and encourage multidisciplinary participation in all aspects of information
retrieval, whether text-based or content-based. We resorted to a publicly available image retrieval system (GIFT
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]) where needed.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Task goals</title>
      <p>
        Image and multimedia retrieval is interesting for cross-language information retrieval because media
such as images are inherently almost language-independent. Many collections on the Internet
contain images alongside multilingual texts. However, image retrieval is an often-neglected topic in the
information retrieval domain. In particular, hospitals produce an enormous amount of visual data, but tools to
manage these images and videos are scarce and currently exist only as research prototypes [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
The main goal of the ImageCLEFmed task is to improve the retrieval of medical images from heterogeneous and
multilingual document collections containing images as well as text. The task is somewhat similar to the classic
TREC ad hoc retrieval task, with a scenario in which the system knows the set of documents to be searched but
cannot anticipate the particular topic that will be investigated (i.e., topics are not known to the system in
advance).
      </p>
      <p>
        ImageCLEFmed 2005 extends the 2004 experiments with a larger database and more complex queries. The
database consists of images [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] from the Casimage (Radiology and pathology), MIR (Mallinckrodt Institute of
Radiology, nuclear medicine), PEIR (Pathology Education Instructional Resource, Pathology and radiology) and
PathoPIC (Pathology) datasets, with about 50,000 images in all. The collection also contains about 50,000
annotations in XML format. While the majority are written in English (over 40,000), there is a significant
number in French (over 1,800) and German (over 7,800), and a few cases have no annotation at all. The quality
of the texts varies between collections and even within the same collection.
      </p>
      <p>Query tasks have been formulated with example images and a short textual description explaining the research
goal. The task organizers provide a list of topic statements in English, French and German, and a collection of
images for each topic. Normally one or two example images for the desired result are supplied. One query also
contains a negative example as a test. The goal of ImageCLEFmed is to retrieve as many relevant images as
possible from the given visual and multilingual topics.</p>
      <p>
        The task organizers have also made available results from a state-of-the-art image retrieval system (medGIFT)
and a state-of-the-art text engine (Lucene [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]).
      </p>
      <p>The next section describes the different experiments that were carried out.</p>
    </sec>
    <sec id="sec-3">
      <title>Description of experiments</title>
      <p>We focused our experiments on fully automatic retrieval, avoiding any manual feedback, and submitted both
runs using only visual features (content-based retrieval) and runs using visual features together with text
(a combination of content-based and text-based retrieval).</p>
      <p>
        To isolate ourselves from the content-based part of the retrieval process, we resorted to GIFT (GNU Image Finding Tool)
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], a publicly available content-based image retrieval system developed under the GNU license. It
supports query by example on images, using an image as the starting point for the search process. GIFT
relies entirely on the image contents and thus does not require the collection to be annotated. It also provides a
mechanism for improving query results through relevance feedback.
      </p>
      <p>Our approach is based on the multidisciplinary combination of GIFT content-based searches with text-based
retrieval techniques. Our system consists of three parts: the content-based retrieval component (mainly GIFT),
the text-based search engine and the merging component, which combines the results from the others to provide
the final results.</p>
      <p>We finally submitted 13 different runs for evaluation by the task coordinators; they are explained in the
following sections.</p>
      <sec id="sec-3-1">
        <title>Content-Based Retrieval</title>
        <sec id="sec-3-1-1">
          <title>Without feedback</title>
          <p>This experiment consists of content-based-only retrieval using GIFT. Initially, the complete image database
was indexed in a single collection using GIFT, down-scaling each image to 32x32 pixels. For each
ImageCLEFmed query, a visual query is made up of all the images contained in the ImageCLEFmed query.
This visual query is then fed into the system to obtain the list of most relevant images (i.e., the images
most similar to those included in the visual query), along with their corresponding relevance values.
Although different search algorithms can be integrated as plug-ins in GIFT, only the provided separate
normalisation algorithm was used in our experiments.</p>
          <p>There is only one submission, with “mirabase.qtop” as its run identifier.</p>
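          <p>To make the multi-image query step concrete, the following sketch fuses per-example similarity scores by normalising each example's score list separately (min-max) and then averaging, which is the general idea behind normalising several result lists before combining them. This is an illustration only, not GIFT's actual separate normalisation algorithm; all image names and scores are hypothetical.</p>
          <preformat>
```python
# Illustrative score fusion for a visual query made of several example
# images. Not GIFT's implementation; names and scores are hypothetical.

def normalise(scores):
    """Rescale one example's scores to the [0, 1] range (min-max)."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {img: (s - lo) / span for img, s in scores.items()}

def combine(per_example_scores):
    """Average the separately normalised score lists of all query images."""
    fused = {}
    for scores in per_example_scores:
        for img, s in normalise(scores).items():
            fused[img] = fused.get(img, 0.0) + s
    n = len(per_example_scores)
    return sorted(((s / n, img) for img, s in fused.items()), reverse=True)

# Two example images, each with similarity scores against the collection:
ranking = combine([
    {"img1": 0.9, "img2": 0.2, "img3": 0.5},
    {"img1": 0.4, "img2": 0.8, "img3": 0.1},
])
```
          </preformat>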
        </sec>
        <sec id="sec-3-1-2">
          <title>With feedback</title>
          <p>These experiments are similar to the preceding one, again content-based-only retrieval using GIFT, but
incorporating relevance feedback. Each visual query is fed into the system to obtain the list of images
most similar to it. The top N results are then added to the original visual query to build
a new visual query, which is again fed into the system to obtain the final list of results. In addition, GIFT
allows building a weighted visual query in which a relevance value may be associated with each included image.
There are 3 submissions:</p>
          <list list-type="bullet">
            <list-item>
              <p>mirarf5.qtop: takes the 5 most relevant images for feedback, each with a relevance value of 1 in the
visual query. The relevance of the original images remains 1.</p>
            </list-item>
            <list-item>
              <p>mirarf5.1.qtop: the same as mirarf5.qtop but using a relevance value of 0.5 for the feedback images. The
relevance of the original images remains 1.</p>
            </list-item>
            <list-item>
              <p>mirarf5.2.qtop: the same as mirarf5.qtop but using a relevance value of 0.5 for the original images.</p>
            </list-item>
          </list>
          <p>Finally, the different content-based runs are shown in Table 1.</p>
        </sec>
        <sec id="sec-3-1-3">
          <title>Combination with text-based retrieval</title>
          <p>
            First, all the case annotations are indexed using a text-based retrieval engine (explained below). Natural language
processing techniques are applied before indexing. An ad-hoc language-specific parser (for English, German and
French) is used to identify different classes of alphanumerical tokens such as dates, proper nouns and
acronyms, as well as to recognise common compound words. Text is tokenised, stemmed [
            <xref ref-type="bibr" rid="ref12">11</xref>
            ][
            <xref ref-type="bibr" rid="ref13">12</xref>
            ] and stop-word
filtered (for the three languages). Only one index is created, combining keywords in the three
languages.
          </p>
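          <p>The indexing pipeline can be sketched as follows. The stop list and the suffix-stripping stemmer here are toy stand-ins for the language-specific Snowball stemmers and stop lists actually used; the token pattern is likewise illustrative.</p>
          <preformat>
```python
# Sketch of the annotation indexing pipeline: tokenise, drop stop words,
# stem, and pool the keywords of all languages into one index. The stop
# list and stemmer are toy stand-ins, not the Snowball resources used.
import re

STOP = {"the", "of", "a", "der", "die", "le", "la"}   # toy stop list

def stem(token):
    """Crude suffix stripper standing in for a Snowball stemmer."""
    for suffix in ("ies", "es", "s"):
        if token.endswith(suffix) and token != suffix:
            return token[: len(token) - len(suffix)]
    return token

def index_terms(text):
    tokens = re.findall(r"[a-zäöüéè]+", text.lower())
    return [stem(t) for t in tokens if t not in STOP]

terms = index_terms("Radiographs of the fractured bones")
```
          </preformat>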
          <p>
            Two different text-based retrieval engines were used. One was Lucene [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], with the results provided by the task
organizers. The other was KSite [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], fully developed by DAEDALUS, which offers either
a probabilistic (BM25) model or a vector space model as its indexing strategy. Only the probabilistic model
was used in our experiments.
          </p>
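          <p>For reference, a minimal sketch of the standard Okapi BM25 scoring behind a probabilistic model of this kind. The parameter values k1 and b are common defaults, not values reported for KSite, and the documents are hypothetical term lists.</p>
          <preformat>
```python
# Minimal Okapi BM25 scorer over toy documents (lists of terms).
# Illustrative of the probabilistic model in general, not KSite's code.
import math

def bm25_score(query_terms, doc, docs, k1=1.2, b=0.75):
    """Score one document against a list of query terms."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in docs if term in d)          # document frequency
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1.0)
        tf = doc.count(term)                            # term frequency
        denom = tf + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * tf * (k1 + 1) / denom
    return score

docs = [["fracture", "femur"], ["fracture", "skull"], ["mri", "brain"]]
scores = [bm25_score(["fracture", "femur"], d, docs) for d in docs]
```
          </preformat>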
          <p>The combination strategy consists of reordering the results of the content-based retrieval using a text-based
retrieval. For each ImageCLEFmed query, a multilingual textual query is built from the English, German and
French queries (first processing each one with the language-dependent parser and then concatenating the three
lists) and executed in the search engine to obtain the list of the top-1000 cases most relevant to the textual
query.</p>
          <p>The list of relevant images from the content-based retrieval is then reordered, moving to the beginning of the list
those images that belong to a case in the list of top-1000 cases. The rest of the images remain at the end
of the list.</p>
          <p>There are 10 different submissions:</p>
          <list list-type="bullet">
            <list-item>
              <p>mirabasefil.qtop, mirarf5fil.qtop, mirarf5.1fil.qtop, mirarf5.2fil.qtop: the combination, as previously
described, of the content-based-only runs with the text-based retrieval results obtained with KSite.</p>
            </list-item>
            <list-item>
              <p>mirabasefil2.qtop, mirarf5fil2.qtop, mirarf5.1fil2.qtop, mirarf5.2fil2.qtop: the same experiment, but
using Lucene.</p>
            </list-item>
            <list-item>
              <p>Other runs: two further experiments tested whether results differ when using our own content-based
GIFT index or the medGIFT results provided by the task organizers. medGIFT was used as the starting
point and the same combination method as described before was applied:</p>
              <list list-type="bullet">
                <list-item>
                  <p>mirabase2fil.qtop: medGIFT results filtered with the text-based KSite results.</p>
                </list-item>
                <list-item>
                  <p>mirabase2fil2.qtop: medGIFT results filtered with the Lucene results.</p>
                </list-item>
              </list>
            </list-item>
          </list>
          <p>Finally, the different mixed retrieval runs are shown in Table 2.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Evaluation</title>
      <p>Relevance assessments were performed by experienced medical students and medical doctors at OHSU and
the University Hospitals of Geneva. Submissions from all groups are used to create image pools, which are
judged for relevance by assessors using a ternary classification scheme: (1) relevant, (2) partially
relevant and (3) not relevant. The aim of the ternary scheme is to help assessors make their relevance
judgments more accurately (e.g., an image is definitely relevant in some way, but perhaps the query object is not
directly in the foreground; it is then considered partially relevant).</p>
      <p>The pools are assessed and the end result is a set of relevance assessments called qrels, which are then used to
evaluate system performance and compare submissions from different groups.</p>
      <sec id="sec-4-1">
        <title>Content-Based runs</title>
        <p>The results of the content-based runs are shown in Table 3, ordered by their mean average precision values.
The best result was obtained with the base experiment, which means that relevance feedback failed to improve
the results (though it did not worsen them either). This may be due to an incorrect choice of parameters, but it
has to be studied further.</p>
        <p>Apart from MIRACLE, 8 other groups participated in the content-based-only runs of this year's evaluation.
Table 4 compares each group’s best submission. Only one group ranks above us, although their average precision
is much better than ours. Our pragmatic approach, using a “standard” publicly available content-based retrieval
engine such as GIFT, has proved better than other presumably more complex techniques. We still have to test
whether another selection of indexing parameters (different from down-scaling images to 32x32 pixels and the
separate normalisation algorithm) might provide better results.</p>
        <p>The results of the mixed content-based and text-based retrieval runs are shown in Table 5, ordered by their
mean average precision values. In this case, using relevance feedback provides slightly better precision values.
Considering the best runs, the optimum choice seems to be to assign a relevance of 1.0 to the top 5 results and to
reduce the relevance of the images in the original query.</p>
        <p>It is interesting to observe that the worst combination is to take both sets of results provided by the task
organizers (the content-based medGIFT results and the text-based Lucene results), with a performance decrease
of 15%. Comparing the content-based runs with the mixed runs, Table 6 shows that the combination of both
types of retrieval offers better performance: even the worst mixed run is better than the best content-based-only
run. This shows that text-based retrieval can be used to improve content-based-only image retrieval, with much
superior performance. Apart from MIRACLE, 6 other groups participated in the content-based and text-based
runs of this year's evaluation. Table 7 shows each group’s best submission. Here, the submissions from other
groups clearly surpassed our results. Even so, these results are very satisfying for us, considering that we are not
a group with expertise in image analysis research.</p>
        <p>It is also interesting to note that most groups managed to improve on their content-based-only runs with
mixed approaches. This is especially visible for the NCTU group, with an improvement from 0.06 to 0.23
(+355%) in average precision.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions and Future Work</title>
      <p>Our main interest is not in experiments where only image content is used in the retrieval process. Instead, our
challenge was to test whether text-based image retrieval could improve the analysis of the image content, or vice
versa. The results show that this hypothesis was right. Our combination of a “black-box” search using a publicly
available content-based retrieval engine with a text-based search turned out to provide results comparable to
other presumably “more complex” techniques. This simplicity may be a good starting point for the
implementation of a real system.</p>
      <p>We think there may still be room for improvement through a more careful study of the parameters for the
relevance feedback and the combination strategy.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>This work has been partially supported by the Spanish R+D National Plan, by means of the project RIMMEL
(Multilingual and Multimedia Information Retrieval, and its Evaluation), TIN2004-07588-C03-01 and also by
the European Union with the funding of NEDINE project in the e-Content programme.</p>
      <p>Special mention should be made of our colleagues in the MIRACLE team (in alphabetical order): Ana María
García-Serrano, Ana González-Ledesma, José Mª Guirao-Miras, Sara Lana-Serrano, Paloma
Martínez-Fernández, Ángel Martínez-González, Antonio Moreno-Sandoval and César de Pablo-Sánchez.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <article-title>GIFT: The GNU Image-Finding Tool</article-title>
          . On line http://www.gnu.org/software/gift/ [Visited 18/07/2005]
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Goñi-Menoyo</surname>
          </string-name>
          , José M.; González, José C.;
          <string-name>
            <surname>Martínez-Fernández</surname>
          </string-name>
          , José L.; and
          <string-name>
            <surname>Villena</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>MIRACLE's Hybrid Approach to Bilingual and Monolingual Information Retrieval</article-title>
          .
          <source>CLEF 2004 proceedings</source>
          (Peters,
          <string-name>
            <surname>C.</surname>
          </string-name>
          et al.,
          <source>Eds.). Lecture Notes in Computer Science</source>
          , vol.
          <volume>3491</volume>
          , pp.
          <fpage>188</fpage>
          -
          <lpage>199</lpage>
          . Springer,
          <year>2005</year>
          (to appear).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Goñi-Menoyo</surname>
          </string-name>
          , José M.;
          <string-name>
            <surname>González</surname>
          </string-name>
          , José C.;
          <string-name>
            <surname>Martínez-Fernández</surname>
          </string-name>
          , José L.;
          <string-name>
            <surname>Villena-Román</surname>
          </string-name>
          , Julio; García-Serrano, Ana; Martínez-Fernández, Paloma; de Pablo-Sánchez, César; and
          <string-name>
            <surname>Alonso-Sánchez</surname>
          </string-name>
          , Javier:
          <article-title>MIRACLE's hybrid approach to bilingual and monolingual Information Retrieval</article-title>
          .
          <source>Working Notes for the CLEF 2004 Workshop (Carol Peters and Francesca Borri, Eds.)</source>
          , pp.
          <fpage>141</fpage>
          -
          <lpage>150</lpage>
          . Bath, United Kingdom,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Goodrum</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          :
          <article-title>Image Information Retrieval: An Overview of Current Research</article-title>
          .
          <source>Informing Science</source>
          , Vol
          <volume>3</volume>
          (
          <issue>2</issue>
          ):
          <fpage>63</fpage>
          -
          <lpage>66</lpage>
          (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <article-title>IRMA project: Image Retrieval in Medical Applications</article-title>
          . On line http://www.irma-project.org/ [Visited 18/07/2005]
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] KSite [Agente Corporativo]. On line http://www.daedalus.es/ProdKSiteAC-E.php [Visited 13/07/2005]
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] Lucene. On line http://lucene.apache.org/ [Visited 13/07/2005]
          [8]
          <string-name>
            <surname>Martínez-Fernández</surname>
          </string-name>
          , José L.;
          <string-name>
            <surname>García-Serrano</surname>
          </string-name>
          , Ana; Villena, J.; and
          <string-name>
            <surname>Méndez-Sáez</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>MIRACLE approach to ImageCLEF 2004: merging textual and content-based Image Retrieval</article-title>
          .
          <source>CLEF 2004 proceedings (Peters, C. et al., Eds.). Lecture Notes in Computer Science</source>
          , vol.
          <volume>3491</volume>
          . Springer,
          <year>2005</year>
          (to appear).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Martínez</surname>
          </string-name>
          , José L.;
          <string-name>
            <surname>Villena</surname>
          </string-name>
          , Julio; Fombella, Jorge; García-Serrano, Ana; Martínez, Paloma; Goñi, José M.; and González, José C.:
          <article-title>MIRACLE Approaches to Multilingual Information Retrieval: A Baseline for Future Research</article-title>
          .
          <source>Comparative Evaluation of Multilingual Information Access Systems (Peters, C.; Gonzalo, J.; Braschler, M.; and Kluck, M., Eds.). Lecture Notes in Computer Science</source>
          , vol.
          <volume>3237</volume>
          , pp.
          <fpage>210</fpage>
          -
          <lpage>219</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          Springer,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Martínez</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Villena-Román</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Fombella</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>García-Serrano</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ruiz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Martínez</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Goñi</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>González</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          (Carol Peters, Ed.):
          <article-title>Evaluation of MIRACLE approach results for CLEF 2003</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <source>Working Notes for the CLEF 2003 Workshop</source>
          ,
          <fpage>21</fpage>
          -
          <lpage>22</lpage>
          August, Trondheim, Norway.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Porter</surname>
            ,
            <given-names>Martin.</given-names>
          </string-name>
          <article-title>Snowball stemmers and resources page</article-title>
          . On line http://www.snowball.
          <source>tartarus.org. [Visited</source>
          <volume>13</volume>
          /07/2005]
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [12] University of Neuchatel.
          <article-title>Page of resources for CLEF (stopwords, transliteration, stemmers, …)</article-title>
          . On line http://www.unine.ch/info/clef/ [Visited 13/07/2005]
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Villena</surname>
          </string-name>
          , Julio; Martínez, José L.;
          <string-name>
            <surname>Fombella</surname>
          </string-name>
          , Jorge; García-Serrano, Ana; Ruiz, Alberto; Martínez, Paloma; Goñi, José M.; and González, José C.
          <article-title>Image Retrieval: The MIRACLE Approach</article-title>
          .
          <source>Comparative Evaluation of Multilingual Information Access Systems (Peters, C.; Gonzalo, J.; Braschler, M.; and Kluck, M., Eds.)</source>
          .
          <source>Lecture Notes in Computer Science</source>
          , vol.
          <volume>3237</volume>
          , pp.
          <fpage>621</fpage>
          -
          <lpage>630</lpage>
          . Springer,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Villena-Román</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Martínez</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Fombella</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>García-Serrano</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ruiz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Martínez</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Goñi</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>González</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          (Carol Peters, Ed.);
          <article-title>MIRACLE results for ImageCLEF 2003</article-title>
          .
          <source>Working Notes for the CLEF 2003 Workshop</source>
          ,
          <fpage>21</fpage>
          -
          <lpage>22</lpage>
          August, Trondheim, Norway.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>