<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards Content-Based Image Retrieval: From Computer Generated Features to Semantic Descriptions of Liver CT Scans</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Assaf B. Spanier</string-name>
          <email>assaf.spanier@mail.huji.ac.il</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Leo Joskowicz</string-name>
          <email>leo.josko@mail.huji.ac.il</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>The Rachel and Selim Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem</institution>
          ,
          <country country="IL">Israel</country>
        </aff>
      </contrib-group>
      <fpage>438</fpage>
      <lpage>447</lpage>
      <abstract>
        <p>The rapid increase of CT scans and the limited number of radiologists present a unique opportunity for computer-based radiological Content-Based Image Retrieval (CBIR) systems. However, the current structure of the clinical diagnosis reports presents substantial variability, which significantly hampers the creation of effective CBIR systems. Researchers are currently looking for ways of standardizing the reports structure, e.g., by introducing uniform User Express (UsE) annotations and by automating the extraction of UsE annotations with Computer Generated (CoG) features. This paper presents an experimental evaluation of the derivation of UsE annotations from CoG features with a classifier that estimates each UsE annotation from the input CoG features. We used the datasets of the ImageCLEF-Liver CT Annotation challenge: 50 training and 10 testing CT scans with liver and liver lesion annotations. Our experimental results on the ImageCLEF-Liver CT Annotation challenge exhibit a completeness level of 95% and accuracy of 91% for 10 unseen cases. This is the second best result obtained in the Liver CT Annotation challenge and only 1% away from the first place.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>About 68 million CT scans are performed in the USA each year. Clinicians
are struggling under the burden of diagnosis and follow-up of such an immense
number of scans. This phenomenon has given rise to a plethora of methods to improve
and assist clinicians with the diagnosis process.</p>
      <p>
        Content based image retrieval (CBIR) is a growing and popular research
topic [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The goal of CBIR is to assist physicians with the diagnosis of tumors
or other pathologies by finding similar cases to the case at hand. Therefore,
CBIR requires efficient search capabilities in a huge database of medical images.
The matching criteria are based on image properties and features extracted from
the image and pathology [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and on searching in the clinical reports database.
      </p>
      <p>Besides the known problem of diagnosis and follow-up of this huge number
of scans, there is substantial variability in the structure of the clinical reports
provided by the clinicians for each case. This variability hampers the ability to
establish an efficient and consistent CBIR system, as a uniform report structure
is a major requirement for such an application.</p>
      <p>
        The standardization of clinical reports for liver and liver lesions has
recently been proposed by Kokciyan et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The ONLIRA ontology constitutes
a standard that is used to generate multiple-choice User Express (UsE)
annotations [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] consisting of features that clinically characterize the liver and the liver
lesion. Note that the UsE annotations are provided by the radiologist, as they
cannot be extracted automatically from the image itself. However, the image
descriptors, called Computer Generated (CoG) features, can be automatically
derived from the image with image processing algorithms [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        The goal of this work is therefore to use the CoG features to automatically
generate the UsE annotations. A major part of this work deals with designing
and building a machine learning algorithm that links CoG features to UsE
annotations. The training datasets were provided by the ImageCLEF-Liver CT challenge
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] which is part of the ImageCLEF-2014 evaluation campaign [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Additional
contributions of this work include extending the available CoG features and
optimally selecting those most relevant to the liver task. Experimental
results on the ImageCLEF-Liver CT Annotation challenge show estimation of
UsE annotations at a completeness level of 95% and an accuracy of 91% for 10
unseen cases. This is the second best result obtained in the Liver CT Annotation
challenge [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] which is part of the ImageCLEF-2014 evaluation campaign [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and
only 1% away from the first place.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2 Method</title>
      <p>
        A major part of this work deals with developing a machine learning
algorithm that best links CoG features to UsE annotations based on training
datasets. Developing a machine learning algorithm involves four main steps [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]:
(1) Data collection; (2) Feature extraction; (3) Model selection / fitting
classifier parameters; and (4) Training the selected model. A diagram illustrating the
phases of the process is shown in Fig. 1. We describe each step in detail below.
      </p>
    </sec>
    <sec id="sec-2-1">
      <title>2.1 Data Collection</title>
      <p>The input of our algorithm is a set of 50 datasets provided by the ImageCLEF
2014 Liver Annotation Task and collected by the CaReRa Project
(TUBITAK Grant 110E264), Bogazici University, EE Dept., Istanbul, Turkey
(www.vavlab.ee.boun.edu.tr). Each dataset includes:
1. A CT scan that contains the liver region and liver tumors
2. A segmentation of the liver
3. The lesion's bounding box
4. A set of 60 Computer Generated (CoG) features
5. A set of 73 User Express (UsE) annotations
Fig. 1. Four main steps in designing and building a machine learning algorithm for
the estimation of UsE from CoG and CT images: (a) Data collection: the input to
our system consists of CT scans, CoG features, and UsE annotations; (b) Feature
extraction: only the most informative CoG features are selected; (c) Model selection:
estimating the most appropriate model for the UsE annotations from the CoG features.
The model is selected after testing a variety of prediction models; and (d) Training the
selected model: the selected model's parameters are trained using all 50 cases.</p>
      <p>The CoG features can be divided into global image descriptors and pathology
descriptors. The global image descriptors cover basic, liver-wide global
statistical properties, such as the mean and variance of the gray-level values,
and the liver volume. They are extracted directly from the CT scans and the
associated segmentation. The pathology descriptors are computed for each liver
lesion. They reflect finer levels of visual information related to individual lesions.</p>
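      <p>To make the global descriptors concrete, here is a minimal sketch of how liver-wide statistics could be computed from a CT volume and its liver segmentation; the function name, the use of NumPy arrays, and the voxel-volume parameter are assumptions of this sketch, and only three of the global CoG features are shown.</p>
      <p>
```python
import numpy as np

def global_cog_features(ct, liver_mask, voxel_volume_mm3=1.0):
    """Liver-wide global CoG descriptors from a CT volume and a binary
    liver segmentation mask of the same shape."""
    liver_voxels = ct[liver_mask > 0]
    return {
        "LiverVolume": liver_voxels.size * voxel_volume_mm3,  # segmented liver volume
        "LiverMean": float(liver_voxels.mean()),              # mean gray level
        "LiverVariance": float(liver_voxels.var()),           # gray-level variance
    }

# Toy volume: a 6x6x6 "liver" of constant intensity 100 inside a 10x10x10 scan
ct = np.zeros((10, 10, 10))
liver_mask = np.zeros_like(ct)
liver_mask[2:8, 2:8, 2:8] = 1
ct[liver_mask > 0] = 100.0
print(global_cog_features(ct, liver_mask))
```
      </p>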
      <p>UsE annotations are also divided into global and pathology descriptors.
Global descriptors are divided into Liver and Vessel groups. These two groups
include annotations about the liver itself and its hepatic vasculature.
Pathology descriptors include two groups: Lesion and Lesion Component and contain
annotations about the selected lesion in the liver.</p>
      <p>
        A complete list of all UsE and CoG features with their associated descriptor
type can be found at [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Representative examples of CoG and UsE features are shown
in Table 2.1 and Table 2.1, respectively.
      </p>
    </sec>
    <sec id="sec-2-2">
      <title>2.2 Feature Extraction</title>
      <p>
        In this step we aim at extracting the optimal set of CoG features for the
estimation of each UsE annotation. Note that due to the diversity of lesions between
and within patients, estimating pathology descriptors is a much more challenging
task than estimating global descriptors. Thus, our analysis includes developing
two distinct classification models: one for the global CoG features and one for
the CoG pathology descriptors.
      </p>
      <p>First, we reduce the problem dimensionality by omitting 21 high-dimensional
features from the 60 provided CoG features (e.g., FourierDescriptors,
BoundaryScaleHistogram, BoundaryWindowHistogram). Thus, our analysis includes only CoG
features with scalar values. This results in 39 features, divided between global
and pathology-related features.</p>
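      <p>The scalar-only reduction amounts to a simple filter over the feature record; the sketch below assumes CoG features arrive as a name-to-value dictionary, and the example feature values are hypothetical.</p>
      <p>
```python
import numbers

def keep_scalar_features(cog):
    """Drop high-dimensional CoG features (histograms, Fourier descriptors, ...)
    and keep only scalar-valued entries, as in the dimensionality-reduction step."""
    return {name: value for name, value in cog.items()
            if isinstance(value, numbers.Number)}

# Hypothetical CoG record mixing scalar and vector-valued features
cog = {
    "LiverVolume": 1350.0,                   # scalar: kept
    "LiverMean": 92.4,                       # scalar: kept
    "FourierDescriptors": [0.1, 0.4, 0.2],   # vector: dropped
    "BoundaryScaleHistogram": [5, 9, 1, 0],  # vector: dropped
}
print(sorted(keep_scalar_features(cog)))  # ['LiverMean', 'LiverVolume']
```
      </p>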
      <p>The 18 global CoG features are: LiverVolume, LiverMean, LiverVariance,
VesselRatio, VesselVolume, MinLesionVolume, MaxLesionVolume, LesionRatio,
AllLesionsMean, AllLesionsVariance, AllLesionsSkewness, AllLesionsKurtosis,
AllLesionsEnergy, AllLesionsSmoothness, AllLesionsAbcssia, AllLesionsEntropy,
AllLesionsThreshold, NumberofLesions</p>
      <p>The 21 pathology-related CoG features are: LesionMean, LesionVariance,
LesionSkewness, LesionKurtosis, LesionEnergy, LesionSmoothness, LesionAbcssia,
LesionEntropy, LesionThreshold, Lesion2VesselMinDistance,
Lesion2VesselTouchRatio, VesselTotalRatio, VesselLesionRatio, Volume,
SurfaceArea, MaxExtent, AspectRatio, Sphericity, Compactness, Convexity,
Solidity</p>
      <p>We added 9 features to the 21 pathology features to describe the statistics
of the lesion itself. The new features are derived from a refined segmentation of
the liver obtained by thresholding the given lesion bounding box with its mean
gray level, followed by morphological operations. The added CoG features are:
1. The average gray-level intensity of the healthy part of the liver (LiverGrayMean)
2. The standard deviation of the gray-level intensity of the healthy part of the liver (LiverGrayStd)
3. The average gray-level intensity of the lesion (LesionGrayMean)
4. The standard deviation of the gray-level intensity of the lesion (LesionGrayStd)
5. The mean gray level of the lesion's contour (LesionBounderyGrayMean)
6. The standard deviation of the gray levels of the lesion's contour (LesionBounderyGrayStd)
7. The average gray-level difference between the healthy part of the liver and the lesion (LesionLiverGrayDiff)
8. The average gray-level difference between the healthy part of the liver and the lesion's contour (BounderyLiverGrayDiff)
9. The average gray-level difference between the lesion and its contour (lesionBounderyGrayDiff)</p>
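      <p>A minimal sketch of how the refined segmentation and the added gray-level features might be computed, assuming the lesion bounding box is given as a NumPy sub-volume and that lesions are hypodense relative to the liver; the function name, the choice of morphological opening, and the hypodensity assumption are not stated in the paper and are illustrative only.</p>
      <p>
```python
import numpy as np
from scipy import ndimage

def added_gray_features(healthy_vals, lesion_box):
    """Sketch of the 9 added CoG features: refine the lesion segmentation by
    thresholding the lesion bounding box at its mean gray level, clean it with
    a morphological opening, then collect gray-level statistics."""
    lesion_mask = lesion_box < lesion_box.mean()       # hypodense lesion (assumption)
    lesion_mask = ndimage.binary_opening(lesion_mask)  # remove small speckles
    contour = lesion_mask ^ ndimage.binary_erosion(lesion_mask)  # boundary voxels
    lesion_vals = lesion_box[lesion_mask]
    contour_vals = lesion_box[contour]
    f = {
        "LiverGrayMean": float(healthy_vals.mean()),
        "LiverGrayStd": float(healthy_vals.std()),
        "LesionGrayMean": float(lesion_vals.mean()),
        "LesionGrayStd": float(lesion_vals.std()),
        "LesionBounderyGrayMean": float(contour_vals.mean()),
        "LesionBounderyGrayStd": float(contour_vals.std()),
    }
    f["LesionLiverGrayDiff"] = f["LiverGrayMean"] - f["LesionGrayMean"]
    f["BounderyLiverGrayDiff"] = f["LiverGrayMean"] - f["LesionBounderyGrayMean"]
    f["lesionBounderyGrayDiff"] = f["LesionGrayMean"] - f["LesionBounderyGrayMean"]
    return f

# Toy example: hypodense 6x6x6 lesion (gray 20) in a 10x10x10 box of tissue (gray 100)
box = np.full((10, 10, 10), 100.0)
box[2:8, 2:8, 2:8] = 20.0
feats = added_gray_features(np.full(500, 100.0), box)
print(feats["LesionGrayMean"], feats["LesionLiverGrayDiff"])  # 20.0 80.0
```
      </p>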
      <p>The result is a modified CoG list with 18 global image descriptors and 30
pathology descriptors.</p>
    </sec>
    <sec id="sec-2-3">
      <title>2.3 Model Selection</title>
      <p>In this section we present the classification algorithms to be evaluated.
Predictive models can be characterized by two properties: parametric/non-parametric
and generative/discriminative. Parametric models have a fixed number of
parameters and have the advantage of often being faster to use. However, they
tend to rely on stronger assumptions about the nature of the data distributions.
In non-parametric classifiers, the number of parameters grows with the size of
the training data. Non-parametric classifiers are more flexible but are often
computationally intractable for large datasets.</p>
      <p>
        As to the generative/discriminative property, the main focus of generative
models is not the classification task itself but correctly modeling the underlying
probability distribution. They are called generative because sampling from them can
generate synthetic data points. Discriminative models, in contrast, do not attempt to
model the underlying probability distributions, but rather focus on the given task,
i.e., the classification itself. Therefore, they may achieve better performance in terms
of overall accuracy on the classification task. In general, when the probabilistic
distribution assumptions made are correct, a generative model requires less
training data than a discriminative method to reach the same performance,
but if the probabilistic assumptions are incorrect, discriminative methods will
do better [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Table 2.3 shows the characteristics of each classifier.
      </p>
      <p>
        For real-world datasets, there is so far no theoretically correct, general
criterion for choosing among the different models. Therefore, we examined four
classifiers, representative of the four different families of models [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]: K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Logistic Regression (LR),
and Support Vector Machine (SVM). Note that for each UsE annotation, the
outcome of each predictive model consists of a classification and a subset selection of
the optimal CoG features. Therefore, the selected CoG features are unique
to each model.
      </p>
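      <p>For illustration, the four classifier families can be instantiated with scikit-learn roughly as follows; parameter values follow the defaults listed below, and note that "gamma of 0" in 2014-era scikit-learn corresponded to the automatic 1/n_features setting, spelled 'auto' in current releases, so this sketch uses 'auto'.</p>
      <p>
```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# The four examined families, with the default parameters listed in the text
# ("gamma of 0" is rendered as gamma='auto', the modern equivalent).
classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
    "LDA": LinearDiscriminantAnalysis(),
    "LR": LogisticRegression(penalty="l2", C=1.0, tol=1e-4),
    "SVM": SVC(C=1.0, kernel="rbf", degree=3, gamma="auto", tol=1e-4),
}

# Smoke test on synthetic data: 40 cases, 5 scalar CoG features, a binary UsE label
rng = np.random.RandomState(0)
X = rng.randn(40, 5)
y = (X[:, 0] > 0).astype(int)
for name, clf in classifiers.items():
    clf.fit(X, y)
    print(name, "trained")
```
      </p>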
      <p>
        We used the Python scikit-learn machine learning library [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] to examine the
four selected classifiers. For each UsE descriptor, the best predicting classifier and
its feature subset were selected based on leave-one-out cross-validation with exhaustive
search, i.e., systematically enumerating all possible combinations of CoG features. Note
that since we develop two distinct classification models, one for the 18 global CoG
features and one for the 30 CoG pathology descriptors, the exhaustive search
was performed for each CoG group separately. Three UsE features (Cluster Size,
Lobe, and Segment) were estimated from the image itself and were not part of
the learning process (Section 2.6).
      </p>
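      <p>The per-annotation selection, leave-one-out cross-validation over an exhaustive enumeration of feature subsets, can be sketched as follows; the function name is hypothetical, the example uses KNN on toy data, and since the enumeration grows as 2^n, running it separately per CoG group (and on small candidate sets) is essential in practice.</p>
      <p>
```python
from itertools import combinations

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def best_feature_subset(X, y, clf):
    """Score every non-empty feature subset with leave-one-out
    cross-validation; return the best subset and its mean accuracy."""
    best_score, best_subset = -1.0, None
    for k in range(1, X.shape[1] + 1):
        for subset in combinations(range(X.shape[1]), k):
            score = cross_val_score(clf, X[:, subset], y, cv=LeaveOneOut()).mean()
            if score > best_score:
                best_score, best_subset = score, subset
    return best_subset, best_score

# Toy data: only feature 0 is informative for the (binary) UsE annotation
rng = np.random.RandomState(1)
X = rng.randn(30, 3)
y = (X[:, 0] > 0).astype(int)
subset, score = best_feature_subset(X, y, KNeighborsClassifier(n_neighbors=5))
print(subset, round(score, 2))
```
      </p>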
      <p>
        For simplicity, each model was tested with the set of default parameters
defined by the scikit-learn package [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The parameters for each model are:
- KNN: K=5, Euclidean distance, no threshold for shrinking.
- LDA: Euclidean distance, regularization strength of 1.0.
- LR: L2 penalty, regularization strength of 1.0, tolerance for the stopping criterion of 0.0001.
- SVM: penalty parameter of 1.0, RBF kernel with degree 3 and gamma of 0, tolerance for the stopping criterion of 0.0001.
      </p>
    </sec>
    <sec id="sec-2-4">
      <title>2.4 Training</title>
      <p>Once the classifier that produced the highest classification score was found in the
previous step, we trained it using all 50 cases. As a result, for each UsE, we
obtain a trained model consisting of a classifier along with an optimized set of CoG
features (i.e., the selected features).</p>
    </sec>
    <sec id="sec-2-5">
      <title>2.5 Evaluation</title>
      <p>The evaluation phase of the challenge consists of the estimation of the UsE
annotations from the given CoG features and the images. Unlike in the training
phase, the UsE annotations are not provided here to verify our accuracy.</p>
      <p>To apply the resulting classifier in the testing phase on an unseen dataset,
we first extract and extend the CoG features according to the scheme described in
Section 2.3. Then, for each test case, we apply the prediction model with the
highest score according to the training-phase results. The result is a
UsE annotation for each unseen case.</p>
    </sec>
    <sec id="sec-2-6">
      <title>2.6 Estimation of the Lesion Lobe, Lesion Segment, and Cluster Size</title>
      <p>As noted in Section 2.3, the Cluster Size (i.e., the number of lesions inside the
lesion bounding box), the Lesion Lobe, and the Lesion Segment containing the
lesion were not part of the general learning process but were rather estimated
from the image itself. The estimation of these fields was performed as follows:</p>
      <p>For the Lesion Lobe: we compute the centers of the lesion and of the liver. The
lesion lobe is estimated as the right lobe if the lesion center is in the right part
of the liver, and as the left lobe if the lesion center is in the left part of the liver.
If the two centers overlap, we estimate it to be the Caudate Lobe.</p>
      <p>The Lesion Segment is estimated as follows: if the previous stage
estimated that the lesion is in the right lobe, we assess that the lesion is
located in the fourth segment. Alternatively, if the previous stage
estimated that the lesion is in the left lobe, we analyze whether the lesion is
located above or below the center of the liver. If it is located above the center of
the liver, we assess that the lesion is in segments 5-6; if it is located below
the center of the liver, we assess that the lesion is in segments 7-8.</p>
      <p>For the Cluster Size: we define the Cluster Size as the number of lesions, i.e.,
the value listed in the CoG field NumberofLesions, except when that value is
higher than 6, in which case we define the number of lesions to be 6.</p>
      <p>Applying our method to the ImageCLEF-Liver CT
Annotation challenge datasets results in the estimation of UsE annotations at a
completeness level of 95% and an accuracy of 91% for 10 unseen cases.</p>
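      <p>These three decision rules can be sketched as simple functions; the coordinate conventions (which axis separates right from left, or above from below) and the function names are assumptions of this sketch.</p>
      <p>
```python
def estimate_lobe(lesion_center_x, liver_center_x, tol=0.0):
    """Right lobe if the lesion center lies right of the liver center,
    left lobe if left of it, Caudate Lobe if the two centers overlap."""
    if abs(lesion_center_x - liver_center_x) <= tol:
        return "Caudate Lobe"
    return "Right lobe" if lesion_center_x > liver_center_x else "Left lobe"

def estimate_segment(lobe, lesion_center_z, liver_center_z):
    """Segment rule from the text: right lobe -> segment 4; left lobe ->
    segments 5-6 if above the liver center, segments 7-8 otherwise."""
    if lobe == "Right lobe":
        return "4"
    return "5-6" if lesion_center_z > liver_center_z else "7-8"

def estimate_cluster_size(number_of_lesions):
    """Cluster Size is NumberofLesions, clipped to at most 6."""
    return min(number_of_lesions, 6)

print(estimate_lobe(12.0, 10.0))                # Right lobe
print(estimate_segment("Left lobe", 3.0, 5.0))  # 7-8
print(estimate_cluster_size(9))                 # 6
```
      </p>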
      <p>Training and test results for each of the classifiers are shown in Table 2.6.
Due to the simplicity of estimating the global UsE annotations, we present their
results per group. A detailed presentation is provided for the pathology-related
features, which is indeed a much more challenging task. It can be seen that all
classifiers successfully estimated the global features.</p>
      <p>As mentioned, three additional features were estimated from the images and
were not part of the learning process. These features are ClusterSize,
LesionLobe, and LesionSegment; their accuracies on the training datasets were 0.75, 0.9,
and 0.7, respectively. The completeness level of our method is 0.95, due to omitting 3
UsE annotations from the analysis: LesionComposition, Shape, and
MargenType.</p>
      <p>The optimized sets of CoG features (i.e., the selected features) that were
obtained by the model with the highest score after the leave-one-out procedure
are shown in Table 5. Note that the set of added features (Section 2.2) were
indeed selected by the model, which confirms their usefulness.</p>
      <p>[Table 5, flattened in the source and not reliably reconstructible: for each
pathology-related UsE annotation, grouped into Lesion-Lesion, Lesion-Area, and
Lesion-Component, the best classifier (KNN, LDA, or Any) and the selected CoG
features; the selected sets repeatedly include the added features
BounderyLiverGrayDiff, LesionGrayMean, LesionGrayStd, LesionBounderyGrayMean,
LesionBounderyGrayStd, LiverGrayStd, and LesionLiverGrayDiff, together with
Entropy, Kurtosis, Solidity, and SurfaceArea.]</p>
      <p>We have presented an approach to estimate UsE annotations from CoG features
and associated CT scans. We extended the CoG features with 9 additional
features to enhance the learning process. On the ImageCLEF-Liver CT Annotation
challenge, our approach provides an average accuracy of 91% with a
completeness level of 95% when applied to 10 unseen test cases. This work provides
a reliable estimation of a uniform clinical report from imaging features and therefore
constitutes another step toward an automatic CBIR system by enabling efficient
search in clinical reports. Future work consists of examining an additional set
of classifiers and extending the completeness of our algorithm to estimate the
full set of UsE annotations and values (e.g., estimating segments 1-3 of the Lesion
Segment feature).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Kokciyan</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turkay</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uskudarli</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yolum</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bakir</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Semantic Description of Liver CT Images: An Ontological Approach</article-title>
          .
          <source>IEEE Journal of Biomedical and Health Informatics</source>
          , vol.
          <volume>2194</volume>
          (
          <year>2014</year>
          ):
          <fpage>11</fpage>
          , .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2. Duda, Richard O.,
          <string-name>
            <surname>Peter</surname>
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Hart</surname>
            , and
            <given-names>David G.</given-names>
          </string-name>
          <string-name>
            <surname>Stork</surname>
          </string-name>
          .
          <article-title>Pattern Classification</article-title>
          .
          <source>New York: John Wiley</source>
          , Section
          <volume>10</volume>
          (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Murphy</surname>
          </string-name>
          , Kevin P.
          <article-title>Machine learning: a probabilistic perspective</article-title>
          . MIT Press, (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>A. Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jordan</surname>
            ,
            <given-names>M. I.</given-names>
          </string-name>
          :
          <article-title>On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes</article-title>
          .
          <source>Advances in neural information processing systems</source>
          ,
          <volume>14</volume>
          ,
          <fpage>841</fpage>
          . (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Thirion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dubourg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanderplas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Passos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cournapeau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brucher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Perrot</surname>
          </string-name>
          , and Duchesnay E.
          <article-title>Scikit-learn: Machine Learning in Python</article-title>
          .
          <source>Journal of Machine Learning Research</source>
          ,
          <volume>12</volume>
          :
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          , (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Marvasti</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , Kokciyan, N., Turkay, R.,
          <string-name>
            <surname>Yaz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yolum</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , Uskudarl , S. and
          <string-name>
            <surname>Acar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>ImageCLEF Liver CT Image Annotation Task 2014</article-title>
          . In: CLEF 2014 Evaluation Labs and Workshop, Online Working Notes (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Caputo</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Martinez-Gomez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Villegas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patricia</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marvasti</surname>
            , N., Uskudarl ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paredes</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cazorla</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia-Varea</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Morell</surname>
          </string-name>
          , V.:
          <article-title>ImageCLEF 2014: Overview and analysis of the results</article-title>
          . Springer Berlin Heidelberg. (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Akgül</surname>
            ,
            <given-names>C. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rubin</surname>
            ,
            <given-names>D. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Napel</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beaulieu</surname>
            ,
            <given-names>C. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greenspan</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Content-based image retrieval in radiology: current status and future directions</article-title>
          .
          <source>Journal of Digital Imaging</source>
          ,
          <volume>24</volume>
          (
          <issue>2</issue>
          ),
          <fpage>208</fpage>
          -
          <lpage>222</lpage>
          . (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Barzegar Marvasti</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Akgül</surname>
            ,
            <given-names>C. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kökciyan</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Üsküdarlı</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yolum</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Türkay</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bakır</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Clinical experience sharing by similar case retrieval</article-title>
          .
          <source>In: Proceedings of the 1st ACM international workshop on Multimedia indexing and information retrieval for healthcare</source>
          (pp.
          <fpage>67</fpage>
          -
          <lpage>74</lpage>
          ). ACM. (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>