<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Content-based Image Retrieval by Ontology-based Object Recognition</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jean-Pierre Schober</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thorsten Hermes</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Otthein Herzog</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>TZI Center for Computing Technology Universitätsallee</institution>
          <addr-line>21-23 28359 Bremen</addr-line>
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The main disadvantage of image retrieval systems is their lack of domain knowledge. A retrieval system therefore has to focus on primitive features, as Eakins and Graham call them [3]. Due to this missing background knowledge of the domain, the retrieval error rate is usually unsatisfactory, or the search options are limited to syntactic queries. Knowledge-based techniques allow for semantic searches, filling the “semantic gap” [4]. In this paper we present OntoPic, a supervised learning system that uses ontologies coded in DAML+OIL to provide the domain knowledge. Combined with a DL reasoner for ontologies, the main goal is to achieve a new level of result quality while allowing semantic searches. The main advantage of this approach is the use of the reasoner as a classifier, enabling a dual use of the ontology: the same domain knowledge serves both better object recognition, the basis for satisfactory results, and a semantic search. Our work is applied to the domain of landscape images.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Content–based image retrieval systems follow mainly two different approaches [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The first approach offers searches for local or
global image features, for example color or texture. The other approach follows
the idea of adding keywords to the images as an annotation. Humans perform
very well at annotating images, since they normally have extensive knowledge
of the domain an image belongs to. But besides the fact that indexing a large
number of images is a tedious task, humans tend to annotate images subjectively,
which invalidates the annotation effort for others [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Systems
that follow the second approach offer support for the manual annotation or
try to fully automate this process. The target is to minimize the subjectivity
of manual annotation by guiding the annotation process or to make human
assistance dispensable [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        According to Eakins and Graham [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] image retrieval can be categorized
into three levels: primitive, logical, and abstract. The first retrieval approach
presented in the previous paragraph reaches only the primitive, syntactic level,
which is the lowest one. The second approach allows a search for logical objects
in the image and therefore fulfills a requirement for the higher, semantic levels.
But as Eakins and Graham point out, this is not true content–based image
retrieval if humans provide the content information manually. Besides this,
the second approach is for most use cases1 the superior one. Only the presence
of annotated keywords allows for a so–called semantic search [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The main
advantage of the semantical search is the fact that the user does not need to
have a concrete idea of the image he is looking for. He only needs an idea of the
context the image should belong to.
      </p>
      <p>In this paper we present a supervised learning system, called OntoPic, which
provides automated keyword annotation for images and content–based
image retrieval on a semantic level.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        In this section we present some instances of image retrieval systems in
chronological order. Three of the first approaches in content–based image analysis
and retrieval in the early ’90s were Photobook [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], the QBIC system (Query
by Image Content) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], and the IRIS (Image Retrieval for Information
Systems) system [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. QBIC offers a model search by means of single images and
video sequences, considering only one data source in each case, i.e., single images
or videos. Photobook provides a set of interactive tools, including interactive
annotation capabilities for new images, for browsing and searching images and
sequences. The design of the QBIC system focuses on the idea that
similarities between images should be defined exclusively by syntactic features.
(1 There are domains where keyword indexing of images is not possible due to their nature;
trademark logos are an example of this kind of image.)
      </p>
      <p>
        Photobook, in contrast, uses a category search based on text information associated with an
image and a direct comparison of images by the computation of so–called
“Eigenimages” [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. IRIS realized an automatic generation of content
description of an image for special domains [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The project resulted in the ImageMiner
system by IBM [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        Breen et al. propose an approach that integrates ontologies into the retrieval
process [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In contrast to our approach, the ontology is not fully integrated but is
placed on top of the retrieval process. The main object classification is done by
a neural network.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Reasoning about Ontologies</title>
      <p>
        OntoPic has been developed at the Center for Computing Technologies at the
University of Bremen. It is integrated into the PictureFinder2 system which
provides the basic feature extraction and segmentation capabilities for OntoPic.
For the automated classification of image regions, ontologies are used to provide
the needed domain knowledge. For this use case it is necessary that the ontology
language provides reasoning support. OIL, and its successor DAML+OIL, offer this
reasoning support through a well-defined mapping to the Description Logic (DL)
SHIQ [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        In the last section of this paper we discuss the capabilities of OntoPic. First
we present the steps that are needed in OntoPic to extract content from a binary
image. Further information about RACER and the semantics of the notions used
throughout this paper is given in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <sec id="sec-3-1">
        <title>From Pixels to Meanings</title>
        <p>OntoPic consists of three parts: a supervised training, an analysis, and a retrieval
part. The first step to use OntoPic in an actual domain is the design of an
ontology. Afterwards, this ontology can be trained with images from the concrete
domain. When these steps are done, OntoPic is ready for use in that domain
and can automatically analyze and annotate images. Analysed images can then
be retrieved through queries by the users of the system.</p>
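        <p>The three parts and the workflow described above can be sketched in code. The following is an illustrative Python sketch: SimpleOntology, OntoPic, and all method names are hypothetical stand-ins, not the actual system's API.</p>
        <preformat><![CDATA[
```python
# Illustrative sketch of the three OntoPic parts: supervised training,
# analysis (automatic annotation), and retrieval.
# All names here are hypothetical, not the real OntoPic interface.

class SimpleOntology:
    """Toy stand-in for the trained ontology: maps a discretized
    feature tuple directly to a concept name."""
    def __init__(self):
        self.examples = {}

    def add_training_example(self, concept, features):
        self.examples[features] = concept

    def classify(self, features):
        return self.examples.get(features, "Unknown")


class OntoPic:
    def __init__(self, ontology):
        self.ontology = ontology
        self.annotations = {}          # image id -> set of concept names

    def train(self, image_id, region_features, concept):
        """Supervised training: assign an ontology concept to a region."""
        self.ontology.add_training_example(concept, region_features)

    def analyze(self, image_id, regions):
        """Analysis: classify every region and store the keywords."""
        self.annotations[image_id] = {self.ontology.classify(r) for r in regions}

    def retrieve(self, query_concept):
        """Retrieval: return images annotated with the queried concept."""
        return [img for img, cs in self.annotations.items() if query_concept in cs]
```
]]></preformat>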
        <p>The first step in designing an ontology is to define the hierarchy of the domain
concepts. During the design of the ontology there are important design decisions
to be made, which can have a major impact on later classification results. The
main point to keep in mind is that the capabilities of the segmentation and
feature extraction limit the useful degree of specialization.</p>
        <p>(2 An online demo of the PictureFinder system is available at http://www.tzi.de/bv/pfdemo/)</p>
        <p>After a domain-dependent ontology is drafted, it must be enriched, i.e.,
trained, for use in the concrete domain. The trainer can load the
ontology into OntoPic and train it by assigning ontology concepts to image
objects.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Training Phase</title>
        <p>
          After the trainer has assigned some images to the training set, he can either
start an automatic color–region boundary detection for the images, mark the
image boundaries directly by drawing into the image, or use a combination of
these methods. During this step it is vital to have the correct boundaries of
the objects in order to avoid training concepts incorrectly. Therefore the trainer
should always check and correct the results of the automated boundary detection
to end up with a “semantically meaningful” segmentation [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. After the training is
completed, the ontology can be updated with the training results.
        </p>
        <p>In order to do so, several local features are extracted and used for forming new
concept axioms on color, texture, background membership, and spatial relations.
The shape of a region is not yet taken into account because of the difficult
problems originating from the different viewpoints from which an image could
have been taken.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Feature Extraction</title>
        <p>The extracted features of a region are in general represented by continuous
values. For classification, these values have to be discretized. The advantage is
a smaller number of concepts to be trained, as every trained concept instance
represents a set of concepts covered by the discretized feature range. The
disadvantage is the occurrence of overlaps in the feature space. OntoPic deals
with these overlaps by applying background knowledge and is capable of
discarding incoherent concept assignments, as described later.</p>
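        <p>The discretization step can be illustrated with a minimal sketch; the bin boundaries and range labels below are assumptions for illustration, not the actual values used by OntoPic.</p>
        <preformat><![CDATA[
```python
# Illustrative discretization of a continuous feature value into a named
# range before classification. Bin edges and labels are assumed.

def discretize(value, bins):
    """Map a continuous value to the label of the bin it falls into.
    `bins` is a list of (upper_bound, label) pairs, sorted ascending."""
    for upper, label in bins:
        if value <= upper:
            return label
    return bins[-1][1]          # clamp values above the last bound

# Hypothetical saturation ranges on a 0..1 scale.
SATURATION_BINS = [(0.25, "grayish"), (0.5, "moderate"),
                   (0.75, "strong"), (1.0, "vivid")]
```
]]></preformat>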
        <sec id="sec-3-3-1">
          <title>Color, Texture and Background Membership</title>
          <p>
            The color feature is based on the Color Naming System (cf. [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]). The color
name is acquired by a conversion from RGB to HSB3. A color name is either a
member of the set of chromatic or of achromatic color names. A chromatic color
name is a composition of a saturation and lighting prefix with a description for
the hue value, e.g., “very–light–vivid–green”.
          </p>
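          <p>A minimal sketch of such a naming scheme follows; the prefixes, thresholds, and hue boundaries are illustrative assumptions, not Lammens' actual Color Naming System model.</p>
          <preformat><![CDATA[
```python
import colorsys

# Hedged sketch of CNS-style color naming: convert RGB to HSB (HSV)
# and compose "<lighting>-<saturation>-<hue>" names for chromatic
# colors; achromatic colors get a gray-scale name. All boundaries
# below are illustrative assumptions.

HUE_NAMES = [(30, "red"), (90, "yellow"), (150, "green"),
             (210, "cyan"), (270, "blue"), (330, "magenta"), (360, "red")]

def color_name(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s < 0.1:                       # achromatic: scale by brightness
        return ["black", "gray", "white"][min(int(v * 3), 2)]
    hue = next(name for bound, name in HUE_NAMES if h * 360 < bound)
    light = "very-light" if v > 0.8 else "light" if v > 0.5 else "dark"
    sat = "vivid" if s > 0.7 else "moderate"
    return f"{light}-{sat}-{hue}"
```
]]></preformat>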
          <p>A texture is either of kind multiarea, homogeneous, or speckled. A texture
of kind multiarea can additionally be described as rippled or hatched, and a
speckled texture can be hatched, too.
(3 The HSB color system is analogous to the HSV system but provides a more intuitive
description of colors.)</p>
          <p>The background membership is either true or false. As an indicator for this
membership, the intersection of a region with the image border is taken.</p>
          <p>Because of the features’ different discriminating power, they are also weighted
differently. Background membership, for example, is the feature with the lowest
weight. The assignment of weights is done in a postprocessing step during the
interpretation of the classification results.</p>
        </sec>
        <sec id="sec-3-3-2">
          <title>Spatial Relations</title>
          <p>Dealing with spatial relations in a two-dimensional area is somewhat difficult
and can easily lead to false categorizations of image regions. In the landscape
domain it is nearly impossible to find universally valid rules about spatial
relations between different concepts. This is due to the nature of photographs,
which in the worst case may show a reversed scene. But these exceptions are
negligible compared to the advantages that arise from a rule base. In
the landscape domain the horizon is the line which divides sky elements from
other landscape objects. Knowledge like “water is never above sky” can be very
valuable and can help to avoid misclassifications.</p>
          <p>There is often a correlation between two concepts concerning a spatial
relation. For example, an ocean often lies right beside a beach, and a lake beside
grassland. These are not universal rules but valuable pieces of evidence, which
can make the difference between a right and a wrong classification.</p>
          <p>The following spatial relations are taken into account: isAbove, isBelow and
liesBeneath.
</p>
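          <p>A sketch of how these three relations could be derived from axis-aligned region bounding boxes follows; the coordinate convention (y grows downward) and the adjacency threshold for liesBeneath are assumptions.</p>
          <preformat><![CDATA[
```python
# Illustrative derivation of isAbove, isBelow, and liesBeneath from
# bounding boxes (x, y, w, h) with the origin at the top-left corner.
# The near-touching threshold for liesBeneath is an assumption.

def spatial_relations(a, b):
    """Return the relations in which region `a` stands to region `b`."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    rels = set()
    if ay + ah <= by:                 # a's bottom edge above b's top edge
        rels.add("isAbove")
    if by + bh <= ay:                 # b's bottom edge above a's top edge
        rels.add("isBelow")
    # liesBeneath: directly below with touching or near-touching edges
    # and horizontal overlap.
    if 0 <= ay - (by + bh) <= 2 and ax < bx + bw and bx < ax + aw:
        rels.add("liesBeneath")
    return rels
```
]]></preformat>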
        </sec>
      </sec>
      <sec id="sec-3-4">
        <title>Axiom–Building</title>
        <p>After extracting the region features they can be used to build concept axioms for
the manually trained concepts from the training set. The principle is to build a
mapping between the low–level features and the high–level concepts via the DL.
</p>
        <sec id="sec-3-4-1">
          <title>Challenges and Solutions</title>
          <p>The main problem of classifying objects via a DL is the need for an exact match
with a formerly manually trained concept occurrence. For satisfying results, it
is necessary to train nearly all different occurrences of one concept. This leads
to the danger of strong overlaps in the feature space, i.e., an overspecification.
Another problem arises from the fact that the results of the feature
extraction have to be treated as uncertain as long as there are no perfect feature
extraction algorithms.</p>
          <p>
            The classical solution to this problem is a fuzzy logic approach, which
assigns a degree of membership to a concept. Unfortunately, there are as yet no
fuzzy reasoning systems with the power of DL systems [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ] that would cover this
approach.
          </p>
          <p>
            An approach to this problem is a pseudo–extension of the DL to a fuzzy logic
or—to be more specific—a reduction of the fuzzy logic for use inside a DL [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ].
The idea is to enrich the concept names with information about the degree of
membership, resulting in a concept that we call μ–concept. For example, the
μ–concept Tree≥0.5 is interpreted as an instance of the concept Tree with degree
c ≥ 0.5. A parsing and interpretation of these concept names allows for an
evaluation of the results. The logical relations between the different μ–concepts
have to be defined inside the ontology.
          </p>
          <p>In our approach we do not use numbers to enrich the concepts, but identifiers
for every feature. If a feature is the source for the belief that a region belongs
to a concept, the identifier of the feature is added to the concept name. With
this approach it is possible to classify image regions by a specific characteristic
that was never trained before.</p>
          <p>The three features color, texture and background are identified by the
characters C, T and B. A spatial relation is treated as a special feature as described in
the next section. For every trained concept CN it is necessary to auto–generate
the following rules, which define the logical coherences between the enriched
concept names4:</p>
          <p>CN<sub>CT</sub> ≐ CN<sub>C</sub> ⊓ CN<sub>T</sub>, CN<sub>CB</sub> ≐ CN<sub>C</sub> ⊓ CN<sub>B</sub>, CN<sub>TB</sub> ≐ CN<sub>T</sub> ⊓ CN<sub>B</sub>, CN<sub>CBT</sub> ≐ CN<sub>C</sub> ⊓ CN<sub>B</sub> ⊓ CN<sub>T</sub></p>
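          <p>The auto-generation of these rules can be sketched as follows. The rendering of the DL equivalences as plain strings and the ordering of the feature identifiers are illustrative; this is not RACER input syntax.</p>
          <preformat><![CDATA[
```python
from itertools import combinations

# Illustrative generation of the equivalence axioms above: for each
# trained concept CN, a name enriched with several feature identifiers
# is defined as the conjunction of the single-feature names.
# "==" stands for definitional equivalence, "and" for DL conjunction.

FEATURES = "CTB"   # color, texture, background membership

def axioms_for(concept):
    rules = []
    for k in range(2, len(FEATURES) + 1):
        for combo in combinations(FEATURES, k):
            left = concept + "".join(combo)
            right = " and ".join(concept + f for f in combo)
            rules.append(f"{left} == {right}")
    return rules
```
]]></preformat>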
        </sec>
        <sec id="sec-3-4-2">
          <title>Extending the Knowledge Base</title>
          <p>
            After the training process is finished, the Terminological Box (TBox)5 is
extended by the training results. For every image object with corresponding
feature identifiers F<sub>1</sub>...F<sub>m</sub>, feature roles R<sub>1</sub>...R<sub>m</sub>, feature values (i.e., role
fillers) V<sub>1</sub>...V<sub>m</sub>, and an assigned concept CN, the following statements are added
to the TBox:
(4 We use the DL notation in this paper. More information about the semantics can be
found in [<xref ref-type="bibr" rid="ref5">5</xref>].)
          </p>
          <p>(5 The Terminological Box holds the general concept inclusions (GCIs). Together with the
extensional knowledge in the Assertional Box (ABox), it forms a knowledge base.)</p>
          <p>The spatial relations of an object receive special treatment. For every
region, the spatial relations to its neighbours are determined. In detail, a match
for the spatial relation feature is given if the region to be classified is in the
same spatial relation to a neighbour as a formerly trained one.</p>
        </sec>
      </sec>
      <sec id="sec-3-5">
        <title>Classifying an Image</title>
        <p>To classify a new image, it first has to be segmented into different image regions.
Subsequently, an ABox individual is created for every region. The extracted
region features are assigned to the individuals via the proper role declarations.
To classify the regions, it is only necessary to let the reasoner realize the ABox
and query for the individual direct types of the region instances. While parsing
the enriched concept names it is possible to weight the results. For example, the
concept WaterCT is preferred over SkyT as the direct type of an individual, i.e.,
as the result of the classification.</p>
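        <p>The weighting by parsed feature identifiers can be sketched like this. The numeric weights are assumptions; the paper only states that background membership is the lowest-weighted feature.</p>
        <preformat><![CDATA[
```python
# Illustrative ranking of direct types: parse the trailing feature
# identifiers off an enriched concept name and prefer the candidate
# matched by the most (and most discriminating) features.
# The weight values are assumptions.

WEIGHTS = {"C": 3, "T": 2, "B": 1}   # background membership weighted lowest

def score(concept_name):
    """Sum the weights of the trailing feature identifiers."""
    i = len(concept_name)
    while i > 0 and concept_name[i - 1] in WEIGHTS:
        i -= 1
    return sum(WEIGHTS[f] for f in concept_name[i:])

def best_type(direct_types):
    """Pick the direct type supported by the heaviest feature evidence."""
    return max(direct_types, key=score)
```
]]></preformat>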
        <sec id="sec-3-5-1">
          <title>Non–Concepts/Postprocessing</title>
          <p>In the previous section we described an approach to use a DL in combination with
a reasoner for classifying image objects. Now we want to take advantage of the
power that a reasoner gives us for the classification by using domain knowledge
about spatial relations between the objects to end up with a consistent image
classification.</p>
          <p>At this point a new problem arises: if we simply define axioms like
“water is never above sky”, we are likely to end up with inconsistent ABoxes.
However, a mere inconsistency in the ABox is not the information we need; we
need to know the cause of the inconsistency. Therefore, we introduce a new
concept: the non–concept. Considering the prior example: “water above sky is
non–water.” This way we get a consistent ABox, while the presence of a
non–concept tells us both that there is an inconsistency and what caused it. An
example of a simple non–concept definition is:</p>
          <p>Water ⊓ ∃isAbove.Sky ⊑ NonWater</p>
          <p>The existence of non–concept instances is interpreted as an inconsistency,
which is then resolved by OntoPic externally: OntoPic removes the offending
concept assignment from the individual, starting with the non–concept
instantiations that have the lowest degree of membership. This process is repeated
until the result is “consistent”, i.e., contains no non–concept instantiations.</p>
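          <p>A minimal sketch of this consistency loop follows, assuming assignments carry explicit degrees of membership; in the actual system, realization is delegated to RACER, so the data layout here is an illustrative assumption.</p>
          <preformat><![CDATA[
```python
# Illustrative consistency loop: while a non-concept instantiation is
# present, discard the offending assignment with the lowest degree of
# membership and re-check. `assignments` maps region -> list of
# (concept, degree) pairs; degrees and the rule set are assumed.

def is_non_concept(concept):
    return concept.startswith("Non")

def make_consistent(assignments):
    while True:
        offending = [(deg, region, concept)
                     for region, cs in assignments.items()
                     for concept, deg in cs if is_non_concept(concept)]
        if not offending:
            return assignments        # no non-concepts left: "consistent"
        deg, region, concept = min(offending)    # lowest degree first
        base = concept[len("Non"):]   # the concept the rule objected to
        assignments[region] = [(c, d) for c, d in assignments[region]
                               if c not in (concept, base)]
```
]]></preformat>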
        </sec>
      </sec>
      <sec id="sec-3-6">
        <title>Retrieval</title>
        <p>The retrieval process is also supported by the ontology. Due to its
hierarchical organization, the ontology provides a thesaurus for user
queries. Furthermore, the ontology offers this hierarchy to support the
formulation of queries. Additionally, the domain-dependent knowledge can be
combined to allow searching for scenes. For example, the knowledge base could
hold the information that sky, a beach, and water form a beach scene.</p>
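        <p>The thesaurus use of the hierarchy can be sketched as query expansion over subconcepts; the toy hierarchy below is an illustrative assumption, not the trained landscape ontology.</p>
        <preformat><![CDATA[
```python
# Illustrative query expansion over a concept hierarchy: a query for a
# concept also matches images annotated with any transitive subconcept.
# The hierarchy here is a toy example.

HIERARCHY = {"Water": ["Ocean", "Lake", "River"],
             "LandscapeElement": ["Water", "Sky", "Beach"]}

def expand(concept):
    """Return the concept together with all transitive subconcepts."""
    result = {concept}
    for sub in HIERARCHY.get(concept, []):
        result |= expand(sub)
    return result

def retrieve(query, annotations):
    """Return images whose annotations intersect the expanded query."""
    wanted = expand(query)
    return [img for img, concepts in annotations.items()
            if wanted & concepts]
```
]]></preformat>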
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Example</title>
      <p>In this section we present an example to demonstrate the power of OntoPic.
We use an ontology containing 71 concepts. The concepts of this ontology were
trained with 75 images, leading to 450 assignments between concepts and image
regions. Figure 1 shows on the left a classified image prior to the coherence
check. On the right a graphical representation of the underlying ABox is shown.</p>
      <p>There are three misclassified regions. The eight other regions are correctly
classified. The misclassified region at the bottom has thirteen matches, all of
them with a low degree of membership. Therefore OntoPic discards this result
in the next step. The other two misclassified regions also have multiple concept
assignments. Some of these assignments are correct, others are not. But as shown
in Figure 2, these wrong concept assignments are discarded during the coherence
check due to the applied domain knowledge.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>Ontologies are a powerful tool for describing domain knowledge. With their
mapping to a DL, ontologies become useful for various applications.</p>
      <p>
        We have shown that it is possible to use a DL during the classification process
and to benefit from the powerful reasoning capabilities RACER offers [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The
DL offers the opportunity to use background knowledge about a specific domain
and to raise the quality of the classification results.
      </p>
      <p>Several problems were addressed and solutions proposed. It has been shown
that the capability to reason directly over a fuzzy logic would be advantageous
for future applications.</p>
      <p>Another feature of OntoPic that was not addressed in this paper is the
aggregation of classified objects. OntoPic is capable of aggregating different parts
of an object by querying for the counterparts. This is possible due to the
powerful query language that was recently implemented in RACER.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Breen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          .
          <article-title>Ontology-based image classification using neural networks</article-title>
          .
          <source>In Proceedings of SPIE Internet Multimedia Management Systems</source>
          , III, pages
          <fpage>198</fpage>
          -
          <lpage>208</lpage>
          , Boston, MA, USA,
          <year>July 2002</year>
          . SPIE.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Eakins</surname>
          </string-name>
          .
          <article-title>Automatic image content retrieval - are we getting anywhere</article-title>
          .
          <source>In Proceedings of Third International Conference on Electronic Library and Visual Information Research</source>
          , pages
          <fpage>123</fpage>
          -
          <lpage>135</lpage>
          , May
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Eakins</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Graham</surname>
          </string-name>
          .
          <article-title>Content-based image retrieval: A report to the JISC technology applications programme</article-title>
          ,
          <year>January 1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V. N.</given-names>
            <surname>Gudivada</surname>
          </string-name>
          and
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          .
          <article-title>Content-based image retrieval systems</article-title>
          .
          <source>In Proceedings of the 1995 ACM 23rd annual conference on Computer science, number 28 in 9</source>
          , pages
          <fpage>18</fpage>
          -
          <lpage>22</lpage>
          , Nashville, Tennessee, USA,
          <year>1995</year>
          . ACM Press.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>V.</given-names>
            <surname>Haarslev</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Möller</surname>
          </string-name>
          .
          <article-title>RACER user's guide and reference manual</article-title>
          ,
          <source>2004. Version 1</source>
          .
          <fpage>19</fpage>
          . Hamburg, Germany: University of Hamburg, Computer Science Department,
          <year>April 2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hermes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Klauck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kreyß</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          .
          <article-title>Image retrieval for information systems</article-title>
          . In W. Niblack and R. Jain, editors,
          <source>Storage and Retrieval for Image and Video Databases III</source>
          , volume
          <volume>2420</volume>
          <source>of SPIE Proceedings</source>
          , pages
          <fpage>394</fpage>
          -
          <lpage>405</lpage>
          , San Jose, CA, USA,
          <year>February 1995</year>
          . SPIE.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>I.</given-names>
            <surname>Horrocks</surname>
          </string-name>
          .
          <article-title>DAML+OIL: A description logic for the semantic web</article-title>
          .
          <source>IEEE Data Engineering Bulletin</source>
          ,
          <volume>25</volume>
          (
          <issue>1</issue>
          ):
          <fpage>4</fpage>
          -
          <lpage>9</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kreyß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Röper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Alshuth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hermes</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Herzog</surname>
          </string-name>
          .
          <article-title>Video retrieval by still image analysis with ImageMiner</article-title>
          .
          <source>In SPIE Proceedings: Storage and Retrieval for Image and Video Databases V</source>
          , pages
          <fpage>236</fpage>
          -
          <lpage>247</lpage>
          , San Jose, CA, USA,
          <year>February 1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Lammens</surname>
          </string-name>
          .
          <article-title>A Computational Model of Color Perception and Color Naming</article-title>
          .
          <source>PhD thesis</source>
          , State University of New York, Buffalo,
          <year>June 1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>W.</given-names>
            <surname>Niblack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Barber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Equitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Flickner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Glasman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Petkovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Yanker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Faloutsos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Taubin</surname>
          </string-name>
          .
          <article-title>The QBIC Project: Querying Images By Content Using Color, Texture, and Shape</article-title>
          .
          <source>In IS&amp;T/SPIE Symposium on Electronical Imaging Science &amp; Technology</source>
          , San Jose, CA, USA,
          <year>February 1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pentland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Picard</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Sclaroff</surname>
          </string-name>
          .
          <article-title>Photobook: Tools for Content-Based Manipulation of Image Databases</article-title>
          .
          <source>In SPIE Proceedings: Storage and Retrieval for Image and Video Databases II</source>
          , pages
          <fpage>34</fpage>
          -
          <lpage>47</lpage>
          , San Jose, CA, USA,
          <year>February 1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. W. M.</given-names>
            <surname>Smeulders</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Worring</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Santini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gupta</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Jain</surname>
          </string-name>
          .
          <article-title>Content-based image retrieval at the end of the early years</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>22</volume>
          (
          <issue>12</issue>
          ):
          <fpage>1349</fpage>
          -
          <lpage>1380</lpage>
          ,
          <year>December 2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>U.</given-names>
            <surname>Straccia</surname>
          </string-name>
          .
          <article-title>Reducing fuzzy description logics into classical description logics</article-title>
          .
          <source>Technical Report 2004-TR-06</source>
          , ISTI-CNR, Pisa, Italy,
          <year>February 2004</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>