<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>BTU DBIS' Personal Photo Retrieval Runs at ImageCLEF 2013</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Thomas Bottcher</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Zellhofer</string-name>
          <email>david.zellhoefer@tu-cottbus.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ingo Schmitt</string-name>
          <email>schmitt@tu-cottbus.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Brandenburg Technical University, Database and Information Systems Group</institution>
          ,
          <addr-line>Walther-Pauer-Str. 2, 03046 Cottbus</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper summarizes the results of the BTU DBIS research group's participation in the Personal Photo Retrieval subtask of ImageCLEF 2013. In order to solve the subtask, a self-developed multimodal multimedia retrieval system, PythiaSearch, is used. The discussed retrieval approaches focus on two different strategies. First, two automatic approaches that combine visual features and meta data are examined. Second, a manually assisted relevance feedback approach is presented. All approaches are based on a special query language, CQQL, which supports the logical combination of different features. Considering only automatic runs without relevance feedback that have been submitted to the subtask, DBIS reached the best overall results, while the relevance feedback-assisted approach is placed second amongst all participants of the subtask.</p>
      </abstract>
      <kwd-group>
        <kwd>Content-Based Image Retrieval</kwd>
        <kwd>Preference Based Learning</kwd>
        <kwd>Relevance Feedback</kwd>
        <kwd>Polyrepresentation</kwd>
        <kwd>Experiments</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        This paper summarizes the results of the BTU DBIS research group's
participation in the Personal Photo Retrieval subtask of ImageCLEF 2013 [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        As in DBIS' participations in various ImageCLEF tasks between 2011 and
2012, the discussed approaches rely on the commuting quantum query language
(CQQL) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. CQQL is capable of combining similarity predicates as found in
information retrieval (IR) as well as relational predicates common in databases
(DB) and has been one of the main research fields of the database and information
systems work group at the Brandenburg Technical University (BTU).
      </p>
      <p>
        CQQL is an extension of the relational domain calculus, i.e., it can be
directly executed within a relational DB system [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. To combine both data access
paradigms, CQQL relies on the mathematical foundations of quantum
mechanics and logic. For the sake of brevity, the theoretical background of the query
language is omitted. For further details, please refer to the central CQQL
publication [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Additional information, e.g., the relation of CQQL to fuzzy logic
can be found in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Its relation to probabilistic IR models is discussed in [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ].
      </p>
      <p>
        In the scope of this paper, CQQL is used for the matching within the used
multi-modal multimedia retrieval system PythiaSearch [
        <xref ref-type="bibr" rid="ref23 ref24">24, 23</xref>
        ], which has been
developed by DBIS. The system consists of an extraction module for both
visual features and meta data that supports various image formats and PDF,
a matching component relying on CQQL, and a full-featured GUI supporting
graded relevance feedback. In order to carry out the matching between query
documents and a document collection, CQQL combines various features with
the help of logical connectors.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Personal Photo Retrieval Subtask</title>
      <p>
        The personal photo retrieval subtask 2013 is an extension of 2012's pilot task.
The current subtask uses 5,555 image documents that have been sampled from
personal photo collections. In contrast to the pilot phase of the task, 2013's
focus lies on the evaluation of retrieval algorithms using different search
strategies and user groups. One objective of the task is to assess whether a retrieval
algorithm's effectiveness is stable for different user groups [
        <xref ref-type="bibr" rid="ref10 ref28">28, 10</xref>
        ]. To test the
effectiveness for different users, multiple ground truths are provided reflecting
relevance assessments of CBIR/MIR experts, laypersons, or the like.
      </p>
      <p>
        The subtask does not provide any training data. Hence, it has to be solved
ad-hoc. The participants are given multiple query-by-example (QBE) documents
and/or browsed documents and are asked to find the best matching documents
illustrating an event or depicting a visual concept. In total, 74 topics are
available. In contrast to last year, the topics are no longer separated into visual
concepts or events [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Furthermore, the information need (IN) for each topic is
not explicitly given. Instead, the IN is concealed inside the query and/or browsed
documents. To infer an IN, the participants get zero or one QBE documents and up to 3
browsed documents. For some topics, there are no QBE documents in order to
model the following usage behavior: a user browsed a personal photo collection
and toggled an action to show more similar images without stating an explicit
preference [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The provided browsed documents can sometimes be irrelevant or
have only a low degree of relevance. A more detailed description of the subtask's
experimental setup, objective, and participation is available in [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>PythiaSearch - an Interactive and Multi-modal Multimedia Retrieval System</title>
        <p>
          The interactive retrieval system PythiaSearch [
          <xref ref-type="bibr" rid="ref23 ref24">24, 23</xref>
          ] forms the core for both the
interactive and non-interactive retrieval experiments that are described in this
paper. In order to express their IN, users can input images (following the QBE
paradigm that is used in the subtask), (multilingual) texts, or PDF documents.
Additionally, it supports a relevance feedback (RF) process that can be used to
personalize the query results based on the user's interaction with the system.
The interactive parts rely on a common code base for feature extraction and
similarity calculation with the baseline system [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ] that has been provided by
the organizer of the subtask. Figure 1 shows the GUI of the system. A full
description of the GUI and its conceptual model has been published before [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ].
As said before, we will neglect the theoretical foundations of CQQL to facilitate
the understanding of this paper. Readers interested in the mathematical background
are referred to [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] and [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. The arithmetic evaluation of a CQQL statement
which consists of multiple conditions that are connected by logical connectors is
directly derived from the mathematical framework of quantum mechanics and
logic. In this section, we will sketch the arithmetic evaluation of CQQL as far as
it is necessary for the understanding of this paper.
        </p>
        <p>Let fφ(d) be the evaluation of a document d w.r.t. a CQQL query φ. To
construct a CQQL query, various conditions φ can be linked in an arbitrary manner
using conjunction (1), disjunction (2), or negation (3). If φ is atomic, fφ(d)
can be directly evaluated, yielding a value from the interval [0, 1]. For the scope
of this paper, an atomic condition is the result of a similarity measure, e.g., the
similarity of the QBE document's color histogram and d's color histogram, or a
Boolean evaluation calculated by a DB system or the like.</p>
        <p>
          After a necessary syntactical normalization step [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ], the evaluation of a CQQL
query is performed by recursively applying the following formulas until the
atomic base case is reached:
        </p>
        <p>fφ1∧φ2(d) = fφ1(d) · fφ2(d) (1)</p>
        <p>fφ1∨φ2(d) = fφ1(d) + fφ2(d) − fφ1(d) · fφ2(d) (2)</p>
        <p>f¬φ(d) = 1 − fφ(d) (3)</p>
        <p>An example of the arithmetic evaluation of the query that is used in this
paper is given in Section 3. In accordance with the Copenhagen interpretation
of quantum mechanics, the result of an evaluation of a document d yields the
probability of relevance of d w.r.t. the query. This probability value is then used
for the ranking of the result list of documents.</p>
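        <p>A minimal sketch of this recursive evaluation in code, assuming precomputed atomic similarity scores in [0, 1]; the query tree, score values, and function names are illustrative only and are not part of PythiaSearch's API:</p>

```python
# Sketch of the recursive CQQL evaluation (Equations 1-3).
# Atomic condition scores in [0, 1] are assumed to be precomputed
# similarity values; the query tree below is purely illustrative.

def cqql_and(a, b):
    return a * b                      # Eq. (1)

def cqql_or(a, b):
    return a + b - a * b              # Eq. (2)

def cqql_not(a):
    return 1 - a                      # Eq. (3)

def evaluate(query, scores):
    """Recursively evaluate a query tree over precomputed atomic scores."""
    op = query[0]
    if op == "atom":
        return scores[query[1]]
    if op == "not":
        return cqql_not(evaluate(query[1], scores))
    left, right = evaluate(query[1], scores), evaluate(query[2], scores)
    return cqql_and(left, right) if op == "and" else cqql_or(left, right)

# Example query: (color ∧ texture) ∨ ¬gps
scores = {"color": 0.8, "texture": 0.5, "gps": 0.9}
query = ("or", ("and", ("atom", "color"), ("atom", "texture")),
         ("not", ("atom", "gps")))
print(evaluate(query, scores))  # probability of relevance used for ranking
```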
        <p>
          Weighting in CQQL In order to reflect the need for the personalization of a
query and as a necessary step for the support of relevance feedback (RF), CQQL
has been extended with a weighting scheme [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. The weights in CQQL can be
used to steer the impact of a condition on the overall evaluation result. Weighting
is a crucial part of the machine-based learning supported RF mechanism that is
used in Section 3 and discussed in more detail in [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] and [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ].
        </p>
        <p>The weighting in CQQL is fully embedded into the logical query. That is,
a query maintains its logical properties while weights are used. To illustrate,
Equation 4 denotes a weighted conjunction, whereas Equation 5 states a weighted
disjunction. A weight θi is directly associated with a logical connector and steers
the influence of a condition φi on the evaluation. To evaluate a weighted CQQL
query, the weights are syntactically replaced by constant values according to the
following rules:
φ1 ∧θ1,θ2 φ2 ≡ (φ1 ∨ ¬θ1) ∧ (φ2 ∨ ¬θ2) (4)
φ1 ∨θ1,θ2 φ2 ≡ (φ1 ∧ θ1) ∨ (φ2 ∧ θ2) (5)</p>
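        <p>Substituting the weight rules of Equations 4 and 5 into Equations 1-3 yields closed evaluation formulas for the weighted connectors. The following sketch (with hypothetical function names) illustrates the key property: a weight of 1 recovers the unweighted connector, while a weight of 0 removes the condition from the evaluation:</p>

```python
# Sketch of weighted CQQL connectors, obtained by substituting the
# weight rules (Eqs. 4-5) into the evaluation formulas (Eqs. 1-3).
# Weights are constants in [0, 1]: theta = 0 drops a condition,
# theta = 1 gives the unweighted connector.

def weighted_and(f1, f2, t1, t2):
    # evaluates (phi1 ∨ ¬theta1) ∧ (phi2 ∨ ¬theta2)
    g1 = f1 + (1 - t1) - f1 * (1 - t1)
    g2 = f2 + (1 - t2) - f2 * (1 - t2)
    return g1 * g2

def weighted_or(f1, f2, t1, t2):
    # evaluates (phi1 ∧ theta1) ∨ (phi2 ∧ theta2)
    g1, g2 = f1 * t1, f2 * t2
    return g1 + g2 - g1 * g2
```

With both weights set to 1, weighted_and(f1, f2, 1, 1) reduces to f1 · f2, matching Equation 1; with t2 = 0, the second condition evaluates to 1 and the conjunction returns f1 alone.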
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Result Personalization and Relevance Feedback</title>
      <p>
        As implied before, the relevance judgement of a query's results is very subjective
with respect to the user's IN. To refine a subjective IN, PythiaSearch supports
a gradual relevance feedback on the basis of partially ordered sets (posets) [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ].
Users can input a poset of documents which contains an arbitrary amount of
documents at various relevance levels. For instance, a poset can define a preference
expressing that a document Di is better than a document Dj . This form of user
input requires no background information of the underlying features and is based
on the subjective qualitative perception of the user alone. Figure 2 illustrates
the mechanism as it is implemented in PythiaSearch's GUI. In this example,
the second ring contains documents considered more relevant than those on the
third etc., while the center contains the current QBE document.
      </p>
      <p>
        Internally, a machine-based learning algorithm (a downhill simplex variant)
is used to find appropriate weight values for a given CQQL query fulfilling the
input preferences. The actual algorithm and its properties are described separately
in [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ].
      </p>
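      <p>The preference-based weight learning can be sketched as follows. Note that this is an illustrative stand-in, not the algorithm of [27]: it replaces the downhill simplex variant with an exhaustive grid search over candidate weights, and the document names, feature values, and weight grid are assumptions:</p>

```python
# Illustrative sketch of preference-based weight learning: weights of a
# weighted CQQL conjunction are tuned so that the ranking fulfills user
# preferences of the form "document b should score higher than document w".
# A brute-force grid search stands in for the downhill simplex variant
# actually used by PythiaSearch (described in [27]).
from itertools import product

def weighted_score(features, thetas):
    """Weighted conjunction: product of (f ∨ ¬theta) terms."""
    score = 1.0
    for f, t in zip(features, thetas):
        score *= f + (1 - t) - f * (1 - t)
    return score

def violations(docs, prefs, thetas):
    """Count preference pairs (better, worse) violated by the ranking."""
    return sum(1 for b, w in prefs
               if weighted_score(docs[b], thetas) <= weighted_score(docs[w], thetas))

def learn_weights(docs, prefs, grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the weight combination violating the fewest preferences."""
    n = len(next(iter(docs.values())))
    return list(min(product(grid, repeat=n),
                    key=lambda th: violations(docs, prefs, list(th))))

# Hypothetical example: two documents with two atomic similarity scores;
# the user prefers d1 over d2.
docs = {"d1": [0.9, 0.1], "d2": [0.5, 0.9]}
prefs = [("d1", "d2")]
thetas = learn_weights(docs, prefs)
```

Here the equally weighted conjunction ranks d2 above d1; down-weighting the second condition restores the user's preferred order.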
      <sec id="sec-3-1">
        <title>Experimental Setup and Results</title>
        <p>
          Motivated by CQQL's support for formulating multi-modal queries, DBIS
participated in the 2011 Wikipedia Retrieval task at ImageCLEF [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] combining
textual and visual features. This year's participation in the Personal Photo
Retrieval subtask focuses on a CQQL-based combination of visual features and
the accompanying meta data. This poses a new challenge for the working group
because the studies carried out before made little use of meta data.
        </p>
        <p>Our experiments for the Personal Photo Retrieval subtask can be subdivided
into two types of runs. First, fully automatic runs demonstrate the effectiveness
of a CQQL-based logical combination of features with different origins,
i.e., visual and meta data features. Second, the performance of the
aforementioned RF mechanism is investigated (see Section 3.4).</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Used Features</title>
      <p>
        Over the last years, the DBIS working group has conducted numerous experiments on
various image collections, ranging from the Caltech collections [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ] to
MSRAMM [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] in order to assess the retrieval effectiveness of different low-level visual
features. This investigation of single features forms the basis for the decision
of which features to combine with CQQL.
      </p>
      <p>PythiaSearch supports the extraction of low-level global and local visual
features, e.g., color, edges and texture features or local features like SIFT and
SURF. In total, the extraction component offers more than 30 visual features.
Additionally, the system allows the extraction of common image meta data such
as Exif, IPTC, or XMP. This meta data, e.g., GPS coordinates, the camera
model, or the image orientation extends the variety of features that can be used
for the matching of documents. In accordance with the rules of the subtask,
IPTC-based data is ignored in the following experiments. Table 1 lists all
features that are used in the experiments.</p>
      <p>
        The feature extraction and similarity calculation functionality used by
PythiaSearch resembles the baseline system that is provided by the subtask organizer1.
For a description, see [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. The main di erences between the baseline system
and the system used for the described experiments are the CQQL support, the
supplementary RF mechanism, and the GUI.
3.2
      </p>
    </sec>
    <sec id="sec-5">
      <title>Examined CQQL Query</title>
      <p>Based on a preparatory study on the retrieval effectiveness of various visual
low-level features and an examination of the subtask's thematic orientation on both
visual concepts and events, an appropriate CQQL query had to be defined. The
core idea of the examined CQQL query, which is shown in Equation 6, is to use
low-level features that showed good performance over all six test collections.
For events, meta data such as the presence of persons in
a picture, spatial and temporal proximity as well as a similar camera model are
valid indicators. Hence, the core CQQL query is enriched by a person presence
condition in form of a Boolean predicate and the aforementioned features derived
from Exif meta data.</p>
      <p>1 Please note that the organizer of the subtask neither actively participated in the
experiments described in this paper nor released additional information to the
working group that other participants could not obtain. However, he carried out many
of the pre-studies including the investigation of generally effective CQQL queries.
Furthermore, he had a major impact on the development of PythiaSearch and the
underlying learning algorithm.</p>
      <p>This concept results in the following CQQL query that uses a weighted
conjunction of 18 conditions, whereas all weights are set to 1 initially to express the
equal importance of all conditions.</p>
      <p>∧i (ACCsim, BICsim, CEDDsim, ColorHistBordersim, ColorHistCentersim,
ColorHistsim, ColorLayoutsim, ColorStructuresim, DominantColorsim,
EdgeHistsim, FCTHsim, RegionShapesim, ScalableColorsim, Tamurasim,
GPSsim, modelsim, timesim, Personsim) (6)</p>
      <p>
        The value of each condition is determined by a distance measure, such as the
Euclidean distance of the corresponding feature between the QBE document and
the retrieved document, which is then transformed into a similarity measure in the
interval [0, 1]. Boolean conditions are evaluated traditionally. The calculation
of the GPS coordinate similarity is carried out as we did for the ImageCLEF
2012 Plant Identification task [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]:
      </p>
      <p>GPSsim is derived from the Euclidean distance between the GPS coordinates
of the QBE document and the retrieved document, with the longitude difference
scaled by a factor of 71.5 (Equations 7-11).</p>
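      <p>The general distance-to-similarity transform used for the atomic conditions can be sketched as follows; the linear transform, the maximum-distance bound, and the helper names are assumptions for illustration, not PythiaSearch's actual implementation:</p>

```python
# Sketch of turning a feature distance into a similarity score in [0, 1],
# as done for each atomic condition of the query. The linear transform and
# the normalizing bound max_dist are illustrative assumptions.
import math

def euclidean(x, y):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def similarity(dist, max_dist):
    """Linear transform of a distance into [0, 1]; 1 means identical."""
    return max(0.0, 1.0 - dist / max_dist)

# Example: color-histogram vectors of a QBE document and a candidate
qbe, cand = [0.2, 0.5, 0.3], [0.1, 0.6, 0.3]
print(similarity(euclidean(qbe, cand), max_dist=math.sqrt(2)))
```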
    </sec>
    <sec id="sec-6">
      <title>Automatic Runs</title>
      <p>The 2013 Personal Photo Retrieval subtask provides QBE documents as well as
browsed documents. For the approaches without RF, we use the provided data
in two ways.</p>
      <p>First, we use only the QBE documents (run1) because the provided
browsed documents can contain misleading information, i.e., images that do
not fulfill the user's IN. For the topics for
which no QBE document is available, we use all browsed documents instead.
Whether the browsed documents are really relevant for the user's IN cannot be
determined automatically. Thus, this approach might be affected by irrelevant
input to the retrieval system.</p>
      <p>Second, we assume that all documents (no matter if they are QBE or browsed
documents) are equally meaningful regarding the user's IN. Hence, we use all
documents as QBE documents with no special ranking for the labelled QBE
document. This approach is labeled run2 in Figure 4.
</p>
    </sec>
    <sec id="sec-7">
      <title>Manual Relevance Feedback Run</title>
      <p>The main objective for the manually assisted approach (run 3 ) is to remove
misleading browsed image documents from the initial query and to improve the
retrieval quality using the graded RF approach that is described in Section 2.2.
The experiment is carried out interactively with the PythiaSearch GUI (see
Section 2). Using the aforementioned preference-based approach, irrelevant, relevant
documents and the relationship between them can be expressed as a poset. To
simplify the user interaction, the GUI offers 3 levels of relevance and a "garbage
can" to collect completely irrelevant documents. Figure 2 shows the three levels
where the center (level 1) contains the query document(s). All documents in
level 2 are more relevant than documents in level 3 and 4, whereas documents
in level 3 are more relevant than documents in level 4. All preferences together
create a poset. Documents marked as completely irrelevant are removed from
the query results and have a negative impact on the machine-based learning
algorithm similar to negative QBE documents.</p>
      <p>In order to keep control over the time consumption of submitting 74 queries
manually, we have defined some restrictions for the RF-based experiment.</p>
      <p>First, at most one RF iteration is carried out, i.e., we model the behavior of
an impatient user.</p>
      <p>Second, the assessment of the quality of the results is based on the top-30
results only. This number of documents can easily be inspected without scrolling
and requires significantly less time than inspecting the top-50 or top-100. Because
of this strategy, it may happen that no RF is carried out at all because the top-30
results seem relevant to the interacting user.</p>
      <p>Third, to simulate a user that avoids a large amount of interaction with the
system, a total of 6 images is used to define the preferences used during RF.</p>
      <p>In general, obviously irrelevant images from the given IN specification were
removed from the input. Nevertheless, during the submission of all 74 sample
queries it was not always possible to identify the IN without background
knowledge. In these cases, the RF process is skipped.</p>
      <sec id="sec-7-1">
        <title>Results</title>
        <p>With reference to the official results (see Figure 4), our best run, i.e., run 3,
achieves rank 5 in the overall ranking for the average user. Compared to the
results of all participants of the subtask, DBIS is ranked second. Focusing on
the NDCG cut 5 we reach about 97%, on NDCG cut 100 about 87%, and on
MAP cut 100 about 78% of the best obtained retrieval score.</p>
        <p>When only automatic runs are considered, i.e., runs without RF, DBIS
achieves the best results. Unfortunately, no other runs without RF that use all
available modalities (visual and meta data features) and IN information (QBE
and browsed documents) were submitted. Due to these circumstances, a reliable
interpretation of our results is hardly possible. Nevertheless, we assume that the
inclusion of meta data helps to set our approach apart from the other approaches.
In any case, further information about the techniques used by the other participants
is needed.</p>
        <p>Generally speaking, the outcome of the presented experiments is fully
satisfying. However, we acknowledge room for improvement for the RF-based run.
We assume that an inspection of more than the top-30 results and the inclusion
of more preferences might have an impact on the retrieval effectiveness.
Furthermore, a specification of the actual IN in textual form would help human assessors
during RF because it would enable them to provide RF for every topic. As said
before, we could not provide RF for all topics because of the lack of this kind of
information. In consequence, we expect an improvement of the RF effectiveness
when this information can be used.</p>
        <p>One objective of this subtask is to examine the robustness of a retrieval
approach with respect to different user groups (e.g., IT experts, non-IT users, or
gender-specific groups). Figure 3 shows the variance of the MAP cut 100 scores
of our three submitted runs between the different user groups. The differences
between all groups are relatively small, e.g., 0.395 vs. 0.425 for run 3. The
difference in MAP cut 100 between the best and the worst run is about 7-9%.
Interestingly, the results for female users tend to be the best, whereas the
average user score tends to be the worst. This effect can also be observed in the results
of the other groups. However, we are not sure why this effect is present amongst all
groups.</p>
        <p>The results of our participation in the ImageCLEF 2013 Personal Photo Retrieval
subtask are motivating. Although DBIS achieved a good effectiveness rank, there
are areas that need further research.</p>
        <p>First, we plan to analyze single features and meta data in more detail to find
out which features or meta data contribute most to the retrieval quality. As said
before, various optimizations are possible for the RF-supported approach.
In particular, the restriction to one RF iteration seems to limit the retrieval
quality. First informal experiments show that up to three iterations can give a
great performance boost. Furthermore, we assume that the inclusion of RF on
all topics will lead to a performance improvement. Another interesting research
question is the development of the weight values during the RF iterations in order
to reveal whether some features do not contribute to the retrieval effectiveness
at all.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. Balko, S., Schmitt, I.: Signature Indexing and Self-Refinement in Metric Spaces. Cottbus (2012)
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2. Bottcher, T.,
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , Zellhofer,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Schmitt</surname>
          </string-name>
          ,
          <string-name>
            <surname>I.</surname>
          </string-name>
          :
          <article-title>Btu dbis' plant identi cation runs at imageclef 2012</article-title>
          . In: CLEF (Online Working Notes/Labs/Workshop) (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3. Caputo, B., Mueller, H., Thomee, B., Villegas, M., Paredes, R., Zellhoefer, D., Goeau, H., Joly, A., Bonnet, P., Martinez Gomez, J., Garcia Varea, I., Cazorla, M.: ImageCLEF 2013: the Vision, the Data and the Open Challenges. CLEF 2013 Working Notes, Valencia, Spain (2013)
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4. Chatzichristofis, S.A., Boutalis, Y.S.: CEDD: Color and Edge Directivity Descriptor: a Compact Descriptor for Image Indexing and Retrieval. In: Proceedings of the 6th International Conference on Computer Vision Systems. pp. 312-322. ICVS'08, Springer-Verlag (2008), http://dl.acm.org/citation.cfm?id=1788524.1788559
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5. Chatzichristofis, S.A., Boutalis, Y.S.: FCTH: Fuzzy Color and Texture Histogram - A Low Level Feature for Accurate Image Retrieval. In: Proceedings of the 2008 Ninth International Workshop on Image Analysis for Multimedia Interactive Services. pp. 191-196. WIAMIS '08, IEEE Computer Society (2008), http://dx.doi.org/10.1109/WIAMIS.2008.24
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. Cieplinski, L., Jeannin, S., Ohm, J.R., Kim, M., Pickering, M., Yamada, A.: MPEG-7 Visual XM version 8.1. Pisa, Italy (2001)
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. Fei-Fei, L., Fergus, R., Perona, P.: Learning Generative Visual Models from Few Training Examples: an Incremental Bayesian Approach Tested on 101 Object Categories. In: Proceedings of the Workshop on Generative-Model Based Vision (2004)
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name><surname>Griffin</surname>, <given-names>G.</given-names></string-name>,
          <string-name><surname>Holub</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Perona</surname>, <given-names>P.</given-names></string-name>:
          <article-title>Caltech-256 Object Category Dataset</article-title>
          (<year>2007</year>), http://authors.library.caltech.edu/7694
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name><surname>Huang</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Kumar</surname>, <given-names>R.S.</given-names></string-name>,
          <string-name><surname>Mitra</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Zhu</surname>, <given-names>W.J.</given-names></string-name>,
          <string-name><surname>Zabih</surname>, <given-names>R.</given-names></string-name>:
          <article-title>Image Indexing Using Color Correlograms</article-title>.
          <source>In: Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97)</source>,
          pp. <fpage>762</fpage>. CVPR '97, IEEE Computer Society (<year>1997</year>),
          http://dl.acm.org/citation.cfm?id=794189.794514
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. ImageCLEF: Personal Photo Retrieval <year>2013</year>.
          http://www.imageclef.org/2013/photo/retrieval, accessed 6 June 2013.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name><surname>Lehrack</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Schmitt</surname>, <given-names>I.</given-names></string-name>:
          <article-title>QSQL: Incorporating Logic-Based Retrieval Conditions into SQL</article-title>.
          In: <string-name><surname>Kitagawa</surname>, <given-names>H.</given-names></string-name>,
          <string-name><surname>Ishikawa</surname>, <given-names>Y.</given-names></string-name>,
          <string-name><surname>Li</surname>, <given-names>Q.</given-names></string-name>,
          <string-name><surname>Watanabe</surname>, <given-names>C.</given-names></string-name> (eds.)
          <source>Database Systems for Advanced Applications, 15th International Conference, DASFAA 2010, Tsukuba, Japan, April 1-4, 2010, Proceedings, Part I</source>,
          Lecture Notes in Computer Science, vol. <volume>5981</volume>,
          pp. <fpage>429</fpage>-<lpage>443</lpage>. Springer (<year>2010</year>)
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Manjunath</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salembier</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sikora</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Introduction to MPEG-7: Multimedia Content Description Interface</article-title>
          . John Wiley &amp; Sons, Inc., New York, NY, USA (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name><surname>van Rijsbergen</surname>, <given-names>C.</given-names></string-name>:
          <article-title>The Geometry of Information Retrieval</article-title>
          . Cambridge University Press, Cambridge, England (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name><surname>Schaefer</surname>, <given-names>G.</given-names></string-name>,
          <string-name><surname>Stich</surname>, <given-names>M.</given-names></string-name>:
          <article-title>UCID - An Uncompressed Colour Image Database</article-title>.
          <source>In: Proc. SPIE, Storage and Retrieval Methods and Applications for Multimedia</source>,
          pp. <fpage>472</fpage>-<lpage>480</lpage>. San Jose, USA (<year>2004</year>)
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Schmitt</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Weighting in CQQL</article-title>
          . Cottbus (<year>2007</year>)
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Schmitt</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>QQL: A DB&amp;IR Query Language</article-title>
          .
          <source>The VLDB Journal</source>
          <volume>17</volume>(<issue>1</issue>),
          <fpage>39</fpage>-<lpage>56</lpage> (<year>2008</year>)
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name><surname>Schmitt</surname>, <given-names>I.</given-names></string-name>,
          <string-name><surname>Zellhöfer</surname>, <given-names>D.</given-names></string-name>,
          <string-name><surname>Nürnberger</surname>, <given-names>A.</given-names></string-name>:
          <article-title>Towards quantum logic based multimedia retrieval</article-title>.
          <source>In: IEEE (ed.) Proceedings of the Fuzzy Information Processing Society (NAFIPS)</source>,
          pp. <fpage>1</fpage>-<lpage>6</lpage>. IEEE (<year>2008</year>),
          doi:10.1109/NAFIPS.2008.4531329
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name><surname>Stehling</surname>, <given-names>R.O.</given-names></string-name>,
          <string-name><surname>Nascimento</surname>, <given-names>M.A.</given-names></string-name>,
          <string-name><surname>Falcão</surname>, <given-names>A.X.</given-names></string-name>:
          <article-title>A compact and efficient image retrieval approach based on border/interior pixel classification</article-title>.
          <source>In: Proceedings of the eleventh international conference on Information and knowledge management</source>,
          pp. <fpage>102</fpage>-<lpage>109</lpage>. CIKM '02, ACM (<year>2002</year>),
          http://doi.acm.org/10.1145/584792.584812
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Tamura</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mori</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yamawaki</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Texture features corresponding to visual perception</article-title>
          .
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          <volume>8</volume>(<issue>6</issue>),
          <fpage>460</fpage>-<lpage>472</lpage> (<year>1978</year>)
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name><surname>Hua</surname>, <given-names>X.S.</given-names></string-name>:
          <article-title>MSRA-MM: Bridging Research and Industrial Societies for Multimedia Information Retrieval</article-title>
          (<year>2009</year>)
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name><surname>Wang</surname>, <given-names>J.Z.</given-names></string-name>,
          <string-name><surname>Li</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Wiederhold</surname>, <given-names>G.</given-names></string-name>:
          <article-title>SIMPLIcity: Semantics-sensitive Integrated Matching for Picture Libraries</article-title>.
          <source>In: Proceedings of the 4th International Conference on Advances in Visual Information Systems</source>,
          pp. <fpage>360</fpage>-<lpage>371</lpage>. VISUAL '00, Springer-Verlag (<year>2000</year>),
          http://portal.acm.org/citation.cfm?id=647061.714442
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22. Zellhofer, D.:
          <article-title>An Extensible Personal Photograph Collection for Graded Relevance Assessments and User Simulation</article-title>
          .
          <source>In: Proceedings of the ACM International Conference on Multimedia Retrieval. ICMR '12</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23. Zellhofer, D.:
          <article-title>A permeable expert search strategy approach to multimodal retrieval</article-title>
          .
          <source>In: Proceedings of the 4th Information Interaction in Context Symposium</source>
          . pp.
          <volume>62</volume>
          {
          <fpage>71</fpage>
          . IIIX '12,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2012</year>
          ), http://doi.acm.
          <source>org/10</source>
          .1145/ 2362724.2362739
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24. Zellhofer,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Bertram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            , Bottcher, T.,
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Tillmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Schmitt</surname>
          </string-name>
          , I.:
          <article-title>PythiaSearch { A Multiple Search Strategy-supportive Multimedia Retrieval System</article-title>
          .
          <source>In: Proceedings of the 2nd ACM International Conference on Multimedia Retrieval</source>
          . p. to appear.
          <source>ICMR '12</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25. Zellhofer,
          <string-name>
            <surname>D.</surname>
          </string-name>
          , Bottcher, T.:
          <article-title>BTU DBIS' Multimodal Wikipedia Retrieval Runs at ImageCLEF 2011</article-title>
          . In: Vivien Petras, Pamela Forner and
          <string-name>
            <surname>Paul D.</surname>
          </string-name>
          Clough (eds.)
          <article-title>CLEF 2011 Labs</article-title>
          and Workshop, Notebook Papers,
          <fpage>19</fpage>
          -22
          <source>September</source>
          <year>2011</year>
          , Amsterdam, The Netherlands (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26. Zellhofer,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Frommholz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            ,
            <surname>Schmitt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            ,
            <surname>Lalmas</surname>
          </string-name>
          , M.,
          <string-name>
            <surname>van Rijsbergen</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          : Towards
          <string-name>
            <surname>Quantum-Based</surname>
            <given-names>DB</given-names>
          </string-name>
          +
          <article-title>IR Processing Based on the Principle of Polyrepresentation</article-title>
          . In: Clough,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Foley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Gurrin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            ,
            <surname>Kraaij</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            ,
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            ,
            <surname>Murdoch</surname>
          </string-name>
          , V. (eds.)
          <source>Advances in Information Retrieval - 33rd European Conference on IR Research</source>
          , ECIR
          <year>2011</year>
          , Dublin, Ireland,
          <source>April 18-21</source>
          ,
          <year>2011</year>
          .
          <source>Proceedings, Lecture Notes in Computer Science</source>
          , vol.
          <volume>6611</volume>
          , pp.
          <volume>729</volume>
          {
          <fpage>732</fpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27. Zellhofer,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Schmitt</surname>
          </string-name>
          ,
          <string-name>
            <surname>I.</surname>
          </string-name>
          :
          <article-title>A Preference-based Approach for Interactive Weight Learning: Learning Weights within a Logic-Based Query Language</article-title>
          . Distributed and Parallel
          <string-name>
            <surname>Databases</surname>
          </string-name>
          (
          <year>2009</year>
          ), doi:10.1007/s10619-009-7049-4
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28. Zellhofer, D.:
          <article-title>Overview of the ImageCLEF 2013 Personal Photo Retrieval Subtask</article-title>
          .
          <article-title>CLEF 2013 working notes</article-title>
          , Valencia, Spain,
          <year>2013</year>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>