<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Mosaic Test: Benchmarking Colour-based Image Retrieval Systems Using Image Mosaics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>William Plant</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joanna Lumsden</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ian T. Nabney</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Engineering and Applied Science, Aston University</institution>
          ,
          <addr-line>Birmingham</addr-line>
          ,
          <country country="UK">U.K.</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Evaluation and benchmarking in content-based image retrieval has always been a somewhat neglected research area, making it difficult to judge the efficacy of many presented approaches. In this paper we investigate the issue of benchmarking for colour-based image retrieval systems, which enable users to retrieve images from a database based on low-level colour content alone. We argue that current image retrieval evaluation methods are not suited to benchmarking colour-based image retrieval systems, due mainly to their not allowing users to reflect upon the suitability of retrieved images within the context of a creative project, and to their reliance on highly subjective ground-truths. As a solution to these issues, the research presented here introduces the Mosaic Test for evaluating colour-based image retrieval systems, in which test-users are asked to create an image mosaic of a predetermined target image, using the colour-based image retrieval system that is being evaluated. We report on our findings from a user study which suggests that the Mosaic Test overcomes the major drawbacks associated with existing image retrieval evaluation methods, by enabling users to reflect upon image selections and automatically measuring image relevance in a way that correlates with the perception of many human assessors. We therefore propose that the Mosaic Test be adopted as a standardised benchmark for evaluating and comparing colour-based image retrieval systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Image databases</kwd>
        <kwd>content-based image retrieval</kwd>
        <kwd>image mosaic</kwd>
        <kwd>performance evaluation</kwd>
        <kwd>benchmarking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>Copyright © 2011 for the individual papers by the papers’ authors.
Copying permitted only for private and academic purposes. This volume is
published and copyrighted by the editors of euroHCIR2011.</p>
    </sec>
    <sec id="sec-2">
      <title>1. INTRODUCTION</title>
      <p>
        Colour-based image retrieval systems such as Chromatik [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
MultiColr [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and Picitup [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] enable users to retrieve images
from a database based on colour content alone. Such a
facility is particularly useful to users across a number of different
creative industries, such as graphic, interior and fashion
design [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. Surprisingly, however, little research appears to
have been conducted into evaluating colour-based image
retrieval systems. Currently, there is no standardised measure
and image database to evaluate the performance of an image
retrieval system [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The most commonly applied evaluation
methods are those of precision and recall [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and the
target search and category search tasks [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. The precision and
recall measure is used to evaluate the accuracy of image
results returned by a system in response to a query, whilst the
target search and category search tasks are both user-based
evaluation strategies in which test-users are asked to retrieve
images from a database that are relevant to a given target,
using the image retrieval system that is being evaluated.
In this research, we argue that the image retrieval system
evaluation strategies listed above are not suitable for
evaluating and benchmarking colour-based image systems for
two fundamental reasons. Firstly, none of the above
evaluation methods allow test-users to perform an important
process often conducted by creative users, known as
reflection-in-action [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In reflection-in-action, a creative project is
modified by a user and then reviewed by the user after the
modification. After assessing their modification, the creative
individual will then decide whether to maintain or discard
the modification to the project. As an example, a graphic
designer will add an image to a web page before making an
assessment as to its aesthetic suitability. Secondly, the
category search and precision and recall measures require an
image database and associated ground-truth (a manually
generated list pre-defining which images in the database are
similar to others) for defining image relevance during a
system evaluation. Such human-based definitions of similarity,
however, can often be highly subjective, resulting in retrieved
images being incorrectly assessed as irrelevant.
      </p>
      <p>As a result of these drawbacks, no method currently exists
for reliably evaluating colour-based image retrieval systems.
The following section introduces the Mosaic Test which has
been developed to address the current problem, providing
a reliable means for benchmarking colour-based image
retrieval systems.</p>
    </sec>
    <sec id="sec-3">
      <title>2. THE MOSAIC TEST</title>
      <p>
        For the Mosaic Test, participants are asked to manually
create an image mosaic (comprising 16 cells) of a predetermined
target image. An image mosaic (first devised by Silvers [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ])
is a form of art that is typically generated automatically
through use of content-based image analysis. A target
image is divided into cells, each of which is then replaced by a
small image with similar colour content to the
corresponding cell in the target image. Viewed from a distance, the
smaller images collectively appear to form the target image,
whilst viewing an image mosaic close up reveals the detail
contained within each of the smaller images. An example of
an automatically generated image mosaic is shown in
Figure 1.
For target images in the Mosaic Test, photographs of jelly
beans are used. The images of jelly beans produce a bright,
interesting target image for participants to create in mosaic
form and the generation of an image mosaic that appears
visually similar to the target image is also very achievable.
More importantly, retrieving images from a database
comprising large areas of a small number of distinct colours is a
practice commonly performed by users in creative industries.
To complete their image mosaics, participants must identify
the colours required to fill an image mosaic cell (by
inspecting the corresponding region in the target image), and
retrieve a suitably coloured image from the 25,000 contained
within the MIRFLICKR-25000 image collection [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] using the
colour-based image retrieval system under evaluation. When
selecting images for use in their image mosaic, users can add,
move or remove images accordingly to assess the suitability
of images within the context of their image mosaic. It is
in this way that the Mosaic Test overcomes the first
major drawback of existing evaluation methods, by enabling
participants to perform the creative practice of
reflection-in-action [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Upon completion of an image mosaic, the time
required by the user to finish the image mosaic is recorded,
along with the visual accuracy of their creation in
comparison with the initial target image. Through analysing
the accuracy of user-generated image mosaics (in a manner
which correlates with the perception of a number of different
human assessors), the Mosaic Test is able to overcome the
second drawback associated with existing evaluation
techniques. This is because it does not rely on a highly subjective
image database ground-truth. The image mosaic accuracy
measure adopted for use with the Mosaic Test is discussed
further in Section 3.1. Additionally, participants are asked
to indicate their subjective experience of workload (using
the NASA-TLX scales [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]) post test.
      </p>
      <p>The time (number of seconds), subjective workload (user
NASA-TLX ratings) and relevance (image mosaic accuracy)
measures achieved by colour-based image retrieval systems
evaluated using the Mosaic Test can be directly compared
and used for benchmarking. When comparing the Mosaic
Test measures achieved by different systems, the more
effective colour-based image retrieval system will be the one
that enables users to create the most accurate image
mosaics, fastest and with the least workload.</p>
    </sec>
    <sec id="sec-4">
      <title>2.1 Mosaic Test Tool</title>
      <p>To support users in their manual creation of image mosaics
using the Mosaic Test, we have developed a novel software
tool in which an image mosaic of a predetermined target
image can be created using simple drag and drop functions.
We refer to this as the Mosaic Test Tool. The Mosaic Test
Tool has been designed so that it can be displayed
simultaneously with the colour-based image retrieval system
under evaluation (as can be seen in Figure 2). This removes
the need for users to constantly switch between application
windows, and permits users to easily drag images from the
colour-based image retrieval system being tested to their
image mosaic in the Mosaic Test Tool. It is important to note
that the facility to export images through drag and drop
operations is the only requirement of a colour-based image
retrieval system for it to be compatible with the Mosaic Test
Tool and thus the Mosaic Test.</p>
      <p>The target image and image mosaic are displayed
simultaneously on the Mosaic Test Tool interface to allow users to
manually inspect and identify the colours (and colour
layout) required for each image mosaic cell. As can be seen
in Figure 2, the target image (the image the user is trying
to replicate in the form of an image mosaic) is displayed in
the top half of the Mosaic Test Tool. Coupled with the ease
in which images can be added to, or removed from, image
mosaic cells, users of the Mosaic Test Tool can simply
assess the suitability of a retrieved image by dragging it to the
appropriate image mosaic cell and viewing it alongside the
other image mosaic cells.</p>
    </sec>
    <sec id="sec-5">
      <title>3. USER STUDY</title>
      <p>To evaluate the Mosaic Test, we recruited 24 users to
participate in a user study. Participants were given written
instructions explaining the concept of an image mosaic and
the functionality of the Mosaic Test Tool. A practice
session was undertaken by each participant, in which they were
asked to complete a practice image mosaic using a small
selection of suitable images. Participants were then asked to
complete 3 image mosaics using 3 different colour-based
image retrieval systems. To ensure that users did not simply
learn a set of database images suitable for use in a solitary
image mosaic, 3 different target images were used. These
target images were carefully selected so that the number of
jelly beans (and thus colours) in each were evenly balanced,
with only the colour and layout of the jelly beans varying
between the target images. To also ensure that results were
not affected by a target image being more difficult to
create in image mosaic form than another, the order in which
the target images were presented to participants remained
constant, whilst the order in which the colour-based image
retrieval systems were used was counterbalanced. After
completing the 3 image mosaics, participants were asked to
rank each of their creations in ascending order of `closeness'
to its corresponding target image.</p>
      <p>We wanted to investigate whether the Mosaic Test does
overcome the drawbacks of existing evaluation strategies so that
it may be adopted as a reliable benchmark of colour-based
image retrieval systems. Firstly, we hypothesised that users
in the study would perform reflection-in-action, and so we
wanted to observe whether this was indeed true for
participants when judging the suitability of images retrieved from
the database. Secondly, we were eager to investigate which
method should be adopted for measuring the accuracy of an
image mosaic in the Mosaic Test.</p>
    </sec>
    <sec id="sec-6">
      <title>3.1 Assessing Image Mosaic Accuracy</title>
      <p>
        As an image mosaic is an art form intended to be viewed
and enjoyed by humans, it seems logical that the adopted
measure of image mosaic accuracy - i.e., how close an image
mosaic looks to its intended target image - should correlate
with the inter-image distance perceptions of a number of
human assessors. An existing measure for automatically
computing the distance between an image mosaic and its
corresponding target image is the Average Pixel-to-Pixel (APP)
distance [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The APP distance is expressed formally in
Equation (1), where i indexes the n corresponding pixels
in the mosaic image M and target image T, and r, g and b
are the red, green and blue colour values of a pixel.
      </p>
      <p>APP = (1/n) Σᵢ₌₁ⁿ √((rMᵢ − rTᵢ)² + (gMᵢ − gTᵢ)² + (bMᵢ − bTᵢ)²)   (1)</p>
      <p>
We were eager to compare the existing APP image mosaic
distance measure with a variety of image colour
descriptors (and associated distance measures) commonly used for
content-based image retrieval, to discover which best
correlates with human perceptions of image mosaic distance.
To do this, we calculated the image mosaic distance
rankings according to the existing measure and several colour
descriptors (and their associated distance measures), and
then calculated the Spearman's rank correlation coefficient
between each of the tested distance measures and the
rankings assigned by the users in our study.</p>
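      <p>As a concrete illustration, Equation (1) can be sketched in a few lines of Python (a hypothetical app_distance helper, not part of the original work; both images are assumed to be equally sized flat lists of 8-bit RGB tuples):</p>

```python
import math

def app_distance(mosaic, target):
    """Average Pixel-to-Pixel (APP) distance between two equally sized
    RGB images, each given as a flat list of (r, g, b) tuples."""
    assert len(mosaic) == len(target), "images must have the same pixel count"
    total = 0.0
    for (rm, gm, bm), (rt, gt, bt) in zip(mosaic, target):
        # Euclidean distance between corresponding pixels in RGB space
        total += math.sqrt((rm - rt) ** 2 + (gm - gt) ** 2 + (bm - bt) ** 2)
    return total / len(mosaic)

# Identical images give a distance of 0; a single (3, 4, 0) pixel against
# black gives the 3-4-5 triangle distance of 5.
print(app_distance([(255, 0, 0), (0, 0, 0)], [(255, 0, 0), (0, 0, 0)]))  # 0.0
print(app_distance([(3, 4, 0)], [(0, 0, 0)]))  # 5.0
```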
      <p>
        For the image colour descriptors (and associated distance
measures), we firstly tested the global colour histogram (GCH)
as an image descriptor. A colour histogram contains a
normalised pixel count for each unique colour in the colour
space. We used a 64-bin histogram, in which each of the red,
green and blue colour channels (in an RGB colour space)
were quantised to 4 bins (4 x 4 x 4 = 64). We adopted
the Euclidean distance metric to compare the global colour
histograms of the image mosaics and corresponding target
images. We also tested local colour histograms (LCH) as an
image descriptor. For this, 64-bin colour histograms were
calculated for each image mosaic cell (for the image mosaic
descriptor), and its corresponding area in the target image
(for the target image descriptor). The average Euclidean
distance between all of the corresponding colour histograms
(in the image mosaic and target image LCH descriptors) was
used to compare LCH descriptors. Finally, we tested (along
with their associated distance measures) the MPEG-7 colour
structure (MPEG-7 CST) and colour layout (MPEG-7 CL)
descriptors [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], as well as the auto colour correlogram
descriptor (ACC) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
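      <p>A minimal sketch of the 64-bin histogram descriptors described above (hypothetical gch and euclidean helpers, assuming 8-bit RGB pixels; the LCH variant applies the same computation per cell and averages the distances):</p>

```python
import math

def gch(pixels):
    """64-bin global colour histogram: each 8-bit RGB channel is
    quantised to 4 bins (4 x 4 x 4 = 64), normalised by pixel count."""
    hist = [0.0] * 64
    for r, g, b in pixels:
        # quantise each channel from 0..255 down to 0..3
        hist[(r // 64) * 16 + (g // 64) * 4 + (b // 64)] += 1
    return [h / len(pixels) for h in hist]

def euclidean(h1, h2):
    """Euclidean distance between two histograms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

red = [(255, 0, 0)] * 4  # a tiny single-colour "image"
print(euclidean(gch(red), gch(red)))  # 0.0 for identical images
```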
      <p>
        The auto colour-correlogram (ACC) of an image can be
described as a table indexed by colour pairs, where the k-th
entry for colour i specifies the probability of finding another
pixel of colour i in the image at a distance k. For the
MPEG-7 colour structure descriptor (MPEG-7 CST), a sliding
window (8 × 8 pixels in size) moves across the image in the
HMMD colour space [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] (reduced to 256 colours). With
each shift of the structuring element, if a pixel with colour i
occurs within the block, the total number of occurrences in
the image for colour i is incremented to form a colour
histogram. The distance between two MPEG-7 CSTs or two
ACCs can be calculated using the L1 (or city-block)
distance metric. Finally, the MPEG-7 colour layout descriptor
(MPEG-7 CL) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] divides an image into 64 regular blocks,
and calculates the dominant colour of the pixels within each
block [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The cumulative distance between the colours (in
the YCbCr colour space) of corresponding blocks forms the
measure of similarity between two MPEG-7 CL descriptors.
      </p>
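      <p>The sliding-window idea behind the MPEG-7 CST, together with the L1 comparison, can be sketched as follows (a simplified, hypothetical version that quantises RGB to 64 colours rather than using the 256-colour HMMD space of the standard):</p>

```python
def colour_structure(pixels, width, height, win=8, bins=64):
    """Colour-structure-style histogram: slide a win x win window over the
    image and, for every colour occurring at least once inside the window,
    increment that colour's bin (one increment per window position)."""
    hist = [0] * bins
    for y in range(height - win + 1):
        for x in range(width - win + 1):
            seen = set()
            for dy in range(win):
                for dx in range(win):
                    r, g, b = pixels[(y + dy) * width + (x + dx)]
                    seen.add((r // 64) * 16 + (g // 64) * 4 + (b // 64))
            for c in seen:
                hist[c] += 1
    return hist

def l1(h1, h2):
    """L1 (city-block) distance between two histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# An 8x8 single-colour image has exactly one window position.
img = [(0, 0, 255)] * 64
print(l1(colour_structure(img, 8, 8), colour_structure(img, 8, 8)))  # 0
```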
      <p>Table 1: Spearman's rank correlation coefficient (rs) with the human-assigned rankings, for each tested accuracy measure (MPEG-7 CST, APP, GCH, MPEG-7 CL, LCH and ACC).</p>
    </sec>
    <sec id="sec-7">
      <title>4. RESULTS</title>
      <p>Table 1 shows the Spearman's rank correlation coefficients
(rs) calculated between the human-assigned rankings and
each of the rankings generated by the tested colour
descriptors. We compare the rs correlation coefficient for each
measure tested with the critical value of r, which at a 5%
significance level with 22 d.f. (24 − 2) equates to 0.423. Any
rs value greater than this critical value can be considered a
significant correlation at the 5% level.</p>
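      <p>For rankings without ties, the rs statistic follows the standard formula rs = 1 − 6Σd²/(n(n² − 1)), which can be sketched as follows (a hypothetical spearman_rs helper; any rs above the critical value of 0.423 would count as significant here):</p>

```python
def spearman_rs(rank_a, rank_b):
    """Spearman's rank correlation coefficient for two rankings with no
    ties: rs = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

human = [1, 2, 3, 4]
print(spearman_rs(human, human))        # 1.0 for identical rankings
print(spearman_rs(human, human[::-1]))  # -1.0 for fully reversed rankings
```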
    </sec>
    <sec id="sec-8">
      <title>5. DISCUSSION</title>
      <p>
        We observed the actions taken by the participants of the user
study when creating their image mosaics. It was clear that
the majority of users performed reflection-in-action when
assessing the relevance (or suitability) of images retrieved
from the database for use in their image mosaics. As
participants of a Mosaic Test were able to perform this
reflection-in-action [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], it is clear that the Mosaic Test also overcomes
the first of the two major drawbacks present in current
image retrieval evaluation methods. As shown in Table 1, the
MPEG-7 colour structure descriptor (MPEG-7 CST) was
the only colour descriptor (and associated distance measure)
we found to correlate with human perceptions of image
mosaic distance at the 5% significance level. Therefore, by
measuring the L1 (or city-block) distance between the MPEG-7
CSTs of the target image and user-generated image mosaics,
the Mosaic Test can automatically calculate the relevance
of retrieved images in a manner that correlates with human
perception, thus overcoming the second major drawback of
existing image retrieval evaluation methods for
benchmarking colour-based image retrieval systems (the reliance on a
highly subjective image database ground-truth).
      </p>
    </sec>
    <sec id="sec-9">
      <title>6. CONCLUSION</title>
      <p>
        Current image retrieval system evaluation methods have two
fundamental drawbacks that result in them being
unsuitable for evaluating and benchmarking colour-based image
retrieval systems. These evaluation strategies do not enable
users to perform the practice of reflection-in-action [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], in
which creative users assess project modifications within the
context of the creative piece they are working on. The
existing image retrieval system evaluation methods also rely
heavily upon highly subjective image database ground-truths
when assessing the relevance of images selected by test users
or returned by a system. As a result of these drawbacks, no
method currently exists for reliably evaluating and
benchmarking colour-based image retrieval systems. In this paper,
we have introduced the Mosaic Test which has been
developed to address the current problem, by providing a reliable
means by which to evaluate colour-based image retrieval
systems.
      </p>
      <p>
        The findings of a user study reveal that the Mosaic Test
overcomes the two major drawbacks associated with existing
evaluation methods used in the research domain of image
retrieval. As well as providing valuable effectiveness data
relating to efficiency and user workload, the Mosaic Test
enables participants to reflect on the relevance of retrieved
images within the context of their image mosaic (i.e.,
perform reflection-in-action [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]). The Mosaic Test is also able
to automatically measure the relevance of retrieved images
in a manner which correlates with the perceptions of
multiple human assessors, by computing MPEG-7 colour
structure descriptors from the user-generated image mosaics and
their corresponding target images, and calculating the L1
(or city-block) distance between them. As a result of our
findings, we propose that the Mosaic Test be adopted in all
future research evaluating the effectiveness of colour-based
image retrieval systems. Future work will be to publicly
release the Mosaic Test Tool and procedural documentation
for other researchers in the domain of content-based image
retrieval.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Exalead</surname>
          </string-name>
          . Chromatik.
          <source>Accessed December 1</source>
          ,
          <year>2010</year>
          , at: http://chromatik.labs.exalead.com/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S. G.</given-names>
            <surname>Hart</surname>
          </string-name>
          .
          <article-title>NASA-Task Load Index (NASA-TLX); 20 Years Later</article-title>
          .
          <source>In Proceedings of the Human Factors and Ergonomics Society 50th Annual Meeting</source>
          , pages
          <fpage>904</fpage>
          –
          <lpage>908</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Zabih</surname>
          </string-name>
          .
          <article-title>Image Indexing Using Color Correlograms</article-title>
          .
          <source>In Computer Vision and Pattern Recognition</source>
          , pages
          <fpage>762</fpage>
          –
          <lpage>768</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Huiskes</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Lew</surname>
          </string-name>
          .
          <article-title>The MIR Flickr Retrieval Evaluation</article-title>
          .
          <source>In ACM International Conference on Multimedia Information Retrieval</source>
          , pages
          <fpage>39</fpage>
          –
          <lpage>43</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] idee Inc. idee MultiColr Search Lab.
          <source>Accessed November 2</source>
          ,
          <year>2010</year>
          , at: http://labs.ideeinc.com/multicolr.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Imagekind</given-names>
            <surname>Inc</surname>
          </string-name>
          .
          <article-title>Shop Art by Color</article-title>
          .
          <source>Accessed November 2</source>
          ,
          <year>2010</year>
          , at: http://www.imagekind.com/shop/ColorPicker.aspx.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T. K.</given-names>
            <surname>Lau</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>King</surname>
          </string-name>
          .
          <article-title>Montage: An Image Database for the Fashion, Textile, and Clothing Industry in Hong Kong</article-title>
          .
          <source>In Third Asian Conference on Computer Vision</source>
          , pages
          <fpage>410</fpage>
          –
          <lpage>417</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Squire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marchand-Maillet</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Pun</surname>
          </string-name>
          .
          <article-title>Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals</article-title>
          .
          <source>Pattern Recognition Letters</source>
          ,
          <volume>22</volume>
          (
          <issue>5</issue>
          ):
          <fpage>593</fpage>
          –
          <lpage>601</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nakade</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Karule</surname>
          </string-name>
          . Mosaicture:
          <article-title>Image Mosaic Generating System Using CBIR Technique</article-title>
          .
          <source>In International Conference on Computational Intelligence and Multimedia Applications</source>
          , pages
          <fpage>339</fpage>
          –
          <lpage>343</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Picitup</surname>
          </string-name>
          . Picitup.
          <source>Accessed January 21</source>
          ,
          <year>2011</year>
          , at: http://www.picitup.com/.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>W.</given-names>
            <surname>Plant</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Schaefer</surname>
          </string-name>
          .
          <article-title>Evaluation and Benchmarking of Image Database Navigation Tools</article-title>
          . In International Conference on Image Processing,
          <source>Computer Vision, and Pattern Recognition</source>
          , pages
          <fpage>248</fpage>
          –
          <lpage>254</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Schön</surname>
          </string-name>
          .
          <article-title>The Reflective Practitioner: How Professionals Think in Action</article-title>
          .
          <source>Basic Books</source>
          ,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Sikora</surname>
          </string-name>
          .
          <article-title>The MPEG-7 Visual Standard for Content Description - An Overview</article-title>
          .
          <source>IEEE Transactions on Circuits and Systems for Video Technology</source>
          ,
          <volume>11</volume>
          (
          <issue>6</issue>
          ),
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Silvers</surname>
          </string-name>
          . Photomosaics:
          <article-title>Putting Pictures in their Place</article-title>
          .
          <source>Master's thesis</source>
          , Massachusetts Institute of Technology,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>