<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Enabling Advanced Context-Based Multimedia Interpretation Using Linked Data</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Ghent University</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Multimedia Lab</institution>
          ,
          <addr-line>Gaston Crommenlaan 8 bus 201</addr-line>
          ,
          <addr-line>B-9050 Ledeberg-Ghent</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Current search technologies can only harness the ever-increasing amount of multimedia data when sufficient metadata exists. Several annotations are already available, yet they seldom cover all aspects. The generation of additional metadata proves costly; therefore, efficient multimedia retrieval requires automated annotation methods. Current feature extraction algorithms are limited because they do not take context into account. In this article, we indicate how Linked Data can provide information that is vital to create an interpretation context. As a result, advanced interactions between algorithms, information and context will enable more advanced interpretation of multimedia data. Eventually, this will reflect in better search possibilities for the end user.</p>
      </abstract>
      <kwd-group>
        <kwd>Linked Data</kwd>
        <kwd>multimedia interpretation</kwd>
        <kwd>Semantic Web</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        A tremendous increase in multimedia content production and consumption
characterized the last decade and will continue to shape the Web for generations
to come. The biggest challenge is to serve consumers the content they need in
a convenient format and in a seemingly instantaneous way. The availability of
different search and browsing options (e.g., keyword search, faceted search, and
content-based search) enables efficient retrieval, yet these techniques require rich
metadata [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] in order to offer advanced retrieval operations.
      </p>
      <p>Metadata generation remains a tedious and often manual task that fortunately
can be assisted by automatic algorithms. However, these algorithms produce
inexact results with an often unknown reliability. In this paper, we argue that
Linked Data can play a crucial role in the annotation and interpretation of
multimedia data by assisting algorithms. Additionally, the results of this annotation
task can be published back to the Semantic Web, contributing to the growth
of the Linked Data Cloud. This feedback loop proves important for annotating
future data, as depicted in Fig. 1.</p>
      <p>The success rate of current search techniques strongly depends on the
availability of correct annotations, as illustrated below.</p>
      <p>[Fig. 1: a multimedia item enters the annotation process, which consults the Linked Data Cloud and produces an annotated multimedia item.]</p>
      <p>
Keyword search Current search engines determine the content of an image by
that of the surrounding text, which is inadequate for two reasons:
1. Although a strong correlation between image contents and the surrounding
text usually exists, this is not always the case. Trivial examples account for this
(e.g., try searching for caption or alt), as well as more realistic examples (e.g.,
a search for two people's names often also returns other people).
2. Even a strong correlation does not imply understanding of the image contents.
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] lists some of the categories the ambiguous query Washington yields:
persons, buildings, scenery, maps… In 2008, Google included the option to
narrow the search to faces [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] (in fact a form of faceted search), but this
does not solve the correlation problem, nor does it help for the other categories.
Faceted search Faceted search or faceted browsing [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] enables searching
through images in a faceted classification, i.e., simultaneous classifications in
multiple categories. Evidently, this classification into categories requires annotations
for each image, indicating its degree of membership in each of the categories.
Content-based search Content-based search techniques [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] measure the
similarity of indexed items to sample items. However, images with very high visual
similarity to the sample are, for example, unlikely to interest the user, who already
has the sample image. Instead, images with similar content can be relevant
and could be explored with techniques such as faceted browsing, again requiring
annotations. Conversely, content-based search can also be used to extend the
other techniques, such as Google's "find similar images".
      </p>
      <p>Another major benefit of rich annotations is that they enable us to link
imagery of subjects (such as George Washington and the White House) to their
corresponding entities in the Linked Data Cloud, for example by using their
DBpedia identifiers. Therefore, related images can show up in any search that
results in entities from the Linked Data Cloud. That way, it would be possible to
automatically provide keyword and faceted search on images, using the combined
available knowledge about the entity.</p>
    </sec>
    <sec id="sec-2">
      <title>Available annotations</title>
      <p>Not all annotations have to be generated: some are already present upon
acquisition of the multimedia item, while others are added by consumers or people in a social
network setting.</p>
      <sec id="sec-2-1">
        <title>Metadata upon acquisition</title>
        <p>
          The EXIF data format, describing metadata in digital photographs, is widely
known [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. It contains a lot of information that is mostly irrelevant for identifying
photograph contents, such as camera type and aperture. Some recent cameras,
however, offer the option to include geographic coordinates in the EXIF data,
thanks to a GPS receiver. The rise of versatile mobile devices that serve as
phone, camera, and GPS navigator has brought this into the mainstream. These
coordinates can be translated afterwards into a named location (country, city or
even venue) and linked to the corresponding Linked Data entity. Furthermore, in
combination with the EXIF timestamp, this information can be used to link the
photograph to a particular event happening at that location and time [
          <xref ref-type="bibr" rid="ref1 ref10 ref7">1, 7, 10</xref>
          ].
        </p>
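        <p>As a minimal illustration of the first translation step, the sketch below converts EXIF GPS degree/minute/second rationals to signed decimal coordinates, the form needed for reverse geocoding; the function name and the Ghent example values are ours, and the geocoding and linking steps themselves are left out.</p>

```python
# Sketch (illustrative): converting EXIF GPS rationals to decimal degrees.
# The reverse-geocoding step that turns these into a named location, and the
# linking to a Linked Data entity, are outside the scope of this sketch.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF GPS degree/minute/second values into signed decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern latitudes and western longitudes are negative by convention.
    return -decimal if ref in ("S", "W") else decimal

# Example values roughly matching Ghent, Belgium (51°03' N, 3°43' E).
latitude = dms_to_decimal(51, 3, 0, "N")
longitude = dms_to_decimal(3, 43, 0, "E")
```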
      </sec>
      <sec id="sec-2-2">
        <title>Consumer and social metadata</title>
        <p>
          Another common source of metadata is annotations added by consumers upon
publication of their material. Moreover, some social networks allow users
to tag the content of others. Well-known examples are tags describing a
variety of topics (place, time, scenery, people…). These tags, however, suffer from
the problem that they are informal. Therefore, several efforts to enhance their
quality exist [
          <xref ref-type="bibr" rid="ref3 ref5">3, 5</xref>
          ].
        </p>
      <p>Importantly, there is a growing tendency to stimulate users (perhaps
unknowingly) to provide more formal tags. As examples, we cite the practices of
person tagging, which creates a semantic link between a person and a photograph,
and place tagging of photographs. Modern user interfaces have rendered this task
intuitive for the majority of Internet users. Of course, the responsibility for the
accuracy of these tags lies with these (sometimes anonymous) users, and their
accuracy is therefore arbitrary. It is tempting to assume that the formality of these
tags is an indication of their reliability, which is a fallacy. In fact, their ease of
creation can quickly lead to many formal but incorrect tags.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Production metadata</title>
        <p>
          In production environments, there is an increasing tendency to generate metadata
as part of the content creation process [
          <xref ref-type="bibr" rid="ref11 ref12 ref2">2, 11, 12</xref>
          ]. In each stage of the production
process, relevant metadata is appended to that stage's end product, and metadata
from previous stages can be reused in later ones. Ongoing research
in this area reveals how these metadata can be turned into annotations that drive
search applications.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Issues with contextless annotation</title>
      <p>Automated multimedia analysis research has accomplished several milestones over
the past decades in domains such as image feature extraction. While advanced
techniques for various tasks exist, such as face recognition algorithms, most of
them remain error-prone.</p>
      <p>
        Another problem is that the output of these algorithms is often not formalized.
For instance, a face recognition algorithm may recognize a certain face in an
image, but does not output an entity that is linked to its corresponding Linked
Data counterpart. A solution to this issue has been proposed in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], where RDF
serves as both input and output of multimedia processing algorithms.
      </p>
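      <p>To make this idea concrete, the sketch below renders one recognition result as a Turtle snippet whose subject is linked to a DBpedia resource; the ex: vocabulary is invented for illustration and is not the vocabulary proposed in [14].</p>

```python
# Sketch: serializing a face recognition result as RDF (Turtle), so that the
# algorithm's output is an entity linked to its Linked Data counterpart.
# The ex: vocabulary and the photo URI below are invented for illustration.

TEMPLATE = """\
@prefix ex: <http://example.org/vocab#> .
@prefix dbr: <http://dbpedia.org/resource/> .

<{image}> ex:hasRegion [
    ex:depictsFaceOf dbr:{person} ;
    ex:confidence {confidence:.2f}
] .
"""

def describe_face(image_uri, dbpedia_name, confidence):
    """Render a Turtle description of one recognized face."""
    return TEMPLATE.format(image=image_uri, person=dbpedia_name,
                           confidence=confidence)

snippet = describe_face("http://example.org/photo42", "Bill_Clinton", 0.87)
```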
      <p>Still, the main issue remains the inability of feature extraction algorithms
to collaborate on the annotation of a given multimedia item. Each algorithm is
highly specialized, which is both its greatest strength and its greatest weakness.
This degree of specialization indeed comes at the expense of losing an overview of
the item under annotation. Humans, in contrast, possess the remarkable ability to
shift rapidly between different levels of abstraction. This is why we can recognize
faces in context, while we are unable to do so without it.</p>
      <p>Fig. 2 shows a clear example of a photograph of a face with and without
context. It is evident that no human and no algorithm, now or in the future,
will be able to recognize the person depicted in the photograph on the left with
acceptable certainty, given no additional information. Our human ability to
unconsciously zoom in and out between different levels of detail provides us exactly
with this information, given the photograph on the right. Upon recognizing
one person as Hillary Clinton, we instantaneously realize with fairly high
certainty that the other person is Bill Clinton.</p>
      <p>
        This clearly illustrates the importance of context-awareness for feature extraction
algorithms, a task that is hard because of the complexity involved. After all, it
is impossible for algorithms to anticipate all possible context parameters.
Therefore, we should look into platforms that are able to integrate algorithms
and context by means of knowledge and reasoning [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>Creating annotation context with Linked Open Data</title>
      <sec id="sec-4-1">
        <title>Knowledge-driven annotation</title>
        <p>Information that connects various concepts in different ways is readily available in
several semantic data sources, including DBpedia. We can imagine an annotation
platform consulting these data sources to retrieve information to either complete
or validate found results. The annotation process could in fact take place entirely
on the Semantic Web and consist of:
– general and specific knowledge from the Linked Data Cloud;
– intelligent services such as feature extraction algorithms;
– a Semantic Web reasoner applying rule-based knowledge to entities.
The general knowledge could drive the process, deriving concrete knowledge about
a specific multimedia item.</p>
        <p>
          Linked Data knowledge On the one hand, automated interpretation requires
general knowledge about annotations, consisting of both ontological and rule-based
knowledge. In the case of images, an example of the former is "images can contain
regions that depict a face", and an example of the latter is "if an image contains
a region that depicts a person's face, then the image depicts the person". This
general knowledge helps the platform decompose the task of interpreting an image
into smaller subtasks. On the other hand, specific knowledge about concrete topics
is essential to provide a context for the item under interpretation. Its presence
proves vital for complex reasoning schemes. Examples include ontological and
instance-based knowledge about people and interpersonal relations.
Intelligent services Automated annotation algorithms could interact with
Semantic Web knowledge if we approach algorithms as Semantic Web services [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
We can describe algorithms as regular Web services and invoke them similarly.
Each input and output parameter thus gains semantic value, which enables us to
form complex service compositions. Together with general and specific Linked
Data knowledge, they enable diverse intelligent interactions with the context.
        </p>
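        <p>The rule-based knowledge above can be sketched as a single forward-chaining step over an in-memory triple set; the predicate names are illustrative only, not a standard vocabulary.</p>

```python
# Sketch: applying the rule "if an image contains a region that depicts a
# person's face, then the image depicts the person" to (subject, predicate,
# object) triples. Predicate names are invented for illustration.

def apply_face_rule(triples):
    """Forward-chain the face rule once, returning the inferred triples."""
    inferred = set()
    for img, p1, region in triples:
        if p1 != "hasRegion":
            continue
        for r, p2, person in triples:
            # The region's face annotation lets us infer a fact about the image.
            if r == region and p2 == "depictsFaceOf":
                inferred.add((img, "depicts", person))
    return inferred

facts = {
    ("photo42", "hasRegion", "region1"),
    ("region1", "depictsFaceOf", "Hillary_Clinton"),
}
new_facts = apply_face_rule(facts)
```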
        <p>As an example, we consider a face recognition algorithm that operates on
Fig. 2. The visible features of Bill Clinton alone will be insufficient to achieve
successful recognition. The nearby presence of Hillary Clinton could substantiate
the assumption that the other person relates to her in some way. A search of
the Linked Data Cloud for relatives, friends and co-workers would yield a list
of possibilities, which we can pass in a semantic way to the recognition service.
As a result, the algorithm could take these suggestions into consideration and
limit its search space, significantly increasing the odds of arriving at the correct
conclusion. We could then further cross-check the solution against
available metadata, such as GPS location and time, and perhaps retrieve the event
at which the photograph was taken.
Reasoning Semantic Web reasoning plays a crucial role in this application
domain. In the above example, we intuitively assumed that the simultaneous
depiction of two persons implies some kind of connection between them. This
knowledge, which can be derived for instance by statistical methods, needs to
be available formally. A reasoner is then able to instantiate this knowledge on
concrete entities, e.g., to retrieve people connected to Hillary Clinton.</p>
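        <p>Such a retrieval could start from a SPARQL query against DBpedia. The sketch below builds one using only the dbo:spouse relation; a real platform would of course consider many more relation types.</p>

```python
# Sketch: building a SPARQL query for persons connected to a given DBpedia
# resource. Only dbo:spouse is queried here, as one illustrative relation.

def connected_people_query(person):
    """Return a SPARQL query for persons linked to `person` via dbo:spouse."""
    return f"""\
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?other WHERE {{
  {{ dbr:{person} dbo:spouse ?other . }}
  UNION
  {{ ?other dbo:spouse dbr:{person} . }}
}}
"""

query = connected_people_query("Hillary_Clinton")
```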
        <p>An important topic here is dealing with imperfections. Most data in the
Linked Data Cloud are represented as absolute facts, without any indication of
possible vagueness or uncertainty. While this is sometimes applicable and even
desired, automated interpretation inherently needs to deal with predictions and
assumptions. As a consequence, these imperfections need to propagate when
reasoning on intermediate data.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Feedback to the Linked Data Cloud</title>
        <p>Furthermore, results from an annotation process can also be pushed back to the
Linked Data Cloud. As indicated, multimedia searches are an interesting and
important application. Feedback enables other useful applications as well, e.g.:
– New data could be inferred from the generated annotations. For
example, the fact that several people were at a given place at a given time
could indicate their attendance at an event.
– Feedback data could enhance future annotation tasks, for example by means
of statistics on which people appear together frequently.</p>
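        <p>The first of these applications can be sketched by grouping annotations on shared place and time; the annotation tuples and the two-person threshold below are illustrative assumptions.</p>

```python
# Sketch: inferring possible joint event attendance from annotations, by
# grouping (person, place, time) tuples on shared place and time. The data
# and the two-person threshold are illustrative assumptions.

from collections import defaultdict

def possible_events(annotations):
    """Return the groups of people who share a place and a time."""
    groups = defaultdict(set)
    for person, place, time in annotations:
        groups[(place, time)].add(person)
    # Several people at one place and time suggest a common event.
    return {key: people for key, people in groups.items() if len(people) >= 2}

annotations = [
    ("Hillary_Clinton", "Washington", "2010-05-01"),
    ("Bill_Clinton", "Washington", "2010-05-01"),
    ("Barack_Obama", "Chicago", "2010-05-01"),
]
events = possible_events(annotations)
```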
        <p>In general, feedback will potentially link many different concepts, contributing to
the coherence of the Linked Data Cloud while at the same time complementing
it with multimedia data.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Platform model</title>
        <p>Fig. 3. Use of Linked Data in a context-aware interpretation process</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Concluding remarks and future work</title>
      <p>In this article, we outlined how Linked Data forms a cornerstone of a
context-based multimedia interpretation process. The incorporation of Linked Data
creates a holistic view that integrates the different aspects of annotation. Several
important topics for future research emerge, including:
– the interaction of feature extraction algorithms with Linked Data;
– the representation of uncertainties associated with multimedia interpretation;
– the feedback of the interpretation process towards the Linked Data Cloud.
All these topics illustrate the enormous potential of Linked Data for advanced
automated multimedia interpretation.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The research activities as described in this paper were funded by Ghent University,
the Interdisciplinary Institute for Broadband Technology (IBBT), the Institute for
the Promotion of Innovation by Science and Technology in Flanders (IWT), the
Fund for Scienti c Research Flanders (FWO-Flanders), and the European Union.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oakes</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tait</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>A location data annotation system for personal photograph collections: Evaluation of a searching and browsing tool</article-title>
          .
          <source>In: International Workshop on Content-Based Multimedia Indexing (CBMI 2008)</source>
          (Jun
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Debevere</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Deursen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Rijsselbergen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mannens</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matton</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De</surname>
            <given-names>Sutter</given-names>
          </string-name>
          , R., Van de Walle, R.:
          <article-title>Enabling Semantic Search in a News Production Environment</article-title>
          .
          <source>Proceedings of the 5th International Conference on Semantic and Digital Media Technologies (SAMT</source>
          <year>2010</year>
          ) (
          <year>Dec 2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Neve</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plataniotis</surname>
            ,
            <given-names>K.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ro</surname>
            ,
            <given-names>Y.M.</given-names>
          </string-name>
          :
          <article-title>MAP-based image tag recommendation using a visual folksonomy</article-title>
          .
          <source>Pattern Recognition Letters</source>
          <volume>31</volume>
          (
          <issue>9</issue>
          ),
          <fpage>976</fpage>
          –
          <lpage>982</lpage>
          (Jan
          <year>2010</year>
          ), http://dx.doi.org/10.1016/j.patrec.2009.12.024
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>O'Malley</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>New search-by-style options for Google Image Search</article-title>
          .
          <source>Official Google Blog (Dec</source>
          <year>2008</year>
          ), http://googleblog.blogspot.com/2008/12/new-search-by-style-options-for-google.html
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Overell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , Sigurbjornsson, B.,
          <string-name>
            <surname>Van</surname>
            <given-names>Zwol</given-names>
          </string-name>
          ,
          <string-name>
            <surname>R.</surname>
          </string-name>
          :
          <article-title>Classifying tags using open content resources</article-title>
          .
          <source>Proceedings of the Second ACM International Conference on Web Search and Data</source>
          Mining, pp.
          <fpage>64</fpage>
          –
          <lpage>73</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Rahurkar</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tsai</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dagli</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Image Interpretation Using Large Corpus: Wikipedia</article-title>
          .
          <source>Proceedings of the IEEE</source>
          <volume>98</volume>
          (
          <issue>8</issue>
          ),
          <fpage>1509</fpage>
          –
          <lpage>1525</lpage>
          (
          <year>2010</year>
          ), http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5484723
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Sarin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nagahashi</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miyosawa</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kameyama</surname>
          </string-name>
          , W.:
          <article-title>On automatic contextual metadata generation for personal digital photographs</article-title>
          .
          <source>In: The 9th International Conference on Advanced Communication Technology (Feb</source>
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schirling</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Metadata standards roundup</article-title>
          .
          <source>IEEE MultiMedia (Jan</source>
          <year>2006</year>
          ), http://www.computer.org/portal/web/csdl/doi/10.1109/MMUL.2006.34
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Tesic</surname>
          </string-name>
          , J.:
          <article-title>Metadata practices for consumer photos</article-title>
          .
          <source>IEEE MultiMedia (Jan</source>
          <year>2005</year>
          ), http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1490501
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Troncy</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malocha</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fialho</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Linking events with media</article-title>
          .
          <source>Proceedings of the 6th International Conference on Semantic Systems (Jan</source>
          <year>2010</year>
          ), http://portal.acm.org/citation.cfm?id=1839759
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Van Rijsselbergen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , Van De Keer,
          <string-name>
            <surname>B.</surname>
          </string-name>
          :
          <article-title>Movie script markup language</article-title>
          .
          <source>Proceedings of the 9th ACM symposium on Document engineerin (Jan</source>
          <year>2009</year>
          ), http://portal.acm.org/citation.cfm?id=1600193.1600231
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Van Rijsselbergen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Verwaest</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van De Keer</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Van de Walle, R.:
          <article-title>Introducing the Data Model for a Centralized Drama Production System</article-title>
          .
          <source>IEEE International Conference on Multimedia and Expo</source>
          ,
          <source>2007 (Jan</source>
          <year>2007</year>
          ), http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4284725
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Veltkamp</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tanase</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>A survey of content-based image retrieval systems. Content-based image and video retrieval (</article-title>
          <year>Jan 2002</year>
          ), http://www.cs.uu.nl/groups/MG/multimedia/publications/art/socbirs02.pdf
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Verborgh</surname>
          </string-name>
          , R.,
          <string-name>
            <surname>Van Deursen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Roo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mannens</surname>
          </string-name>
          , E., Van de Walle, R.:
          <article-title>SPARQL Endpoints as Front-end for Multimedia Processing Algorithms</article-title>
          .
          <source>In: Proceedings of the Fourth International Workshop on Service Matchmaking and Resource Retrieval in the Semantic Web at the 9th International Semantic Web Conference (ISWC</source>
          <year>2010</year>
          ) (
          <year>Nov 2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Verborgh</surname>
          </string-name>
          , R.,
          <string-name>
            <surname>Van Deursen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mannens</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poppe</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , Van de Walle, R.:
          <article-title>Enabling Context-aware Multimedia Annotation by a Novel Generic Semantic ProblemSolving Platform</article-title>
          .
          <article-title>Multimedia Tools and Applications special issue on Multimedia and Semantic Technologies for Future Computing Environments (</article-title>
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Yee</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Swearingen</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hearst</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Faceted metadata for image search and browsing</article-title>
          .
          <source>Proceedings of the Special Interest Group on Computer–Human Interaction (Jan</source>
          <year>2003</year>
          ), http://portal.acm.org/citation.cfm?id=642611.642681
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>