<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>e-Learning Media Format for Enhanced Consumption on Mobile Application</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sihyoung Lee</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Seungji Yang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yong Man Ro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hyoung Joong Kim</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Image and Video Systems Lab., Information and Communications University (ICU)</institution>
          ,
          <addr-line>Munjiro 119, Yuseong, Daejeon</addr-line>
          ,
          <country country="KR">South Korea</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Information Security, Korea University</institution>
          ,
          <addr-line>Anam-dong, Seongbuk-Gu, Seoul</addr-line>
          ,
          <country country="KR">South Korea</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>As the use of Internet-based learning has increased significantly over the last decade, new models, methods and tools to support e-Learning have become important. In this paper, a multimedia application format (MAF) for e-Learning is proposed. The e-Learning MAF must meet various requirements that enable users to search, evaluate, acquire and use e-Learning contents in ubiquitous environments. To meet these requirements, we design an ISO Base Media File Format file structure and an associated MPEG-7 e-Learning metadata suite. In particular, MPEG-7 is used for the e-Learning metadata so that users can consume e-Learning MAF contents with enhanced functionalities such as easy and fast navigation using content-based retrieval. To guarantee interoperability with other e-Learning formats, spaces that can contain the metadata of other formats are provided. We implemented the proposed system on a mobile device and showed the usability of the e-Learning MAF format and the MPEG-7 e-Learning metadata. Furthermore, a prototype system to encode and decode the e-Learning MAF is realized on the mobile device.</p>
      </abstract>
      <kwd-group>
        <kwd>e-Learning</kwd>
        <kwd>MAF</kwd>
        <kwd>MPEG-7</kwd>
        <kwd>e-Learning metadata</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>As the use of the Internet keeps rising every year, digital multimedia contents
are also becoming abundant. In particular, to facilitate access to knowledge and to
meet the needs of lifelong learning in these digital environments, e-Learning has
become a good alternative to conventional learning paradigms. As learning environments
become ubiquitous, e-Learning methods should also change toward being portable,
flexible and adaptive.</p>
      <p>
        In recent years, much research has been done on e-Learning tools and products
with different pedagogical models and target audiences. Most of this research
concentrates on e-Learning metadata, content packaging and prototype systems to
manage e-Learning contents. For example, the sharable content object reference model
(SCORM) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] was proposed by advanced distributed learning (ADL); it is a collection of
standards and descriptions for managing learning objects and making them portable
from one learning management system to another. SCORM adopts metadata elements from
different e-Learning standards groups. Also known as IMS learning resource metadata,
IEEE learning object metadata (LOM), developed by the IEEE learning technology
standards committee (LTSC), is both elaborate and general in character, containing a
broad range of elements [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. CanCore was developed by a group of national and provincial educators and
technology developers [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and is fully compatible with the IMS Learning Resource Metadata Information
Model. The Education Network Australia (EdNA) Metadata standard [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] is maintained by the EdNA Metadata Standard Working Group; its purpose is to
support interoperability across all sectors of education and training in Australia in
the area of online resource discovery and management. The Gateway to Educational
Materials (GEM) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] has created a metadata element set based on Dublin Core [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] with the addition of education-specific elements.
      </p>
      <p>However, several challenges remain in many conventional e-Learning methods,
as follows.</p>
      <p>1) Although e-Learning usually requires effective association and interaction
between e-Learning resources and their metadata, there has been minimal research on
this association and interaction.</p>
      <p>2) e-Learning contents are typically composed of a single media resource such
as a single video, audio track or text. If an e-Learning content needs to load
multi-modal resources for effective learning, those resources are hard to associate
with one another. In particular, e-Learning contents with several multi-modal
resources are often too big to be consumed on portable devices because of limited
storage and computational power.</p>
      <p>3) Existing e-Learning contents have their own file formats and related
metadata formats dedicated to specific applications. The absence of a standardized
file format for containing media resources and e-Learning metadata often limits the
use of e-Learning contents across different devices.</p>
      <p>This paper presents a file structure for e-Learning that allows an e-Learning
content to carry both media resources and metadata. It also describes a metadata set
for e-Learning that allows users to consume e-Learning contents more efficiently.</p>
      <p>This paper is organized as follows. Section 2 describes the MAF, which forms
the basis of the e-Learning MAF. e-Learning scenarios, the MPEG-7 e-Learning metadata
and the e-Learning MAF are presented in Section 3. The utility of the e-Learning MAF
is presented in Section 4. The conclusion is given in Section 5.</p>
      <p>2 MAF</p>
      <p>MPEG-A, or the ISO/IEC 23000 standard, is a recent addition to the well-known
standards developed by the Moving Picture Experts Group (MPEG). MPEG-A aims to
facilitate the swift development of innovative, standards-based multimedia
applications and services for interoperable and augmented use of widely deployed
multimedia formats such as MPEG-2, MPEG-4, MP3 and JPEG. To meet this goal, the
MPEG-A standard specifies MAFs.</p>
      <p>[Figure: Structure of an ISO file — a moov box containing trak(video) and trak(audio), an mdat box holding interleaved, time-ordered video and audio frames, and other boxes.]</p>
      <p>
        MAF selectively borrows from current technologies, including not only MPEG
standards but also non-MPEG standards such as JPEG and JPEG2000. MAF presents a
framework combining them into a single specification with related metadata. A
standard should specify as little as possible while guaranteeing maximum
interoperability. To this end, MAF specifies how to combine metadata with media
resources. MAF is derived from the ISO Base Media File Format [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which offers an efficient, flexible and extensible way to combine media
resources. The basic file structure of the ISO Base Media File Format is
object-oriented: a file is decomposed into a sequence of objects called boxes, and
all media data reside in boxes.
      </p>
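      <p>To make the box structure concrete, the following sketch (our own illustration, not part of the standard or of the proposed system) walks the top-level boxes of a byte stream laid out in the ISO Base Media File Format: each box begins with a 4-byte big-endian size and a 4-character type code such as ftyp, moov or mdat. Extended 64-bit sizes and nested boxes are omitted for brevity.</p>

```python
import struct

def parse_top_level_boxes(data: bytes):
    """Walk the top-level boxes of an ISO Base Media File Format stream.

    Each box starts with a 4-byte big-endian size (covering the whole box,
    header included) followed by a 4-character type code such as 'ftyp',
    'moov', 'meta' or 'mdat'. Returns a list of (type, size) pairs.
    Simplified sketch: 64-bit 'largesize' fields and nesting are not handled.
    """
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:  # malformed or unsupported size field
            break
        boxes.append((box_type.decode("ascii"), size))
        offset += size
    return boxes

# Build a tiny synthetic file: an 'ftyp' box and an empty 'mdat' box.
ftyp = struct.pack(">I4s4sI4s", 20, b"ftyp", b"isom", 0, b"isom")
mdat = struct.pack(">I4s", 8, b"mdat")
print(parse_top_level_boxes(ftyp + mdat))  # [('ftyp', 20), ('mdat', 8)]
```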
      <p>[Figure: Structure of a MAF file — ftyp, meta and moov boxes, the moov box containing meta and trak boxes, and an mdat box for the media data.]</p>
      <p>
We first consider useful scenarios to discover requirements for the e-Learning
MAF. Suppose there are two different users who consume an e-Learning MAF, one on a PC
and one on a mobile device such as a PDA or cellular phone. If users want to consume
an e-Learning MAF for a specific purpose, service providers can be expected to offer
the best or most desirable selection among many e-Learning contents for individual
users. PC users typically enjoy better consumption conditions than mobile users,
whose network bandwidth and processing capability are limited. Users start consuming
a specific content after a search process, and there should be a verification step
before consumption because a search recommends several contents for each query the
user issues. In the mobile environment, however, it is hard to verify all recommended
contents because of limited network bandwidth and processing capability, such as
computation power and memory size. Therefore, for mobile applications, a lightweight
e-Learning MAF containing just the metadata about the content and reference
identifiers of the media resources can be useful. It helps the mobile user choose
desirable content without the time-consuming work of verifying irrelevant contents,
because the embodied metadata provides useful information for identifying the content
itself, such as creator, lecturer, subject and abstract. The media resources required
to consume the e-Learning MAF can then be obtained in full by downloading or
streaming them in real time. Consequently, the mobile user can consume the desired
e-Learning MAF entirely in spite of the limitations.</p>
      <sec id="sec-1-1">
        <title>3.2 MPEG-7 e-Learning metadata</title>
        <p>Metadata is important for e-Learning in the context of the Semantic Web. Based on
e-Learning metadata, e-Learning repositories can offer effective search for e-Learning
contents. Several widely used e-Learning metadata schemes currently exist, including
CanCore, IMS Learning Resource Meta-data, IEEE LOM and GEM metadata. CanCore is
educational metadata that is fully compatible with the IMS Learning Resource
Metadata Information Model. CanCore defines a sub-set of elements from the IMS model
for the efficient and uniform description of digital educational resources, and is
intended to facilitate effective interchange among learning objects. The IMS Global
Learning Consortium developed IMS Learning Resource Meta-data, but the schema has
since been superseded by and aligned with LOM. LOM, the most widely used metadata for
e-Learning, is defined by the IEEE LTSC. LOM outlines the minimal set of attributes
needed to allow learning objects (either digital or non-digital) to be managed,
located and evaluated. It is based on DC metadata. GEM metadata is based on DC
metadata with the addition of education-specific elements; it consists of 8 GEM
elements and 13 DC elements. The relation among these e-Learning metadata schemes can
be understood intuitively from Fig. 3, which shows that DC metadata is the basis for
e-Learning metadata, because all widely used e-Learning metadata schemes build on it.
DC metadata is an international standard for cross-domain information resource
description. DC metadata has 15 descriptors that resulted from an interdisciplinary
and international consensus-building effort. It is intended to co-exist with metadata
standards that offer other semantics. The 15 elements may appear in any order, and
each element is optional and repeatable.</p>
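        <p>Since the 15 DC elements are optional and repeatable, a DC record can be modeled as a mapping from element names to lists of values. The sketch below checks a record against the element set; the record shown is a hypothetical example for illustration only.</p>

```python
# The 15 elements of the Dublin Core metadata element set (ISO 15836).
DC_ELEMENTS = {
    "title", "creator", "subject", "description", "publisher",
    "contributor", "date", "type", "format", "identifier",
    "source", "language", "relation", "coverage", "rights",
}

def validate_dc(record: dict) -> bool:
    """A record is valid if every key is a DC element name and every value
    is a list of strings; elements are optional and repeatable, so missing
    keys and multiple values for one element are both allowed."""
    return all(
        key in DC_ELEMENTS and isinstance(values, list)
        and all(isinstance(v, str) for v in values)
        for key, values in record.items()
    )

# Hypothetical e-Learning record: repeatable 'creator', most elements omitted.
record = {
    "title": ["Introduction to Signal Processing"],
    "creator": ["S. Lee", "Y. M. Ro"],
    "language": ["en"],
}
print(validate_dc(record))                    # True
print(validate_dc({"lecturer": ["S. Lee"]}))  # False: not a DC element
```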
        <p>
          Most existing e-Learning metadata concentrate on effective and efficient search for
contents. Although existing e-Learning metadata are sufficient to describe the content
itself for efficient search, additional information is needed to meet users'
requirements for content-based retrieval. Because existing e-Learning metadata cannot
support content-based retrieval, we define MPEG-7 e-Learning metadata that describes
not only information about the content itself, for efficient searching and
acquisition, but also semantic descriptions for content-based retrieval. The proposed
MPEG-7 e-Learning metadata is based on DC metadata for effective and efficient
searching and acquisition of e-Learning content, like existing e-Learning metadata.
Moreover, it also describes semantic information about the content for content-based
retrieval. The MPEG-7 e-Learning metadata is described with the MPEG-7 MDS [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. The
MPEG-7 MDS is used to describe and annotate multimedia data. MPEG-7 is a standard for
describing multimedia content that supports some degree of interpretation of the
information's meaning, so that the description can be easily accessed by a device or
a computer. MPEG-7 is not aimed at any one application in particular; the elements
that MPEG-7 standardizes support as broad a range of applications as possible. MPEG-7
gives a more systematic and well-structured description than other metadata for
e-Learning. Only MPEG-7 can describe not only the temporal segment structure but also
a hierarchical decomposition of multimedia contents, which brings advantages in
accessing the elements and navigating them.
        </p>
        <p>[Fig. 3: The relation among e-Learning metadata — LOM, CanCore and GEM are all based on DC metadata.]</p>
        <p>Because the MPEG-7 e-Learning metadata is based on DC metadata, there is a
direct relation between them. As shown in Table 1, MPEG-7 can describe all DC metadata
elements. In addition to the DC metadata, the MPEG-7 e-Learning metadata specifies
semantic information about the content. Users can consume an e-Learning content more
efficiently by using this semantic information, which enables them to access and
navigate a specific theme or sub-content within the e-Learning content.</p>
        <p>Table 2 illustrates the organization of the proposed MPEG-7 e-Learning metadata
for collection-level description. The collection-level description describes an
e-Learning content collection, which contains not only multiple media resources but
also plural contents. In the proposed MPEG-7 e-Learning metadata, the collection-level
description is represented by Creation &amp; production, Media description, Usage
description and Content collection.</p>
        <p>Table 3 shows the organization of the MPEG-7 e-Learning metadata for item-level
description. The item-level description describes information about one e-Learning
resource that belongs to one or more collections. As the item-level description can
describe the collections within one e-Learning resource, it enables users to access
and navigate collections more efficiently. The proposed item-level description
consists of structure description, navigation and access description, creation and
production description, media description and usage description. The structure
description is a hierarchical structure that specifies the resources in terms of
spatiotemporal segments. The navigation and access description specifies summaries,
partitions and decompositions, and variations of the multimedia content to facilitate
browsing and retrieval.</p>
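        <p>The organization described above for Tables 2 and 3 can be summarized as a simple nested structure; the sketch below lists only the top-level parts named in the text, not the underlying MPEG-7 MDS tools.</p>

```python
# Top-level organization of the proposed MPEG-7 e-Learning metadata,
# as named in the text for Tables 2 and 3.
ELEARNING_METADATA = {
    "collection-level": [
        "Creation & production",
        "Media description",
        "Usage description",
        "Content collection",
    ],
    "item-level": [
        "Structure description",              # spatiotemporal segment hierarchy
        "Navigation and access description",  # summaries, partitions, variations
        "Creation and production description",
        "Media description",
        "Usage description",
    ],
}

for level, parts in ELEARNING_METADATA.items():
    print(f"{level}: {', '.join(parts)}")
```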
        <p>[Tables 2 and 3 also give the mapping of each description to MPEG-7 MDS tools.]</p>
        <p>The e-Learning contents are sequential media: as time passes, an e-Learning
content is divided into several sub-contents. The item-level description describes
information about these groups and segments.</p>
        <p>As shown in Fig. 4, an e-Learning content is built from temporal segments of the
media data. The temporal segments are grouped into groups, so the e-Learning content
is organized into several groups.</p>
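        <p>This grouping of segments can be sketched as an MPEG-7-style temporal decomposition. The element names below follow MPEG-7 MDS conventions (AudioVisualSegment, TemporalDecomposition, MediaTime), but they are simplified and unqualified, with hypothetical segment times, so this is an illustration of the structure rather than schema-valid MPEG-7.</p>

```python
import xml.etree.ElementTree as ET

def build_decomposition(groups):
    """Build a simplified MPEG-7-style temporal decomposition: an
    AudioVisualSegment decomposed into groups, each holding segments
    with (start, duration) times in seconds. Illustrative only."""
    root = ET.Element("AudioVisualSegment")
    decomp = ET.SubElement(root, "TemporalDecomposition")
    for name, segments in groups:
        group = ET.SubElement(decomp, "SegmentGroup", {"name": name})
        for seg_id, start, dur in segments:
            seg = ET.SubElement(group, "AudioVisualSegment", {"id": seg_id})
            time = ET.SubElement(seg, "MediaTime")
            ET.SubElement(time, "MediaTimePoint").text = f"T00:00:{start:02d}"
            ET.SubElement(time, "MediaDuration").text = f"PT{dur}S"
    return root

# Fig. 4: one content, two groups, six segments (times are hypothetical).
tree = build_decomposition([
    ("Group 1", [("seg1", 0, 10), ("seg2", 10, 10), ("seg3", 20, 10)]),
    ("Group 2", [("seg4", 30, 10), ("seg5", 40, 10), ("seg6", 50, 10)]),
])
print(len(tree.findall(".//AudioVisualSegment[@id]")))  # 6
```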
        <p>[Fig. 4: An e-Learning content composed of Group 1 and Group 2 over a media resource, with six temporal segments (Segment 1 through Segment 6) along the time axis t.]</p>
      </sec>
      <sec id="sec-1-2">
        <title>3.3 e-Learning MAF</title>
        <p>Fig. 5 illustrates the structure of the e-Learning MAF. Because the unified
framework converts media resource files such as avi, mp3 and JPEG into one unified
e-Learning file, e-Learning resource management systems can control e-Learning files
more efficiently. Information about the media resources required by an e-Learning
content, such as video, audio, image and text, resides in trak boxes within moov, and
the actual media resource data are in mdat. The e-Learning metadata allows users to
consume e-Learning contents with enhanced functionalities such as easy and fast
navigation using content-based retrieval. The proposed MPEG-7 e-Learning metadata is
located in meta within moov, and additional space to contain existing e-Learning
metadata can be located in an additional trak. Because of this additional space, the
proposed e-Learning MAF guarantees interoperability with other metadata for
e-Learning. If a user has a LOM metadata decoder, for example, the user can consume
LOM metadata located in a trak.</p>
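        <p>This layout, with MPEG-7 metadata in a meta box under moov, track information in trak and media data in mdat, can be illustrated by serializing a minimal box tree. This is only a sketch: a real e-Learning MAF needs many more boxes (full track headers, sample tables, a handler box inside meta, and so on), and the payloads here are placeholders.</p>

```python
import struct

def box(box_type: bytes, payload: bytes = b"") -> bytes:
    """Serialize one ISO Base Media File Format box: a 4-byte big-endian
    size covering header plus payload, the 4-character type, then the
    payload (which may itself be a concatenation of child boxes)."""
    return struct.pack(">I4s", 8 + len(payload), box_type) + payload

# Minimal sketch of the e-Learning MAF layout: MPEG-7 metadata in a
# 'meta' box inside 'moov', track info in 'trak', media data in 'mdat'.
mpeg7 = b"<Mpeg7>...</Mpeg7>"          # placeholder metadata payload
maf = (
    box(b"ftyp", b"isom" + struct.pack(">I", 0) + b"isom")
    + box(b"moov", box(b"meta", mpeg7) + box(b"trak"))
    + box(b"mdat", b"\x00" * 16)       # placeholder media data
)
print(len(maf))
```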
        <p>Scene description information specifies how the various objects are located
in space and time. Because it is impossible to refer to this information directly
from particular media elementary streams, object descriptors identify each object and
separate the scene description from the encoded objects. This allows the media data
to be changed without any amendment to the scene description.</p>
        <p>
          MPEG-4 standardized a scene description that supports complete freedom in
modifying itself through scene updates. Binary format for scenes (BIFS) [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], the scene
description language of MPEG-4, offers dynamic scene behavior and user interaction.
Because BIFS makes it possible not only to synchronize media resources but also to
represent interaction between users and e-Learning contents, the e-Learning MAF
employs BIFS to render media resources. BIFS is a binary format, so a compiler is
needed to translate the scene from a textual form to binary.
        </p>
        <p>[Fig. 5: Structure of the e-Learning MAF — ftyp and moov boxes, the MPEG-7 metadata packed as xml in a meta box, and an mdat box for the media data.]</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>4 Experiments</title>
      <p>In this paper, we presented a unified file structure for e-Learning. It allows an
e-Learning content to contain both media resources and metadata in one file. The
paper also describes metadata for e-Learning that enables users to consume e-Learning
contents efficiently.</p>
      <p>To verify the effectiveness and efficiency of the e-Learning MAF, we implemented
an encoding and decoding system for the e-Learning MAF; in particular, the decoding
system is realized on a mobile device. Fig. 6 illustrates the encoding and decoding
system. During the encoding process, the information about each media resource is
held in its own trak. The MPEG-7 e-Learning metadata is described in MPEG-7 MDS after
extracting metadata from the multimedia resources. The MPEG-7 e-Learning metadata
contains information about the media sources, the content itself and the content's
semantic information. Additional e-Learning metadata can also exist in the e-Learning
MAF, in the prepared space. The Scene Descriptor describes timing information about
when specific resources are to be rendered, and information about interaction with
users.</p>
      <p>The e-Learning MAF generated by the encoder can be consumed by the decoder.
The ISO base atom parser extracts the media data, MPEG-7 e-Learning metadata and
rendering data. It can also extract existing e-Learning metadata when the e-Learning
MAF contains it. After the MPEG-7 e-Learning metadata is extracted, the MPEG-7 parser
analyzes it; the analyzed information is employed when users request content-based
retrieval. The extracted media resources are rendered by the BIFS decoder through the
e-Learning MAF player.</p>
      <p>[Fig. 6: The encoding and decoding system — the main video content and additional media resources, e-Learning metadata for the media and video resources, and a Scene Descriptor are combined by the Encoder.]</p>
      <p>An e-Learning MAF file was created with MP4Box, offered by GPAC. The
e-Learning MAF file consists of one MP3 file, several JPEG images and MPEG-7
e-Learning metadata. Because BIFS is a binary format, the XMT-A format is used to
describe the scene description. While the audio resource plays, image 1 is displayed
for 10 seconds, image 2 is displayed for the next 6 seconds and image 3 is displayed
for the next 20 seconds. A meta box was generated at trak level, and the e-Learning
metadata, packed as xml, occupies it. The e-Learning MAF file created by the encoding
system was consumed through a modified Osmo4 player in a mobile environment. As shown
in Fig. 7, the player renders the MP3 and the JPEG images at the desired times. The
MPEG-7 e-Learning metadata supports not only efficient searching and acquisition of
e-Learning content but also content-based retrieval. Fig. 8 shows enhanced
functionalities, namely sequential access and theme-level access, using the MPEG-7
e-Learning metadata. Sequential access enables users to navigate to and consume a
specific sub-content in an e-Learning content, and users can also navigate to and
consume a specific theme they want. Whenever users consume the e-Learning MAF, they
can make effective use of this sequential access and theme-level access.</p>
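      <p>The image schedule used in this experiment reduces to a running sum of durations: each image starts when the previous one ends. A minimal sketch of that computation:</p>

```python
# Durations, in seconds, of the three JPEG images from the experiment.
durations = [10, 6, 20]

# Each image starts when the previous one ends: a running sum of durations.
starts, t = [], 0
for d in durations:
    starts.append(t)
    t += d

print(starts)  # [0, 10, 16]
print(t)       # 36  (total presentation time in seconds)
```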
      <p>[Fig. 8: a) Sequential Access; b) Theme-level Access.]</p>
    </sec>
    <sec id="sec-3">
      <title>5 Conclusion</title>
      <p>In this paper, we have proposed a framework of MAF for e-Learning and defined
MPEG-7 e-Learning metadata based on DC metadata and the MPEG-7 MDS. The proposed
e-Learning MAF facilitates the flexible, augmented use of e-Learning contents in
ubiquitous environments. Moreover, it guarantees interoperability with existing
e-Learning metadata, such as LOM, CanCore and GEM, by providing additional spaces to
contain them. The proposed MPEG-7 e-Learning metadata is located in meta within moov,
and existing e-Learning metadata can be stored independently in the additional
spaces; because of these spaces, the proposed e-Learning MAF guarantees
interoperability with other metadata for e-Learning. With the MPEG-7 e-Learning
metadata, users can search for and acquire e-Learning contents more efficiently and
effectively; it also enables users to consume the contents with content-based
retrieval, which can be described by the MPEG-7 MDS. Because MPEG-7 is a more
systematic and well-structured description than any other metadata for e-Learning,
describing the e-Learning metadata in MPEG-7 brings advantages in accessing and
decoding the elements. A prototype system to encode and decode the e-Learning MAF is
realized on a mobile device. Further studies on objective verification of the utility
of the MPEG-7 e-Learning metadata are needed.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>P.</given-names>
            <surname>Arapi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Moumoutzis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Christodoulakis</surname>
          </string-name>
          :
          <article-title>Supporting interoperability in an existing e-learning platform using SCORM</article-title>
          .
          <source>IEEE Int. Conference on Advanced Learning Technologies</source>
          . (
          <year>2003</year>
          )
          <fpage>388</fpage>
          -
          <lpage>389</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Shih</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>N.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hsuan-Pu</surname>
            <given-names>Chang</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuan-Hao</surname>
            <given-names>Huang</given-names>
          </string-name>
          :
          <article-title>Adaptive pocket SCORM reader</article-title>
          .
          <source>IEEE Int. Conference on Multimedia and Expo</source>
          . (
          <year>2004</year>
          )
          <fpage>27</fpage>
          -
          <lpage>30</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Simoes</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luis</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horta</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Enhancing the SCORM modelling scope</article-title>
          .
          <source>IEEE Int. Conference on Advanced Learning Technologies</source>
          , (
          <year>2004</year>
          )
          <fpage>880</fpage>
          -
          <lpage>881</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4. Learning Technology Standards Committee:
          <article-title>Draft Standard for Learning Object Metadata</article-title>
          , IEEE
          <volume>1484</volume>
          .
          <year>12</year>
          .
          <fpage>1</fpage>
          -
          <lpage>2002</lpage>
          . (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>N.</given-names>
            <surname>Friesen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>McGreal</surname>
          </string-name>
          :
          <article-title>CanCore: best practice for learning object metadata in ubiquitous computing environments</article-title>
          .
          <source>IEEE Int. Conference on Pervasive Computing and Communications Workshops</source>
          . (
          <year>2005</year>
          )
          <fpage>317</fpage>
          -
          <lpage>321</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>McGreal</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Implementing learning object metadata for mobile devices using CanCore</article-title>
          .
          <source>Int. Conference on Internet and Web Applications and Services</source>
          . (
          <year>2006</year>
          )
          <fpage>5</fpage>
          -
          <lpage>5</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. EdNA web-site, http://www.edna.edu.au
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Nancy</surname>
            <given-names>V.M.:</given-names>
          </string-name>
          <article-title>An Overview of Metadata for E-Learning, focusing on the Gateway to Educational Materials and activities of the Dublin Core Education Working Group</article-title>
          .
          <source>Symposium of Applications and the Internet Workshops</source>
          . (
          <year>2003</year>
          )
          <fpage>399</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9. ISO/TC 46:
          <article-title>ISO 15836 - The Dublin Core metadata element set 2003</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. ISO/IEC 14496-12: Information technology -
          <source>Coding of audiovisual object - Part 12: ISO Base Media File Format</source>
          . (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Manjunath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Salembier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Sikora</surname>
          </string-name>
          :
          <article-title>Introduction to MPEG-7</article-title>
          . Wiley. (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>J.</given-names>
            <surname>Signes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fisher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Eleftheriadis</surname>
          </string-name>
          :
          <article-title>MPEG-4's Binary Format for Scene Description</article-title>
          . Signal Processing: Image Communication,
          <source>Special issue on MPEG-4</source>
          , Vol.
          <volume>15</volume>
          . (
          <year>2000</year>
          )
          <fpage>321</fpage>
          -
          <lpage>345</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>