<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Realization of u-Contents: u-Realism, u-Mobility and u-Intelligence</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kiyoung Kim</string-name>
          <email>kkim@gist.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dongpyo Hong</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Youngho Lee</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Woontack Woo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>This research is funded by ETRI OCR and the CTI development project of KOCCA, MCT in Korea. All are with GIST U-VR Lab.</institution>
          ,
          <addr-line>500-712</addr-line>
          <country country="KR">S. Korea</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2007</year>
      </pub-date>
      <abstract>
        <p>Recent developments in Ubiquitous Computing and Augmented Reality technologies have brought many changes to existing applications. However, the contents used in these applications have not been revised conceptually. Thus, an expanded representation of the contents is required to adapt resources to new computing environments, and to reflect newly emerging features such as realism, mobility and intelligence. In this paper, we address a novel concept: Ubiquitous Computing enabled Contents (u-Contents). First, three key features, u-Realism, u-Mobility and u-Intelligence, are reviewed. Second, the realization issues are explained based on the properties of the three key features. Lastly, u-Contents are discussed from the viewpoint of possible applications. Index Terms: Ubiquitous Computing, Augmented Reality, Ubiquitous Virtual Reality, Contents, and Mobility.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        In a Ubiquitous Computing environment, computing resources
are distributed around users. The new environment enables users
to carry out tasks that would otherwise be impossible, as they
can in Virtual Reality (VR) space. The most attractive ability
users have in VR is that the contents in the space can be
transformed according to the users’ intentions and emotions.
Recently, various studies on Augmented Reality (AR) have shown
the possibility of realizing this ability of VR in the real
environment [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, the contents used in these applications have
not been revised conceptually. Thus, an expanded representation
of the contents is required to adapt resources to new computing
environments, and to reflect newly emerging features such as
realism, mobility and intelligence.
      </p>
      <p>
        In this paper, we address a noble concept: Ubiquitous
Computing enabled Contents (u-Contents). Unlike [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the
realization issues are focused. Firstly, three key features,
u-Realism, u-Mobility and u-Intelligence are reviewed and
revised. Secondly, the realization issues are explained based on
the properties of three key features with examples. Lastly,
u-Contents is discussed from the viewpoint of possible
applications.
u-Mobility
u-Contents
u-Mobility
u-Intelligence
u-Realism
(a) (b)
Figure 1. Representation of u-Contents and its three
properties: u-Realism, u-Mobility and u-Intelligence (a)
Venn diagram (b) 3D Space
y u-Realism is the property that contents are seamlessly
registered into physical space by reflecting contexts. Contents
with u-Realism provide realism suitable for users’ contexts
through multi-modal feedback based on users’ five senses.
y u-Mobility is the property that contents are selectively shared
among heterogeneous or homogeneous devices. Contents
with u-Mobility are able to freely move themselves among
devices such as large displays, PDA, and laptops.
y u-Intelligence is the property that contents respond
intelligently by themselves or user’s intention, attention, and
emotion. Contents with u-Intelligence act as alive agents so
that they provide more adaptive services to users.
      </p>
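      <p>As a rough illustration only (not from the paper), the three
properties can be thought of as independent flags on a content
item, matching the Venn-diagram view of Figure 1; the class and
field names below are hypothetical.</p>
      <preformat>
```python
from dataclasses import dataclass, field

# Hypothetical sketch: a u-Content item tagged with the three
# properties and the contexts it can react to. All names here are
# illustrative, not taken from the paper.
@dataclass
class UContent:
    name: str
    realism: bool = False       # u-Realism: registered into physical space
    mobility: bool = False      # u-Mobility: sharable across devices
    intelligence: bool = False  # u-Intelligence: reacts to user/environment
    contexts: dict = field(default_factory=dict)

    def degree(self) -> int:
        """Number of u-properties this content realizes (0 to 3)."""
        return sum([self.realism, self.mobility, self.intelligence])

flower = UContent("virtual_flower", realism=True, intelligence=True)
print(flower.degree())  # 2
```
      </preformat>
      <p>The <monospace>degree</monospace> helper mirrors the idea
that a content item may realize any subset of the three
properties, as in the Venn diagram of Figure 1.</p>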
    </sec>
    <sec id="sec-2">
      <title>III. REALIZATION ISSUES</title>
      <p>In this section, the details of the three key features of
u-Contents are discussed with concrete examples. Since the
contents used in AR applications are the most likely to be
accepted as future u-Contents, we present the examples with AR
contents.</p>
      <p>The main differences between conventional contents and
u-Contents arise from the contexts obtained from the physical
space. Most AR applications already use the camera context, but
its utilization has been limited to camera pose estimation. With
the help of context-awareness technologies, various fusions can
be generated to accelerate existing algorithms.</p>
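      <p>A minimal sketch of this fusion idea, with all names assumed
rather than taken from the paper: additional sensed contexts
simply extend the parameter set that the camera context already
provides to the AR pipeline.</p>
      <preformat>
```python
# Hedged sketch: fuse the usual camera context with extra sensed
# contexts into one parameter set for an AR pipeline (names assumed).
def fuse_contexts(camera_ctx, sensor_ctxs):
    """Merge sensor-derived contexts over the base camera context;
    later sources override earlier ones."""
    params = dict(camera_ctx)
    for ctx in sensor_ctxs:
        params.update(ctx)
    return params

base = {"pose": (0, 0, 0), "fov": 60}
fused = fuse_contexts(base, [{"light_dir": (0, -1, 0)}, {"temp_c": 21}])
print(sorted(fused))  # ['fov', 'light_dir', 'pose', 'temp_c']
```
      </preformat>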
      <p>Figure 2. u-Contents realization: (a) 3D contents and images
are converted to AR contents by applying camera contexts;
(b) u-Contents are obtained by using adequate user and
environment contexts.</p>
      <sec id="sec-2-1">
        <title>A. u-Realism</title>
        <p>The realism of contents is enhanced when all environmental
contexts are mapped to parameters of the mixed space. For
example, the virtual shadows of the virtual objects shown in
Figure 3 look realistic only when the real light sources are
correctly extracted from the environment. In existing AR
applications that use only camera sensors, generating this
mapping is challenging and limited due to a lack of
information.</p>
        <p>Figure 3. Realism enhanced by a virtual shadow: (a) flower
augmentation without a shadow (b) the shadow augmented based on
the light source.</p>
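        <p>To make the light-source mapping concrete, here is a small,
illustrative sketch (not the paper’s method): for a distant
light source described by azimuth and elevation contexts, the
shadow of a point can be placed by a simple ground-plane
projection.</p>
        <preformat>
```python
import math

# Illustrative sketch (not from the paper): project a virtual
# object's anchor point onto the ground plane along the direction
# of a sensed light source, to place an augmented shadow as in
# Figure 3. Parameter names are assumptions.
def shadow_offset(height, light_azimuth_deg, light_elevation_deg):
    """Horizontal displacement of the shadow of a point at `height`
    above the ground, for a distant light source."""
    elev = math.radians(light_elevation_deg)
    az = math.radians(light_azimuth_deg)
    d = height / math.tan(elev)          # shadow length on the ground
    return (d * math.cos(az), d * math.sin(az))

dx, dy = shadow_offset(1.0, 0.0, 45.0)
print(round(dx, 3), round(dy, 3))  # 1.0 0.0
```
        </preformat>
        <p>A light source at 45° elevation casts a shadow exactly as
long as the object is tall, which matches the intuition behind
Figure 3(b).</p>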
        <p>In u-Contents, from the viewpoint of implementation,
meaningful contexts are collected from hundreds of sensors
distributed in the real space. Then, the contexts are
transformed into the existing parameters of the AR space.
Table 1 shows context examples and how these contexts assist in
realizing u-Realism in applications. Note that all five senses
should be considered in the ideal case.</p>
      </sec>
      <sec id="sec-2-2">
        <title>B. u-Mobility</title>
        <p>u-Mobility relies on technologies that enable contents to
move freely among heterogeneous devices via the network in
predefined manners. In addition, contents with u-Mobility can
transform themselves or move to other devices according to the
user’s contexts. This property also includes transfers between
different spaces, such as from VR to AR. Thus, u-Mobility
adaptively yields a path for the contents. Table 2 shows
examples where the user’s contexts are required.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <table-wrap id="table2">
        <label>Table 2</label>
        <caption>
          <p>Examples of contexts required for u-Mobility</p>
        </caption>
        <table>
          <thead>
            <tr><th>Context source</th><th>Context example</th><th>Use in u-Mobility</th></tr>
          </thead>
          <tbody>
            <tr><td>User</td><td>User’s device specification</td><td>Levels of contents are converted automatically.</td></tr>
            <tr><td>Environment</td><td>Environmental resources</td><td>Network resources are selected automatically to share contents.</td></tr>
          </tbody>
        </table>
      </table-wrap>
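      <p>The automatic conversion of content levels driven by the
user’s device specification can be sketched as follows; the
level names and the device-specification format are assumptions
made for illustration only.</p>
      <preformat>
```python
# Hypothetical sketch of u-Mobility's automatic level-of-content
# conversion: pick a content level from the target device
# specification before the content "moves" there.
LEVELS = ["text", "image", "video", "3d"]  # poorest to richest

def convert_level(requested, device_spec):
    """Downgrade `requested` content level to what the device supports."""
    supported = [lvl for lvl in LEVELS if device_spec.get(lvl, False)]
    if requested in supported:
        return requested
    # fall back to the richest level the device can handle
    return supported[-1] if supported else "text"

pda = {"text": True, "image": True, "video": False, "3d": False}
print(convert_level("3d", pda))  # image
```
      </preformat>
      <p>So 3D contents arriving at a PDA that renders only text and
images would be delivered as an image, while a laptop supporting
every level would receive them unchanged.</p>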
      <sec id="sec-7-1">
        <title>C. u-Intelligence</title>
        <p>Research on intelligent contents started with the
questions ‘Which actions are regarded as proper conduct?’ and
‘How can a living subject be imitated?’. The main approach in
u-Intelligence is to put ‘Personality’, ‘Emotion’ and
‘Sociality’ into contents with the help of context-awareness.
Figure 4 shows low-level intelligence implemented with a
virtual robot character.</p>
        <p>Figure 4. Example of low-level intelligence: a 3D running
robot crashes into real blocks, showing a simple reaction to the
‘crash’ event.</p>
        <p>In u-Contents, u-Intelligence plays an important role in
generating reactions: contents act by themselves, driven by
self-motivations. However, abundant sensor data are essential
for the contents to decide their behaviors. To determine (or
generate) responses, not only the user’s context but also the
environmental context is combined and parameterized.</p>
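        <p>A minimal sketch, with hypothetical rule and context names,
of how such combined contexts could be mapped to a reaction like
the ‘crash’ response of Figure 4:</p>
        <preformat>
```python
# Illustrative sketch (not the paper's implementation) of
# u-Intelligence: an agent-like content combines user and
# environment contexts to pick a reaction to an event.
RULES = {
    ("crash", "running"): "fall_down",
    ("crash", "idle"): "look_around",
}

def react(event, user_context, env_context):
    """Choose a reaction from the event and the sensed contexts."""
    state = env_context.get("self_state", "idle")
    reaction = RULES.get((event, state), "ignore")
    # the user's context can modulate the reaction, e.g. exaggerate
    # the animation when the user's attention is high
    if user_context.get("attention") == "high":
        reaction = reaction + "_exaggerated"
    return reaction

print(react("crash", {"attention": "high"}, {"self_state": "running"}))
# fall_down_exaggerated
```
        </preformat>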
      </sec>
    </sec>
    <sec id="sec-8">
      <title>IV. DISCUSSION</title>
      <p>The proposed concept toward the realization of u-Contents
can be applied to various AR applications. In particular,
u-Mobility is deeply related to the design and development of
mobile AR services, and u-Realism can drastically reduce
computational burdens while enhancing reality in all domains.
In addition, u-Intelligence will provide a mechanism for
seamless interactions in AR applications.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Suh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Han</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Woo</surname>
          </string-name>
          ,
          <article-title>"Virtual Reality in Ubiquitous Computing Environment"</article-title>
          ,
          <source>International Symposium on Ubiquitous VR (ISUVR07)</source>
          , pp.
          <fpage>000</fpage>
          -
          <lpage>000</lpage>
          ,
          <year>2007</year>
          . (Submitted)
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>SJ.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Park</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Woo</surname>
          </string-name>
          ,
          <article-title>"u-Contents: New kinds of realistic contents in ubiquitous smart space"</article-title>
          ,
          <source>International Symposium on Ubiquitous VR (ISUVR06)</source>
          , Vol.
          <volume>191</volume>
          , pp.
          <fpage>13</fpage>
          -
          <lpage>16</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>