<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Cross-ratio Based Natural View Object Recognition for Mobile AR</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hyejin Kim</string-name>
          <email>hjinkim@gist.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sang-Goog Lee</string-name>
          <email>sg.lee@catholic.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Woontack Woo</string-name>
          <email>wwoo@gist.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>This research is supported by the Ubiquitous Computing and Network (UCN) Project, the Ministry of Information and Communication (MIC) 21st Century Frontier R&amp;D Program in Korea. Hyejin Kim is with the Gwangju Institute of Science and Technology</institution>
          ,
          <addr-line>Gwangju 500-712, S.</addr-line>
          <country country="KR">Korea (</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>[1] R. Hartley and A. Zisserman, Multiple view geometry in computer vision. Cambridge University Press</institution>
          ,
          <addr-line>2003</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2007</year>
      </pub-date>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>In the upcoming ubiquitous computing environment, the number of smart objects will increase, and various services will be hosted by these smart objects. It is therefore necessary to present the information and services of smart objects so that users can access them easily. To this end, vision-based AR (Augmented Reality) has played an important role in the visual communication between smart environments and users.</p>
      <p>In this paper, we propose a method for recognizing smart objects based on invariant cross-ratios for indoor mobile AR. Compared with fiducial-marker-based or local-feature-based methods, the proposed method supports natural view recognition and recognition at longer distances (2 m~3 m), and improves running speed, all without requiring markers.</p>
      <p>II. CROSS-RATIO BASED NATURAL VIEW OBJECT RECOGNITION</p>
    </sec>
    <sec id="sec-2">
      <title>A. The Overall Procedures</title>
      <p>Figure 1 shows the overall procedure of cross-ratio based natural view object recognition. It consists of input images, off-line steps, on-line steps, and the output information: the ID and other information of the recognized objects. As input images, in a home environment we can observe many rectangular objects such as TVs, windows, audio equipment, and shelves. For this reason, we redefine and use the rectangular shapes of objects as natural views.</p>
      <p>The individual steps function as follows.</p>
      <p>1) Detection: We extract quadrangular features, similarly to marker-based methods. A partial quadrangle can also be used, as long as three or four points are available for the cross-ratio calculation.</p>
      <p>2) Description: Every quadrangular object consists of top and bottom row lines and left and right column lines, so we make use of these straight lines extracted from the objects. We then check the vanishing point, i.e., the intersection of the row lines, to determine whether the view shows one wall or two walls. Next, we calculate cross-ratios from the intersections of the straight lines.</p>
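      <p>The line intersections used in the description step can be sketched in homogeneous coordinates, where two points define a line via the cross product and two lines meet at their cross product [1]. This is an illustrative sketch, not the authors' implementation; all point values are made up.</p>

```python
# Illustrative sketch: intersecting the extended row/column lines of a
# detected quadrangle in homogeneous coordinates. Two image points p, q
# define the line l = p x q (cross product); two lines l, m meet at the
# point x = l x m. Intersecting the two extended row lines yields the
# vanishing point checked in the description step.

def cross(a, b):
    """Cross product of two 3-vectors (homogeneous points/lines)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def line_through(p, q):
    # points given as (x, y); lift to homogeneous (x, y, 1)
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def intersect(l, m):
    x = cross(l, m)
    if abs(x[2]) < 1e-9:      # w ~ 0: intersection lies at infinity
        return None           # parallel row lines: no finite vanishing point
    return (x[0] / x[2], x[1] / x[2])

# Two converging "row" lines of a quadrangle, as seen on a slanted wall:
top = line_through((0.0, 0.0), (4.0, 1.0))
bottom = line_through((0.0, 3.0), (4.0, 2.0))
vp = intersect(top, bottom)   # finite vanishing point at (6.0, 1.5)
```

When the row lines are parallel in the image, the intersection has a zero homogeneous w-coordinate, which corresponds to the frontal (one-wall) view.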
      <p>3) Off-line step, saving to the database: We acquire the cross-ratio values for each selected object and save them, together with direction and name information, in a database.</p>
      <p>4) On-line step, matching: If the view shows a single wall, we calculate the cross-ratio values for each object exactly as in the description step. If the view shows two walls, we take the wall boundary as the basis and calculate the cross-ratio values between the objects on each wall.</p>
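      <p>Because the cross-ratio is viewpoint-invariant, the on-line matching step can be reduced to a nearest-value lookup against the off-line database. The sketch below is hypothetical: the record layout, object names, stored values, and tolerance are assumptions, not the authors' actual schema.</p>

```python
# Hypothetical sketch of the on-line matching step: the cross-ratio
# measured from a detected quadrangle is compared against the values
# stored off-line. All entries and the tolerance are illustrative.

# off-line DB: (cross-ratio value, object name, facing direction)
DB = [
    (1.333, "TV", "front"),
    (1.125, "window", "left"),
    (1.600, "shelf", "front"),
]

def match(measured, tol=0.05):
    """Return the DB record whose stored cross-ratio is closest to the
    measured value, provided it lies within the tolerance; else None."""
    best = min(DB, key=lambda rec: abs(rec[0] - measured))
    return best if abs(best[0] - measured) <= tol else None

hit = match(1.34)    # close to the stored "TV" entry
miss = match(2.0)    # no object within tolerance
```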
    </sec>
    <sec id="sec-3">
      <title>B. Cross-ratio</title>
      <p>The cross-ratio is a ratio of ratios of the lengths defined by a set of four collinear points, and it is invariant under projective transformations [1]. If four points x1, x2, x3, x4 are given in order, the cross-ratio is defined as in equation (1). If one of the points has a zero entry in its homogeneous coordinates, it lies at infinity, and the terms containing that point simply cancel. For instance, if the second point has a zero entry, then x23 = x24 = ∞; these terms cancel each other, and the cross-ratio reduces to equation (2).</p>
      <p>Cross( x1 , x2 , x3 , x4 ) = ( x13 x24 ) / ( x14 x23 ) (1)</p>
      <p>Cross( x1 , x2 , x3 , x4 ) = x13 / x14 (2)</p>
      <p>where xij denotes the distance between points xi and xj.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <label>1</label>
        <mixed-citation>R. Hartley and A. Zisserman, <source>Multiple View Geometry in Computer Vision</source>. Cambridge: Cambridge University Press, 2003.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>