=Paper=
{{Paper
|id=Vol-260/paper-13
|storemode=property
|title=Context-assisted Tracking for Dynamic Target Augmentation
|pdfUrl=https://ceur-ws.org/Vol-260/paper13.pdf
|volume=Vol-260
|dblpUrl=https://dblp.org/rec/conf/isuvr/ParkW07
}}
==Context-assisted Tracking for Dynamic Target Augmentation==
International Symposium on Ubiquitous VR 2007
Youngmin Park and Woontack Woo
I. INTRODUCTION

Context-awareness in computer vision is expected to improve computational efficiency by filtering out search targets based on context. The problem is to analyze which contexts are useful for a given problem and to determine how to exploit them.

Previous research has shown that spatial context can be used effectively. One well-known application uses the proximity of a moving camera to exhibits in a museum [1]. In that application, only the targets near the camera are considered during recognition. As a result, recognition is performed much faster and its accuracy is improved. Another application of spatial context is to re-use the same marker in different, separated spaces [2]. By differentiating identical markers placed in different spaces, markers can be re-used several times. This reduces the effort of generating numerous markers and prevents misrecognition arising from similar markers.

We propose a novel method that uses temporal context for vision-based tracking. Temporal context describes the state of a dynamically changing target and provides prior knowledge of the target's appearance. In this way, the temporal context works as a fine-grained filter applied after the spatial one. Possible applications include tracking and augmenting a video advertisement on a screen.

II. CONTEXT-ASSISTED TRACKING

A. Context Acquisition

We assume that the spatial context is obtained from user positioning systems and that the temporal context is generated by the computer embedded in the target. For example, the target object can display a video sequence and generate a timestamp at each frame. The contexts can be shared between the player and the tracker through a middleware for ubiquitous computing environments such as [3].

B. Description Hierarchy

The description hierarchy contains a spatial division, the target objects belonging to each space, and temporal information. A wide space and the target objects in it can be divided up and described as a tree hierarchy. At the lowest level of the hierarchy are the specific target objects. Temporal context is then placed in each target object to describe its state.

C. Tracking a Changing Target

For the implementation, we used a library that supports real-time detection of a textured surface with natural features [4]. It works in real time for a single static surface based on off-line training. However, it is not directly applicable to detecting a video sequence on a surface, since that would require matching against dozens or hundreds of frames, depending on the length of the video.

The spatial and temporal contexts are exploited as filtering criteria so that most matching candidates are excluded. First, based on the spatial context, only targets within a specific range are selected. The orientation difference between the user and the target is also applied, because the user is not likely to be looking at an object behind them. Then, the temporal context helps to estimate which video frame is displayed at each moment. Thus, the matching target is narrowed down to a few frames out of dozens or hundreds.

III. CONCLUSION

A novel method that uses temporal as well as spatial context for vision-based tracking has been presented. The proposed method is efficient when the target is not static but dynamically changes its appearance over time. Possible applications include tracking and augmenting a video advertisement on a screen.

This research is funded by ETRI OCR and the CTI development project of KOCCA, MCT in Korea.
Youngmin Park is with the Gwangju Institute of Science and Technology, Gwangju 500-712, S. Korea (corresponding author; phone: +82-62-970-3157; fax: +82-62-970-2204; e-mail: ypark@gist.ac.kr).
Woontack Woo is with the Gwangju Institute of Science and Technology, Gwangju 500-712, S. Korea (e-mail: wwoo@gist.ac.kr).

REFERENCES

[1] E. Bruns, B. Brombach, T. Zeidler, and O. Bimber, "Enabling Mobile Phones To Support Large-Scale Museum Guidance," IEEE Multimedia, 2007.
[2] M. Kalkusch, T. Lidy, M. Knapp, G. Reitmayr, H. Kaufmann, and D. Schmalstieg, "Structured Visual Markers for Indoor Pathfinding," in Proceedings of the First IEEE International Workshop on ARToolKit (ART02), 2002.
[3] Y. Oh and W. Woo, "How to build a Context-aware Architecture for Ubiquitous VR," in ISUVR 2007 (submitted).
[4] BazAR, http://cvlab.epfl.ch/software/bazar/index.php
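
The two-stage filtering described in Section II can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class and parameter names (`Target`, `max_dist`, `max_angle`, the frame `window`, the timestamp-to-frame mapping via a fixed frame rate) are all assumptions introduced here for clarity.

```python
import math

# Sketch of context-assisted candidate filtering (Section II).
# All names and thresholds below are illustrative assumptions.

class Target:
    def __init__(self, name, position, fps=30.0):
        self.name = name
        self.position = position  # (x, y) in the space's coordinates
        self.fps = fps            # frame rate of the video shown on the target

def spatial_filter(targets, user_pos, user_heading, max_dist=5.0, max_angle=90.0):
    """Spatial context: keep targets within range and roughly in front of the user."""
    kept = []
    for t in targets:
        dx = t.position[0] - user_pos[0]
        dy = t.position[1] - user_pos[1]
        if math.hypot(dx, dy) > max_dist:
            continue  # too far away: excluded by proximity
        bearing = math.degrees(math.atan2(dy, dx))
        diff = abs((bearing - user_heading + 180.0) % 360.0 - 180.0)
        if diff > max_angle:
            continue  # behind the user: unlikely to be looked at
        kept.append(t)
    return kept

def temporal_filter(target, playback_time, window=2):
    """Temporal context: map the shared timestamp to a small window of frames."""
    frame = int(playback_time * target.fps)
    return list(range(max(0, frame - window), frame + window + 1))
```

Given a target two meters ahead and another thirty meters away, `spatial_filter` keeps only the near one, and `temporal_filter` then restricts matching to a handful of frames around the timestamped playback position instead of the whole video.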