<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>RV-XoverKit: Mixed Reality Content Creation Toolkit to Connect Real and Virtual Spaces</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yumi Fukuda</string-name>
          <email>y-fukuda@rm2c.ise.ritsumei.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ayumu Shikishima</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Asako Kimura</string-name>
          <email>asa@rm2c.ise.ritsumei.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hideyuki Tamura</string-name>
          <email>HideyTamura@acm.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fumihisa Shibata</string-name>
          <email>fshibata@is.ritsumei.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>APMAR'22: Asia-Pacific Workshop on Mixed and Augmented Reality</institution>
          ,
          <addr-line>Dec. 02-03, 2022, Yokohama</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Graduate School of Information Science and Engineering, Ritsumeikan University</institution>
          ,
          <addr-line>1-1-1 Nojihigashi, Kusatsu, 525-8577</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Research Organization of Science and Technology, Ritsumeikan University</institution>
          ,
          <addr-line>1-1-1 Nojihigashi, Kusatsu, 525-8577</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We are attempting to systematize technologies used to transmit the dynamic phenomena of objects moving back and forth between real space and virtual space. We refer to this technology as R-V Crossover Rendition, and the toolkit that embodies this concept is called RV-XoverKit. This paper describes the design and implementation of RV-MessengerKit, which is one form of the RV-XoverKit toolkit. RV-MessengerKit is designed around LEGO® Mindstorms® machines and consists of sensors and actuators, the corresponding control units, and the corresponding Application Programming Interfaces. First, we classify the dynamic phenomena to be transmitted, and then we describe RV-MessengerKit in detail based on the classification results. In addition, we introduce several use cases to demonstrate the practical application of the proposed RV-MessengerKit. We also implemented RV-MessengerKit using two different methods in order to examine the difference in time delay between them; the methods differ in whether sensor values are processed within Mindstorms® or in Unity on a PC. We found that the time delay was smaller when processing in Unity than when processing within Mindstorms®.</p>
      </abstract>
      <kwd-group>
        <kwd>Mixed Reality</kwd>
        <kwd>R-V Crossover Rendition</kwd>
        <kwd>RV-XoverKit</kwd>
        <kwd>RV-MessengerKit</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Interest in mixed reality (MR) technology has
been increasing rapidly. With the virtual reality
(VR) boom that began several years ago,
augmented reality and MR, which are advanced
forms of VR, are attracting significant attention.
In line with this trend, the supply of various
low-cost HMDs and developer tools has increased.
Improvements in spatial positioning accuracy and
CG rendering capabilities in both real and virtual
spaces are driving the development of attractive
and practical use cases. In addition to the
improved quality of images superimposed on the
real world, real-time interaction with the virtual
world is also utilized effectively.</p>
      <p>
        Under these circumstances, it is expected that
the expressive power of MR content will be
enhanced further. However, to the best of our
knowledge, there have been virtually no attempts
to transmit the dynamic phenomena of objects
between real and virtual spaces. Here, transmitting
the dynamic phenomena of objects means to
transmit the movement of an object in the real
space, e.g., the motion of a ball rolling down a
slope, to the virtual space (or vice versa).
Superimposition of a moving virtual object on a
stationary real scene has been introduced into MR
demonstrations using a sword-shaped device with
an HMD [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and in MR attractions that use the
user’s hand movements as input [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. However, in
these applications, it was difficult to switch
seamlessly between moving real and virtual
objects. For systems that use shape displays in
three different ways to mediate interaction [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and
those that superimpose CG on a small tank robot
to embody a battlefield in virtual space [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], both
the real and virtual spaces are synchronized to
realize interactivity between the real and CG
objects. However, these examples have not
achieved seamless transitions between dynamic
objects. There is another system [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] in which real
and virtual dominoes are connected and interact
via a special tunnel; however, this is not really a
seamless connection between reality and
virtuality because it hides the connection where
reality and virtuality are interchanged. Though
drawing dynamically changing phenomena in a
virtual space and superimposing them onto the
real space is a natural application, it is rare to
represent dynamic objects that go back and forth
across the reality and virtuality boundary (R-V
boundary).
      </p>
      <p>
        Our research group refers to this concept as the
R-V Crossover Rendition, and our goal is to
systematize technology to realize R-V Crossover
Rendition. The origin of this research was the
production of DOMINO Toppling, which is an
MR attraction based on the theme of domino
toppling [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ][
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This work received high praise at
the ISMAR 2015 technical exhibition; however, it
was limited to domino toppling. It treated only
dominoes as the target objects moving back and
forth between the real and virtual spaces.
      </p>
      <p>The potential targets of R-V Crossover
Rendition are extremely wide. It can be deployed
in the entertainment and exhibition fields, as well
as urban planning and medical product design and
manufacturing. Thus, as the next step, we began
to generalize R-V Crossover Rendition and
organize its concepts and terminology. In addition,
we designed a toolkit that can be used by anyone
who creates MR content. This toolkit provides an
effective mechanism to transmit dynamic
phenomena from the real space to the virtual space
(or vice versa).</p>
      <p>The remainder of this paper is organized as
follows. In Section 2, we first organize the
concept of R-V Crossover Rendition and define
related terms. In Section 3, we describe the design
and implementation of RV-MessengerKit, which
is the toolkit to realize R-V Crossover Rendition,
and in Section 4, we introduce usage examples of
RV-MessengerKit and discuss a performance test.
Conclusions and suggestions for future work are
presented in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. R-V Crossover Rendition</title>
    </sec>
    <sec id="sec-3">
      <title>2.1. Overview</title>
      <p>There are two reasons why the DOMINO
Toppling attraction was well received, i.e.,
switching between the real and virtual dominoes
was seamless and not immediately apparent, and
the virtual dominoes behaved without physical
limitations. The flexibility in designing content in
virtual space is a great attraction for the
entertainment and education fields, and the
repetitive movement of the R-V boundary
enhanced the attractiveness of the system.</p>
      <p>Here, the target was limited to domino
toppling and was specialized to create a
mechanism to detect toppled dominoes and knock
down dominoes. In this system, a tactile switch
was used to detect whether a real domino tile had
been toppled over, and a solenoid actuator
underneath the real domino was used to initiate
domino toppling by applying force to the bottom
of the domino tile. Figure 1 shows the mechanism
to detect and actuate the domino toppling process.
The information to be transmitted in DOMINO
Toppling is limited to the domino tile toppling
phenomena.</p>
      <p>To generalize R-V Crossover Rendition, we
must consider the type of information that should
be transmitted between the real and virtual spaces
at the R-V boundary. In some cases, we want to
inherit the exact shape and dynamic state of the
moving objects when they are at the R-V
boundary. In other cases, we want to transmit the
information of a result that transforms the
dynamic phenomenon of an object to the other
space. Here, the transformation may be a
nonlinear transformation, a geometric
transformation, symbolization, or digitization.
The former can be considered information
transfer in similar form, and the latter can be
considered information transfer in non-similar
form.</p>
      <p>Thus, we classify R-V Crossover Rendition as
R-V Transition, which inherits the shape and
dynamic state at the R-V boundary, and R-V
Message Transmission, which converts and
transmits information at the R-V boundary. In
other words, R-V Crossover Rendition is the
universal set, and R-V Message Transmission is
the complementary set of R-V Transition. Note
that the goal of R-V Transition is to transmit the
dynamic phenomena of the object as accurately as
possible, and R-V Message Transmission makes
it possible to transform the object when crossing
the R-V boundary or create a different movement.</p>
      <p>The toolkits to realize these concepts are called
RV-XoverKit, RV-TransitionKit, and
RV-MessengerKit, respectively. The RV-XoverKit
toolkit is a collection of the other two. Figure 2
shows the relationship among these concepts and
terms. Each toolkit comprises hardware units with
sensors/actuators and control computers, as well
as software modules that operate the hardware
units.</p>
    </sec>
    <sec id="sec-4">
      <title>2.2. Target Field and Assumptions of Implementation</title>
      <p>In the previous section, we organized the R-V
Crossover Rendition concept and identified the
related toolkits. However, the target field is too
broad to design and implement RV-XoverKit.
Thus, in this study, we narrowed down the target
to chain reaction phenomena in the edutainment
field and examined concrete functional design and
practical implementation methods.</p>
      <p>
        Domino toppling is a typical example of a
chain reaction, and a more complex example is the
“Pythagorean devices” demonstrated in a
Japanese educational television program
(Pythagora Switch) that has been aired by NHK
since 2002. A series of tricks called the Rube
Goldberg Machine [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] also falls into this category.
      </p>
      <p>In the following, we consider the realization of
such chain reaction phenomena in the MR space
and describe the RV-XoverKit. Recall that the
target field is edutainment; thus, simplicity is
more important than strictness in information
transmission at the R-V boundary. Accordingly,
the goal is to design RV-MessengerKit, which is
one form of the RV-XoverKit.</p>
      <p>In this study, we attempted to realize the
RV-MessengerKit toolkit to create works that
interweave real and virtual spaces in the
edutainment field under the R-V Message
Transmission concept. For the functional design
and implementation of RV-MessengerKit, we
used LEGO® Mindstorms® because these
devices have been used extensively in the
education field, and they include a development
environment that is suitable for creating
content.</p>
      <p>As one work that interweaves real and
virtual spaces, it is conceivable to create a
Pythagora Switch in MR space, which we refer to
as the MR Pythagora Switch. In the following, we
describe the design of RV-MessengerKit, which
is equipped with the functions required to realize
the MR Pythagora Switch. While the Pythagorean
devices operate only in the real world and deal
only with chain reaction phenomena in fixed
scenarios, our MR Pythagora Switch enables us to
design attractions with branching scenarios.</p>
    </sec>
    <sec id="sec-5">
      <title>3. RV-MessengerKit</title>
    </sec>
    <sec id="sec-6">
      <title>3.1. Information Transmission Items</title>
      <p>Here, we describe the functional design of
RV-MessengerKit and present implementation
examples. As mentioned previously, the target is
narrowed down to chain reaction phenomena in
the edutainment field, and we design the
functions of RV-MessengerKit assuming the use
of LEGO® Mindstorms® for implementation.
The basic set of Mindstorms® includes several
types of sensors and actuators, which can be
combined to create robots and interactive systems,
and many examples that can be used with LEGO®
bricks are provided. In this work, we implemented
our toolkit using Mindstorms®; however, it can
also be implemented using other small computers,
such as an Arduino or a Raspberry Pi. In that
case, it is necessary to start by assembling the
circuits required to use the sensors and actuators.</p>
      <p>During the design process, we envisioned the
MR Pythagora Switch as a specific use case of the
RV-MessengerKit, which allows the user to
experience chain reaction phenomena. The types
of sensors and actuators that can be attached to
Mindstorms® are limited, and their accuracy is
relatively low; thus, RV-MessengerKit must be
designed according to the types and accuracy of
these devices.</p>
      <p>As R-V Message Transmission at the R-V
boundary, we consider detecting and transmitting
the physical state of the target object, e.g., fallen,
moved, or rotated. This makes it possible to play
the role of a switch or trigger that causes a chain
reaction in the other space. Tables 1–4 show the
functions realized in RV-MessengerKit.</p>
      <p>Table 1 summarizes the type of information
transmitted from the real space to the virtual space
at the R-V boundary. This information is referred
to as the RtoV Transmission Items. When
information is transmitted from real space to
virtual space, the phenomena in the real space are
detected using sensors. Table 1 also describes the
sensors used in Mindstorms®.</p>
      <p>There are seven types of RtoV Transmission
Items. “Tilted” describes the phenomenon in which
a real object tilts a virtual object, and “Pressed”
describes the phenomenon in which a real object
pushes a virtual object. Similarly, “Position” is
used to reflect the position of a real object in a
virtual object, and “Translation” is used to reflect
the distance moved by a real object in a virtual
object. “Rotation” is used to reflect the rotational
angle of a real object in a virtual object.
“Brightness” is used when the virtual side should
reflect an object’s state according to the ambient
brightness, and “Color” is used to transmit an
object’s color from the real side to the virtual side.</p>
      <p>Table 2 summarizes the type of information
transmitted from the virtual space to the real space
at the R-V boundary. This information is referred
to as the VtoR Transmission Items. When
information is transmitted from virtual space to
the real space, some action is performed on the
real space using actuators based on the state of the
virtual space. The only actuator that can be used in
Mindstorms® is a motor; thus, we are limited
to using motors.</p>
      <p>There are four types of VtoR Transmission
Items. “Tilted” is used to transmit the tilt of a
virtual object to the real space, and “Pressed” is
used to transmit the phenomenon in which a virtual
object pushes a real object. “Translation” is used to
transmit the distance a virtual object has traveled to
the real scene, and “Rotation” is used to transmit
the rotation of a virtual object to the real scene.</p>
      <p>Tables 3 and 4 show the parameters for the
RtoV Transmission Items and VtoR Transmission
Items, respectively. In the following, we describe
the usage details of each sensor and actuator.</p>
      <sec id="sec-6-1">
        <title>Touch Sensor</title>
        <p>Touch sensors are used for RtoV’s “Tilted”
and “Pressed.” The touch sensor attached to
Mindstorms® can only detect the binary state of
ON/OFF. Thus, RtoV’s “Tilted” does not have a
value specified by the content developer and only
conveys whether the target object is tilted/not
tilted. RtoV’s “Pressed” only conveys whether the
target object is pressed/not pressed.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Ultrasonic Sensor</title>
        <p>Ultrasonic sensors are used for RtoV’s
“Position” and “Translation.” The ultrasonic
sensor can detect an object at a distance of 3–252
cm on a straight line from the sensor position.
Here, the “Max distance” concept is used to
specify how far the ultrasonic sensor will detect.
In consideration of actual measurement accuracy,
content developers can specify a “Max distance”
between 5 and 250 cm. In addition, here the
“Number of divisions” is a value to determine
how many divisions are to be made between 5 and
250 cm specified by the content developer. When
using RtoV’s “Position,” it is necessary to specify
the “Number of divisions.” As shown in Figure 3,
the distance of the detected object is returned as a
value according to the specified “Number of
divisions.” Here, a value of zero is returned if no
object is detected. When using RtoV’s
“Translation,” it is also necessary to specify the
“Number of divisions.” As shown in Figure 4, the
moving distance of the object is returned as a
value according to the specified “Number of
divisions.” The returned “Translation” value takes
a positive or negative value. If the value detected
by the ultrasonic sensor is greater than the initial
position of the object, a positive value is returned.
In contrast, if the value of the ultrasonic sensor is
less than the initial position of the object, a
negative value is returned.</p>
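      <p>The “Number of divisions” mapping described above can be sketched in Java (the language the toolkit is programmed in via leJOS). This is a minimal illustration under our own assumptions about boundary handling and 1-based indexing; the method names are hypothetical, not the actual RV-MessengerKit API.</p>
```java
// Hypothetical sketch of the "Position" / "Translation" mapping; the names
// and the 1-based indexing are our assumptions, not the actual toolkit API.
class DivisionMapper {
    static final double MIN_CM = 5.0; // lower bound usable by content developers

    // Map a measured distance (cm) to a division index in [1, divisions];
    // return 0 when no object is detected within [MIN_CM, maxDistanceCm].
    static int positionValue(double distanceCm, double maxDistanceCm, int divisions) {
        if (Double.isNaN(distanceCm) || Double.isInfinite(distanceCm)
                || distanceCm < MIN_CM || distanceCm > maxDistanceCm) {
            return 0; // no object detected in the configured range
        }
        double step = (maxDistanceCm - MIN_CM) / divisions;
        int index = (int) ((distanceCm - MIN_CM) / step) + 1;
        return Math.min(index, divisions); // clamp the distance == max boundary case
    }

    // Signed "Translation" value: positive when the current reading is farther
    // than the initial position, negative when it is closer.
    static int translationValue(double initialCm, double currentCm,
                                double maxDistanceCm, int divisions) {
        return positionValue(currentCm, maxDistanceCm, divisions)
                - positionValue(initialCm, maxDistanceCm, divisions);
    }
}
```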
      </sec>
      <sec id="sec-6-3">
        <title>Motor (for sensing)</title>
        <p>A motor is originally an actuator; however, if
the target object can be fixed with an appropriate
attachment, the motor can also measure the
rotation angle; thus, the motor is used in RtoV’s
“Rotation” (Figure 5). Here, the “Number of
divisions” is a value that specifies how many
divisions of 360° are to be made. Note that there
are two types of RtoV’s “Rotation,” i.e., an
absolute value and a relative value. When
returning an absolute value, the value increases
as the motor rotates clockwise, and rotating
counterclockwise to a given angle returns the
same value as rotating clockwise to that angle
(Figure 6). As shown in Figure 7, when returning
a relative value, the difference between before and
after the rotation is returned. Here, the returned
value is positive if the motor rotates clockwise and
negative if the motor rotates counterclockwise.</p>
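      <p>The distinction between the absolute and relative “Rotation” values can be illustrated with a short Java sketch. This reflects our own reading of the description; the method names are hypothetical, and we assume the “Number of divisions” evenly divides 360°.</p>
```java
// Hypothetical illustration of RtoV "Rotation": absolute vs. relative values.
// Assumes `divisions` evenly divides 360; names are ours, not the toolkit's.
class RotationValues {
    // Absolute: division index of the current angle within one full turn, so
    // reaching the same angle clockwise or counterclockwise gives the same value.
    static int absoluteValue(int angleDeg, int divisions) {
        int normalized = ((angleDeg % 360) + 360) % 360; // map into [0, 360)
        return normalized / (360 / divisions);
    }

    // Relative: signed difference between before and after the rotation
    // (positive = clockwise, negative = counterclockwise).
    static int relativeValue(int beforeDeg, int afterDeg, int divisions) {
        return (afterDeg - beforeDeg) / (360 / divisions);
    }
}
```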
      </sec>
      <sec id="sec-6-4">
        <title>Color Sensor</title>
        <p>
          The color sensor of Mindstorms® can measure
the brightness of the ambient light and the color
of the observed object. Here, “Brightness” and
“Color” of RtoV use the color sensor. At startup,
RtoV’s “Brightness” acquires the initial value and
uses this value and the “Number of divisions” to
determine brightness changes. The lower range
between the minimum brightness value, i.e., zero,
and the initial value is divided by the “Number of
divisions.” The upper range between the initial
value and the maximum brightness value, i.e., one,
is also divided by the “Number of divisions.” The
current value is determined according to which of
the divided ranges the current ambient brightness
is located. RtoV’s “Brightness” comprises two
types, i.e., one that returns the difference between
the initial value and the current value, and another
that returns the difference between the most
recent value and the current value. The former,
which is referred to as “Brightness (Initial Diff.),”
returns the difference between the initial and
current brightness values (Figure 8). The latter,
which is referred to as “Brightness (Latest Diff.),”
returns the difference between the most recent
value and the current value (Figure 9). RtoV’s
“Color” returns a color name when a color is
recognized; otherwise, it returns colorless.
Based on the colors listed on the official LEGO®
brick purchase page [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], 32 colors can be
recognized by the color sensor.
        </p>
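      <p>One possible reading of the “Brightness” scheme is sketched below: the ranges below and above the initial reading are each split into “Number of divisions” parts, and a reading is reported as a signed offset from the initial value. This is purely our own interpretation for illustration, not the toolkit’s actual code.</p>
```java
// Hypothetical sketch of the "Brightness" division scheme: [0, initial] and
// [initial, 1] are each split into `divisions` equal ranges; a reading is
// reported as a signed offset from the initial value (0 = unchanged).
class BrightnessValue {
    static int index(double reading, double initial, int divisions) {
        if (reading == initial) return 0;
        if (reading < initial) {
            double step = initial / divisions; // width of each lower range
            return -(int) Math.ceil((initial - reading) / step);
        }
        double step = (1.0 - initial) / divisions; // width of each upper range
        return (int) Math.ceil((reading - initial) / step);
    }
}
```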
      </sec>
      <sec id="sec-6-5">
        <title>Motor (for actuating)</title>
        <p>Only motors can be used as actuators in
Mindstorms®; thus, the “Tilted,” “Pressed,”
“Translation,” and “Rotation” of VtoR must be
realized using motors. When a motor is used as an
actuator, it is essentially controlled by giving
“Rotation speed” and “Rotation direction”
parameters. Here, “Rotation speed” is given as
degrees per second. For “Rotation direction,” a
positive value is given to rotate clockwise, a
negative value is given to rotate counterclockwise,
and OFF means no rotation. In the case of “Tilted”
or “Pressed” of VtoR, rotary motion is converted
into linear motion using a mechanism, e.g., a
crank, and is connected to the motion of a real
object. In the case of VtoR’s “Rotation” only,
the motor can also be controlled by specifying two
values, “Rotation speed” and “Number of
divisions.” In this case, a circle is divided
according to the “Number of divisions,” and the
number of the divided range is given to rotate the
motor.</p>
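      <p>The two ways of driving an actuating motor described above can be summarized in a small Java sketch; the helper names are hypothetical, not the toolkit API, and we assume the “Number of divisions” evenly divides 360°.</p>
```java
// Hypothetical sketch of the VtoR motor parameters: a signed speed combines
// "Rotation speed" with "Rotation direction", and, for VtoR "Rotation" given a
// "Number of divisions", a division number maps to a target angle.
class MotorCommand {
    // Positive = clockwise, negative = counterclockwise, 0 = OFF.
    static int signedSpeed(int rotationSpeedDegPerSec, int direction) {
        return Integer.signum(direction) * rotationSpeedDegPerSec;
    }

    // Divide the circle into `divisions` equal sectors and return the angle
    // at the start of the requested sector.
    static int targetAngleDeg(int divisionNumber, int divisions) {
        return divisionNumber * (360 / divisions);
    }
}
```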
      </sec>
    </sec>
    <sec id="sec-7">
      <title>3.2. Implementation</title>
      <p>According to the specifications described in
the previous section, we decided to use the leJOS
firmware, which allows Mindstorms® to be
programmed in Java rather than with the standard
software.</p>
      <p>Mindstorms® has a variety of parts; thus, there
are multiple options for hardware implementation
of the Transmission Items described in Tables 1–4.
For example, Figure 10 shows the hardware
units used for RtoV’s “Pressed.” Here, Type A is
designed to allow pressing from a different
direction than that of the original touch sensor,
and Type B is designed to be attached to an
original device created by the content developer.</p>
      <p>Figure 11 shows the hardware units used for
VtoR’s “Rotation.” Here, Types A and B
correspond to the roll and pitch directions of
rotation, respectively. Note that the gear rotation
ratio can be adjusted by changing the gear
components. As a result, content developers can
use different hardware units to develop various
use cases. In addition, content developers can use
LEGO® bricks to expand their own hardware
units.</p>
      <p>We provide RV-MessengerKit users with
blueprints to facilitate the use of the hardware unit.
Each blueprint was created using the LEGO®
Digital Designer, which is free software that
allows users to assemble LEGO® bricks on a PC.
The LEGO® Digital Designer has a building
guide mode that allows the user to see how to
assemble a product in sequence; thus, it is easy to
construct the product according to the blueprints.</p>
      <p>We recommend using Unity to create MR
content with RV-XoverKit, including
RV-TransitionKit and RV-MessengerKit. The
software module is implemented as a program that
runs on Unity and processes the information sent
from a hardware unit over network
communication. Here, content developers use
RV-XoverKit via the Application Programming
Interfaces of the software module in Unity rather
than by directly using the programs written on
Mindstorms®.</p>
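      <p>The paper does not specify the wire format exchanged between the hardware units and Unity. Purely as a hypothetical illustration of the kind of message handling the software module performs, assume each hardware unit sends a text line such as “Pressed:1” or “Position:3”; both the format and the class below are our own invention.</p>
```java
// Purely hypothetical wire format ("Item:value" text lines) illustrating the
// software module's message handling; the real protocol is unspecified.
class TransmissionMessage {
    final String item; // e.g., "Pressed", "Position"
    final int value;   // the transmitted division index or binary state

    TransmissionMessage(String item, int value) {
        this.item = item;
        this.value = value;
    }

    static TransmissionMessage parse(String line) {
        int sep = line.indexOf(':');
        if (sep < 0) throw new IllegalArgumentException("malformed message: " + line);
        return new TransmissionMessage(line.substring(0, sep),
                Integer.parseInt(line.substring(sep + 1).trim()));
    }
}
```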
    </sec>
    <sec id="sec-8">
      <title>4. Use Cases</title>
    </sec>
    <sec id="sec-9">
      <title>4.1. Simple Examples</title>
      <p>In the following, we demonstrate four use case
examples of the RV-MessengerKit. Here, we used
an HTC VIVE Pro equipped with a Stereolabs
ZED Mini to realize the MR experience. HTC
VIVE Pro is a video see-through head-mounted
display (HMD), and the ZED Mini is a stereo
camera designed to be attached to an HMD.
(a) Type A (b) Type B
Figure 10: Hardware units used for RtoV’s Pressed</p>
      <p>(a) Type A (b) Type B
Figure 11: Hardware units used for VtoR’s Rotation
(A) Example of RtoV’s Pressed (Figure 12)
• Object in the real space: A blue marble
placed in the upper left corner of Figure
12(b).
• Object in the virtual space: A brown sphere
shown in the center of Figure 12(b).
• Phenomena before and after the
transmission: The real marble rolls down the
slope created with LEGO® bricks and
pushes a touch sensor attached to the end of
the slope. When the touch sensor is pressed,
the information is transmitted from the real
space to the virtual space, and the virtual
sphere begins to move. As a result, the real
marble appears to be flicking the virtual
sphere.
(B) Example of VtoR’s Pressed (Figure 13)
• Object in the real space: A blue marble
placed in the center of Figure 13(b).
• Object in the virtual space: A brown sphere
shown on the left of Figure 13(b).
• Phenomena before and after transmission:
When the virtual sphere rolling on the rail
touches a real marble, the information is
transmitted from the virtual space to the real
space, and the motor rotates to flip the real
marble. As a result, the virtual sphere
appears to push the real marble.
(C) Example of RtoV’s Rotation (Absolute)
(Figure 14)
• Object in the real space: A straight LEGO®
beam attached to a motor.
• Object in the virtual space: A turntable and
numbers displayed on the turntable.
• Number of divisions: four
• Phenomena before and after transmission:
When the real motor is rotated manually, the
information is transmitted from the real
space to the virtual space, and the division
range corresponding to the current angle of
the straight beam is detected and output as a
number. Based on the output number, a
virtual number is then displayed on the
virtual turntable.
(D) Example of VtoR’s Rotation (Figure 15)
• Object in real space: A straight LEGO®
beam attached to a motor.
• Object in virtual space: A turntable and
numbers displayed on the turntable.
• Number of divisions: four
• Phenomena before and after transmission:
The rotation direction of the motor changes
according to the numbers displayed on the
turntable. If a positive number is shown, the
straight beam rotates clockwise (and vice
versa). In other words, when a virtual
number is displayed, the straight beam
moves to the same division range as the
number displayed on the turntable.
(a) Reality only (b) Initial state (c) In-progress (d) Post-transmission
Figure 13: Example of VtoR’s Pressed
(a) Initial state (b) Division range 1 (c) Division range 12 (d) Division range 4
Figure 14: Example of RtoV’s Rotation (Absolute)
(a) Initial state (b) Division range 1 (c) Division range 4 (d) Division range -3
Figure 15: Example of VtoR’s Rotation</p>
      <p>
        In addition to these examples, we also
demonstrated MR Pythagora Switch [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. A demo
video of MR Pythagora Switch 3rd is available at
https://youtu.be/PRwYFJksrKo.
      </p>
      <p>This is a practical example of the “Pythagorean
devices” using the RV-XoverKit. Figure 16 shows
the overall view of MR Pythagora Switch taken
from the real space. In this figure, there are no
virtual objects; thus, the Pythagorean devices are
not connected to each other. Figure 17 shows MR
Pythagora Switch as seen by a person wearing an
HMD and experiencing the attraction. The figure
shows the Pythagorean devices running while
alternating between reality and virtuality.</p>
      <p>When we asked participants to use
RV-MessengerKit and surveyed their impressions
of its use, we received the following feedback.
Because the toolkit is implemented using
Mindstorms®, it was easy to use once the sensors
and actuators to be used were installed and the
parameters of the Transmission Items were
specified. Moreover, the “Number of divisions”
parameter allowed the range used for judgment to
be divided and used intuitively, and some
participants said this was easier than receiving the
raw sensor values. Regarding the connection
between the real and virtual spaces, participants
answered that, even with some delay, the toolkit
made it possible to create what appears to be a
smooth connection between the real and virtual
objects.</p>
    </sec>
    <sec id="sec-10">
      <title>4.2. Performance Test</title>
      <p>RV-XoverKit transmits information from the
real space to the virtual space (or vice versa) for
processing. As a result, the time delay in
information transmission may become a problem.
The current implementation of RV-MessengerKit
is intended for use with Unity; thus, there may be
a time delay due to the communication speed
before the sensor information reaches Unity from
the Mindstorms® or before the actuator’s operation
instructions from Unity reach the Mindstorms®.</p>
      <p>In the following, we describe the
implementation of RV-MessengerKit using two
methods in order to examine which method can
convey information with less time delay. The first
method is the Mindstorms’® internal processing
method, where the process of converting the acquired
values is performed within the Mindstorms®, and
only the converted values are sent to Unity. The
other method involves processing within Unity,
where all processing is performed on the Unity
side.</p>
      <p>The three Transmission Items to be examined
are VtoR’s “Tilted,” VtoR’s “Pressed,” and
VtoR’s “Translation.” We did not consider
RtoV’s Transmission Items because if we
attempted to measure the time delay, we would
need to measure the time from when the object
touched the sensor until the time at which the
virtual object began to move, and we decided that
it would be difficult to determine the start time.
Here, the measurement time is the time from the
moment a virtual object contacts a specific point
where it is assumed to have touched a real object
to the moment Unity receives a signal that the
actuator, i.e., a motor, has moved.</p>
      <p>The results are shown in Table 5. The results
represent the average times of 30 measurements
for each Transmission Item (rounded to two
decimal places). The results with the
Mindstorms’® internal processing method are
0.267 sec for VtoR’s “Tilted,” 0.291 sec for
VtoR’s “Pressed,” and 0.375 sec for VtoR’s
“Translation.” In contrast, the results with the
processing-within-Unity method are 0.193 sec for
VtoR’s “Tilted,” 0.183 sec for VtoR’s “Pressed,”
and 0.211 sec for VtoR’s “Translation.” As can be
seen, the time delay is smaller for processing
within Unity than for the Mindstorms’® internal
processing for all three Transmission Items. This
indicates that the information can be transmitted
faster when processing is performed in Unity.</p>
      <p>However, the time delay needs to be much
shorter. Because the frame rate of the ZED Mini
used is 60 fps (i.e., one frame lasts approximately
0.017 sec), it is ideally desirable to keep the delay
within 0.01 sec. This is the goal we need to aim
for in the future.</p>
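      <p>As a rough consistency check, the measured delays can be expressed in frames of the 60 fps camera feed. Using the Unity-side means from Table 5, every Transmission Item is more than ten frames late, which illustrates why a much shorter delay is needed.</p>
      <preformat>
```python
FRAME_RATE_HZ = 60
frame_time_s = 1.0 / FRAME_RATE_HZ   # ~0.0167 s: one frame of the ZED Mini feed

# Mean delays from Table 5 (seconds), processing-within-Unity method.
measured = {"Tilted": 0.193, "Pressed": 0.183, "Translation": 0.211}

# Each delay expressed as a number of camera frames.
frames_late = {item: delay / frame_time_s for item, delay in measured.items()}
```
      </preformat>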
    </sec>
    <sec id="sec-11">
      <title>5. Conclusion</title>
      <p>In ongoing research, we are investigating the
seamless transition of moving objects between
real and virtual spaces to expand the practical
applications of MR technologies and realize more
attractive MR-based information presentation
systems. We believe that systematization of
related technologies will lead to the development
of new applications of MR technology.</p>
      <p>Thus, in this paper, we organized the R-V
Crossover Rendition concept, which attempts to
systematize technology to transmit the dynamic
phenomena of objects in real space to virtual
space (and vice versa). To realize this concept, we
designed and implemented the RV-MessengerKit
toolkit using LEGO® Mindstorms®. In addition,
we introduced use cases of the RV-MessengerKit
toolkit and conducted a performance test to
evaluate the time delay of different processing
methods.</p>
      <p>In the future, we would like to extend the
proposed toolkit to include additional
Transmission Items for electricity and magnetism,
which were not implemented this time, and to
create tools to support the development of MR
content using the RV-XoverKit. We are also
considering adding a mechanism that generates
sound when a real object collides with a virtual
object (or vice versa), where content developers
can decide what kinds of sounds to generate.</p>
    </sec>
    <sec id="sec-12">
      <title>Acknowledgments</title>
      <p>We express our sincere gratitude to Mr.
Junya Ishida, an alumnus of the Graduate School
of Information Science and Engineering,
Ritsumeikan University, for his guidance and
cooperation in this research.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Inoue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kitamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nishino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ichikari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tenmoku</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ohshima</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Tamura</surname>
          </string-name>
          . Kaidan:
          <article-title>Japanese horror experience in interactive mixed reality space</article-title>
          ,
          <source>SIGGRAPH ASIA</source>
          <year>2009</year>
          ,
          Emerging Technologies, page
          <fpage>75</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Takemura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Haraguchi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ohta</surname>
          </string-name>
          .
          <article-title>BLADESHIPS - An interactive attraction in mixed reality -</article-title>
          ,
          <source>In Proceedings of SIGGRAPH</source>
          <year>2004</year>
          , Sketches, page
          <fpage>101</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Follmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Leithinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Olwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hogge</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Ishii</surname>
          </string-name>
          :
          <article-title>inFORM: Dynamic physical affordances and constraints</article-title>
          ,
          <source>Proc. 26th Symp. on User Interface Software and Technology (UIST 13)</source>
          , pp.
          <fpage>417</fpage>
          -
          <lpage>426</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kojima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sugimoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nakamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tomita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Nii</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Inami</surname>
          </string-name>
          :
          <article-title>Augmented coliseum: an augmented game environment with small vehicles</article-title>
          ,
          <source>First IEEE International Workshop on Horizontal Interactive Human-Computer Systems (TABLETOP '06)</source>
          ,
          <year>2006</year>
          , 6 pp., doi: 10.1109/TABLETOP.2006.3.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Leitner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Koeffel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Haller</surname>
          </string-name>
          :
          <article-title>Bridging the gap between real and virtual objects for tabletop games</article-title>
          . In Workshop at ISMAR,
          <year>2007</year>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Hirata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ishibashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Qie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Shibata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kimura</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Tamura</surname>
          </string-name>
          :
          <article-title>DOMINO (do mixed-reality non-stop) toppling</article-title>
          ,
          <source>Proc. 14th Int. Symp. on Mixed and Augmented Reality</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Hirata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ishibashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Qie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ikeda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Shibata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kimura</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Tamura</surname>
          </string-name>
          :
          <article-title>DOMINO Toppling: An MR attraction focusing on R-V continuum</article-title>
          ,
          <source>Transactions of the Virtual Reality Society of Japan</source>
          , Vol.
          <volume>21</volume>
          , No.
          <issue>3</issue>
          , pp.
          <fpage>463</fpage>
          -
          <lpage>472</lpage>
          ,
          <year>2016</year>
          (in Japanese).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] Rube Goldberg machine,
          https://en.wikipedia.org/wiki/Rube_Goldberg_machine
          (accessed November 4, 2022)
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] Pick a Brick,
          https://www.lego.com/en-us/page/static/pick-a-brick?query=&amp;page=1
          (accessed January 10, 2022)
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fukuda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shikishima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ishida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kimura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Tamura</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Shibata</surname>
          </string-name>
          :
          <article-title>The third generation MR Pythagora siblings with RV-XoverKit - An example of using a tool suitable for creating edutainment works -</article-title>
          ,
          <source>Proc. of the 26th Annual Conference of the Virtual Reality Society of Japan, 2D1-2</source>
          ,
          <year>2021</year>
          (in Japanese).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>