<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>HoloKeys - An Augmented Reality Application for Learning the Piano</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dominik Hackl</string-name>
          <email>dominikhackl@gmx.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christoph Anthes</string-name>
          <email>christoph.anthes@fh-hagenberg.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Applied Sciences, Upper Austria</institution>
          ,
<addr-line>4232 Hagenberg</addr-line>
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <fpage>140</fpage>
      <lpage>144</lpage>
      <abstract>
<p>This paper describes the design and the implementation approach of a piano training application. HoloKeys is an Augmented Reality tool capable of superimposing the keys to be played on a real piano. Musical pieces are loaded as MIDI files, interpreted, and can be displayed in two different ways. This prototype provides many possibilities for extension which can make it a powerful teaching tool.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        Augmented Reality (AR), described by Azuma as a
technology where the user sees ’the real world, with virtual objects
superimposed upon or composited with the real world’ [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], has
become a hot topic in recent years. The application areas
are widespread, ranging from simple advertisements
and virtual manuals to advanced training and sophisticated
remote collaboration scenarios. Using AR to teach musical
instruments has a long tradition in the field, but due to the
rapid development of AR Head-Mounted Displays (HMDs)
this application area has gained new attention.
      </p>
      <p>We present HoloKeys, a prototypical implementation of
an AR training tool for learning the piano. HoloKeys runs
on an HMD which the user wears while sitting in
front of a physical piano. The application indicates the notes
to be played by displaying virtual keys superimposed on the
physical keyboard, using two different approaches. By acquiring
the musical data dynamically through loading and processing
MIDI (Musical Instrument Digital Interface) files, the
application is fully agnostic with respect to the musical
pieces to be trained. To achieve the required precision of the
augmentations on the piano, the application was implemented
using fiducial marker tracking. Since this application is a
prototype, an extensive collection of possible enhancements
and future prospects is given.</p>
      <p>The remainder of this paper is structured as follows: The
next chapter provides an overview of related work in music
teaching applications. Chapter III introduces the conceptual
design of the application, describing the architecture and the
user interface. Implementation details are provided in Chapter
IV. Finally, conclusions are drawn and an outlook on future
work is given.</p>
    </sec>
    <sec id="sec-2">
      <title>II. RELATED WORK</title>
      <p>
        Music education has a long tradition in the field of AR.
In an early approach, Cheng and Robinson provided a visual
sheet music overlay displayed planar in the visual field of the
user. The display of the augmentations is triggered when the
user looks at the hands, and the type of sheet shown depends
on which hand is viewed. Unlike the approach presented in
this publication, the augmentation is not registered (meaning
it is not directly spatially interconnected) to a real object. An
HMD is used for display [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Cakmakci et al. augmented the
information on which string to pluck on a guitar, with the intention
of reducing cognitive discontinuities compared to the traditional
way of learning an instrument. They were the first to present
information on the interaction to be performed directly
on the instrument [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The registration of the guitar and the
virtual hand is implemented with the help of fiducial markers.
      </p>
      <p>
        In order to avoid the use of fiducial markers on the piano,
Huang et al. use their knowledge of the application domain
and track the keys of the piano for pose estimation with
the help of natural feature recognition [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Unfortunately they
provide no details on the display used, but the frame rate of 15
frames per second implies that it has not been developed for
a head-tracked system.
      </p>
      <p>
        Chow et al. focus on the educational level of AR piano
teaching showing that with the help of augmentations and
gamification components the motivation and interest in
learning the piano could be increased. They provided a system
illustrating the notes to be played by lines approaching the
keys. Their findings also indicate that notation literacy does not
increase using their system of illustration [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. We use a similar
approach for the augmentations of the notes to be played but
rely on an optical see-through HMD instead of a video-based
HMD.
      </p>
      <p>
        In contrast to this visualisation approach, Torres-Fernandez et
al. introduce a virtual character which illustrates how well the
piano player has performed. To interpret the played music they
compare the input from a MIDI keyboard with an initially
loaded MIDI file [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. A similar analysis was suggested and
implemented earlier by Barakonyi and Schmalstieg [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. They
make use of fiducials for tracking and a desktop AR system
equipped with a webcam and a traditional screen.
      </p>
      <p>
        In terms of visualisation Weing et al. demonstrate a system
in the area of Spatial Augmented Reality where they project
the keys to be pressed directly on the piano. Different modes
show for example the current and the next keys to be pressed.
If a wrong key is pressed it is highlighted in red to provide
feedback to the user [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        Zhang et al. use a completely virtual keyboard and track the
hand of the user with fiducial markers and the finger positions
with a self-developed data glove. Their approach targets the
rehabilitation of the motor function of stroke survivors rather
than teaching the piano [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Compared to these existing approaches, our
system is unique in terms of the display technology used.</p>
    </sec>
    <sec id="sec-3">
      <title>III. CONCEPTUAL DESIGN</title>
      <p>The following chapter gives an overview of the application’s hardware and software components and explains how the individual parts interact with each other.</p>
      <sec id="sec-3-1">
        <title>A. Architecture Overview</title>
        <p>The application’s setup is illustrated in Fig. 1 and consists
of the following two hardware components.</p>
        <p>1) The Piano: The core component is a physical piano
which is used for the actual playing. Underneath the piano
keyboard, which usually consists of 88 keys, a fiducial marker
is placed, which the application uses for tracking. The
keys of a regular piano are standardized in size, which makes
the application fully independent of the type of piano.
In case a keyboard is used, the key width can be adjusted.</p>
      </sec>
      <sec id="sec-3-2">
        <title>2) The Head-Mounted-Display</title>
        <p>The user sits in front of the piano and wears an HMD on which the application runs.
Through the HMD the user sees augmentations in the form of
highlighted keys on top of the real keyboard. The HMD also
handles tracking by recognizing the image marker with the
help of computer vision algorithms. The HMD therefore keeps
track of the player’s position and displays the augmentations
accordingly. Additionally, the HMD is responsible for sound
output of the music to be played. This gives the user an
impression on how the piece is supposed to sound and makes
it easier to play along with it.</p>
      </sec>
      <sec id="sec-3-3">
        <title>B. Interface</title>
        <p>
          In order to manage different settings and control the
playback, a simple user interface was implemented. The originally
two-dimensional UI is placed inside the 3D scene using
world-stabilized coordinates. Considering the usually static setup of
the application with the user sitting in front of the piano, the
world-stabilized menu is a reasonable approach [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. User
input works through gaze-based interaction combined with
gestures.
        </p>
        <p>1) The Main Menu: The initial scene of the application is
the main menu. There the user can select the musical piece to
play as well as the desired playback speed. By pressing the
start button the application will switch to playback mode and
begin visualizing and playing the musical piece.</p>
        <p>2) Playback Mode: In playback mode the user sees the
augmentations of the keys to be played superimposing the
physical keyboard. Additionally a timeline shows the current
playback position and gives the user the option to jump to
different positions inside the piece. With the pause button the
user is able to interrupt the playback or return to the main
menu.</p>
        <p>3) Calibration Mode: In calibration mode the application
displays an augmentation of only one key, the middle C. The
user can adjust the position of the marker until the virtual key
perfectly fits the real one. This is useful for setting up the optimal
position of the marker on the piano. Additionally, the user can
also adjust the pitch of the virtual piano sound in calibration
mode, because it does not necessarily match the real
piano. Playback volume can be adjusted in the HMD.</p>
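The pitch adjustment described above amounts to an equal-tempered transposition: shifting by n semitones scales every frequency by 2^(n/12). A minimal illustrative sketch in Python (the application itself is written for Unity; the function name is ours):

```python
# Illustrative sketch (not the paper's implementation): pitch
# adjustment as an equal-tempered transposition in semitones.

def transposed_frequency(base_hz: float, semitones: float) -> float:
    """Scale a base frequency by 2^(n/12) to shift it by n semitones."""
    return base_hz * 2.0 ** (semitones / 12.0)

# Example: middle C is ~261.63 Hz; one octave (12 semitones) doubles it.
print(transposed_frequency(261.63, 12))  # 523.26
```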
      </sec>
      <sec id="sec-3-4">
        <title>C. Display of Augmentations</title>
        <p>Generally the HMD displays an augmentation of a bright
green key to indicate that the actual key on that position has
to be pressed. Two different approaches as seen in Fig. 2
were tested and both have their advantages and disadvantages
concerning predictability and Field Of View (FOV) limitations.</p>
        <p>1) The Instant Approach: The moment a key is supposed
to be pressed it becomes highlighted. Once it is supposed to
be released it switches back to normal. This way the user can
more or less observe the playing of the piece in real time,
comparable to watching the fingers of an actual pianist. While
this approach can be useful for advanced players, it is hardly
possible to learn a new piece or even to play along with it,
because the player has no way of predicting the next notes.
Still, the effect is visually appealing and could be used for showcase
purposes (a self-playing piano), where the limited FOV is also less
of a problem.</p>
        <p>
          2) The Beatmania Approach: Note objects are created far in
the distance and from there start moving towards the particular
keys. As soon as the virtual object reaches the real key, the
note should be played. With this approach, which became
popular with the game ’Beatmania’ [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] and is still used in
many music rhythm games today, the user can anticipate the
upcoming notes and prepare accordingly. When learning a
piano piece the musician’s brain utilizes its ’muscle memory’
and fine motor skills rather than memorizing each individual
note [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Therefore learning a piece with the Beatmania
approach should be as efficient as learning it from sheet
music, especially for beginners.
        </p>
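The animation logic described above can be reduced to a single relation: a note object's distance from its key is the remaining time until the note's onset multiplied by the approach speed. A hedged, language-neutral sketch (the names are ours, not the application's):

```python
# Hypothetical sketch of the Beatmania-style animation: each note object
# moves toward its key so that it arrives exactly at the note's onset.

def note_distance(onset_time: float, now: float, speed: float) -> float:
    """Distance of the virtual note object from its key (0 = play now).

    Notes whose onset has already passed yield a negative value and
    would be hidden or released by the rendering code.
    """
    return (onset_time - now) * speed

# A note due in 2 s, approaching at 0.5 units/s, floats 1.0 units away.
print(note_distance(onset_time=10.0, now=8.0, speed=0.5))  # 1.0
```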
      </sec>
    </sec>
    <sec id="sec-4">
      <title>IV. IMPLEMENTATION</title>
      <p>This chapter goes into detail regarding the concrete
implementation of HoloKeys. It starts with a brief overview of
the hardware and software tools used, followed by an in-depth
description of the two main development tasks, visualization
and MIDI processing.</p>
      <sec id="sec-4-1">
        <title>A. Used Technologies</title>
        <p>The application was developed for tablet devices as well
as the HoloLens. The tablet approach is mainly used for
demonstration purposes, rather than actual training.</p>
      </sec>
      <sec id="sec-4-2">
        <title>1) Hardware:</title>
        <p>• HoloLens1</p>
        <p>The HoloLens, as a current AR HMD, provides good
sensory support as well as spatial audio and stereoscopic
display capabilities. Its main disadvantage, the limited
FOV, poses an issue for the applicability in this use case.</p>
        <p>2) Software: To allow cross-platform and cross-device
development the following set of tools and libraries was used.</p>
        <p>• Unity2</p>
        <p>
          Unity is traditionally a game engine which has found
wide adoption in the whole domain of Mixed Reality
[
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. It allows scene setup and provides scripting
capabilities. The applications developed with Unity can easily be
deployed on a multitude of target platforms including iOS
and Android devices as well as UWP (Universal Windows
Platform) devices.
• Vuforia3
        </p>
        <p>The Augmented Reality part of the project is based on
Vuforia, an AR tracking library which integrates seamlessly
with Unity. Vuforia supports several different tracking
methods ranging from recognizing plain images to
complex objects. With a specific setup, Vuforia can also be
used on the HoloLens.</p>
        <p>1https://www.microsoft.com/en-us/hololens 2https://unity3d.com/ 3https://www.vuforia.com/</p>
        <p>• C# Synth Project and MIDI Support4</p>
        <p>The C# Synth Project is an open-source library which is
used for processing MIDI data and synthesizing it to
audio data. MIDI is an industry standard for the interconnection
of musical instruments and digital devices. Its file
format represents musical information such as note values,
volume and tempo. Although MIDI is a complex format,
it is still the most popular and commonly used format to
store musical data. For piano pieces the format is usually
sufficient, because only one channel is required to store a
series of notes and tempo changes.</p>
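For the purposes described here, a piano piece boils down to a sequence of timed note-on/note-off events on a single channel. A minimal illustrative sketch (the names are ours, not the C# Synth Project API):

```python
# Illustrative sketch: a MIDI piano track reduced to timed
# note-on/note-off events, after the tempo map has been applied.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    time: float   # onset in seconds
    note: int     # MIDI note number, 21 (A0) .. 108 (C8) on a piano
    on: bool      # True = key press, False = key release

piece = [
    NoteEvent(0.0, 60, True),   # middle C down
    NoteEvent(0.5, 60, False),  # middle C up
    NoteEvent(0.5, 64, True),   # E4 down
]

# Keys held at t = 0.25 s: switched on but not yet off by then.
held = {e.note for e in piece if e.on and e.time <= 0.25} - \
       {e.note for e in piece if not e.on and e.time <= 0.25}
print(sorted(held))  # [60]
```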
      </sec>
      <sec id="sec-4-3">
        <title>B. Visualization and Tracking</title>
        <p>The application’s visuals consist of a Unity 3D scene which
renders the virtual keys, combined with Vuforia’s tracking
abilities to provide the information on where to render the
keys.</p>
        <p>1) Vuforia’s image target: For this application, tracking via
a fiducial marker and image target was used. The image target
in Unity is a planar object in 3D space which is associated
with a set of 2D images. These images represent the markers
that are placed somewhere in the real world. Once the camera
recognizes a marker, the application can derive the position
of the HMD and can therefore place all augmented objects
accordingly.</p>
        <p>2) Tracking setup: Marker images and other tracking
settings can be configured in Vuforia’s web interface. This
configuration, with all related assets, is compiled into a Unity
package that can then be imported into Unity. In Unity,
two components of Vuforia, ARCamera and ImageTarget, are
used. Subordinate objects of the ImageTarget are affected
by the marker-related projection.</p>
        <p>3) Generating the keyboard: In order to display the
currently played keys, first an entire virtual keyboard is displayed
half-transparently, superimposed on the real one. A script takes
care of automatically generating all 88 key objects. One base
key object is placed in the scene and aligned at around 90
degrees relative to the ImageTarget. This registration has to
match the real-world relation between the marker and the piano
keyboard. All other keys are then generated as duplicates of the
base object with their respective offset and color (black or white).</p>
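The generation step can be sketched as follows: the 88 keys span MIDI numbers 21 (A0) to 108 (C8), each key's color follows the octave pattern, and its offset derives from the number of white keys before it. This is an illustrative Python sketch under our own assumptions (key width, coordinate convention); the actual script duplicates a Unity base object instead.

```python
# Hedged sketch of the key-generation step described above.
WHITE_KEY_WIDTH = 0.0235  # metres; assumed standard width ~23.5 mm
BLACK_PITCH_CLASSES = {1, 3, 6, 8, 10}  # C#, D#, F#, G#, A#

def build_keyboard():
    keys, white_count = [], 0
    for note in range(21, 109):  # the 88 keys of a full keyboard
        is_black = note % 12 in BLACK_PITCH_CLASSES
        if is_black:
            # Black keys sit between the surrounding white keys.
            x = (white_count - 0.5) * WHITE_KEY_WIDTH
        else:
            x = white_count * WHITE_KEY_WIDTH
            white_count += 1
        keys.append((note, "black" if is_black else "white", x))
    return keys

keys = build_keyboard()
print(len(keys))                                   # 88
print(sum(1 for _, c, _ in keys if c == "white"))  # 52
```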
      </sec>
      <sec id="sec-4-4">
        <title>C. Audio and MIDI Playback</title>
        <p>The two core components of the C# Synth Project library
are the MidiSequencer which handles loading and processing
MIDI data and the MidiStreamSynthesizer which handles the
actual audio playback.</p>
        <p>4https://csharpsynthproject.codeplex.com/</p>
        <p>1) Handling key actions: During playback the
MidiSequencer fires two events that are relevant for this
application: MidiNoteOn and MidiNoteOff. These two events are
fired when the playback of a note is triggered or
terminated, respectively, and therefore indicate exactly when a key
is pressed and released. In the implementations of these two
event handlers, the MIDI code of the affected note is passed
as a parameter. The only operation required is to map this MIDI code
to the corresponding key object and set its material color to either
green (in NoteOn) or the default color (in NoteOff).</p>
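The two handlers can be sketched as follows, with the Unity material swap reduced to a dictionary of colors. This is an illustrative sketch, not the application's C# code; all names are ours.

```python
# Sketch of the MidiNoteOn/MidiNoteOff handlers described above:
# map the MIDI code to a key and swap its (stand-in) material color.
DEFAULT, GREEN = "default", "green"
key_colors = {note: DEFAULT for note in range(21, 109)}  # 88 keys

def on_midi_note_on(midi_code: int) -> None:
    # Highlight the key object mapped to this MIDI code.
    if midi_code in key_colors:
        key_colors[midi_code] = GREEN

def on_midi_note_off(midi_code: int) -> None:
    # Restore the key's default material color.
    if midi_code in key_colors:
        key_colors[midi_code] = DEFAULT

on_midi_note_on(60)
print(key_colors[60])   # green
on_midi_note_off(60)
print(key_colors[60])   # default
```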
      </sec>
      <sec id="sec-4-5">
        <title>2) Combining the audio sources</title>
        <p>The MidiStreamSynthesizer creates actual audio data based on the sequencer’s input.
To make sure that this audio data is actually redirected to
Unity’s audio source, the special method OnAudioFilterRead
has to be implemented. This method supports direct writing
into the audio buffer and therefore allows redirecting the contents of the
StreamSynthesizer to Unity’s audio source.</p>
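The OnAudioFilterRead idea can be illustrated language-neutrally: the audio callback hands over a buffer, and the handler overwrites it with samples pulled from the synthesizer. The sine generator below is a stand-in for the MidiStreamSynthesizer, not its API.

```python
# Illustrative sketch of an OnAudioFilterRead-style callback.
import math

def sine_synth(freq_hz: float, sample_rate: int):
    """Endless stream of mono samples, like a running synthesizer."""
    n = 0
    while True:
        yield math.sin(2.0 * math.pi * freq_hz * n / sample_rate)
        n += 1

def on_audio_filter_read(buffer: list, synth) -> None:
    # Write synthesizer output directly into the engine-provided buffer.
    for i in range(len(buffer)):
        buffer[i] = next(synth)

synth = sine_synth(440.0, 44100)
block = [0.0] * 256
on_audio_filter_read(block, synth)
print(block[0])  # 0.0 (the sine starts at phase zero)
```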
      </sec>
    </sec>
    <sec id="sec-5">
      <title>V. CONCLUSION</title>
      <p>As a prototype the application serves its purpose well, but due to the
limited FOV, which will most likely increase over the next years
with following generations of AR hardware, its real-world
usability remains in doubt. Furthermore, an evaluation of the
different augmentation methods would be useful. Especially
when trying out a few more possible approaches, a user test
could determine which of the methods are most likely to work
in a real-world scenario. A more in-depth study of musical
augmentation methods would also be useful for teaching other
instruments or even in completely different areas of music.</p>
      <sec id="sec-5-1">
        <title>A. Future Work - The Virtual Piano Teacher</title>
        <p>A long-term vision could be the creation of a full-featured
virtual piano teacher using AR. Especially early-stage piano
learning contains many tasks that could be implemented with
AR technologies like the one explained in this paper combined
with gamification elements.</p>
      </sec>
      <sec id="sec-5-2">
        <title>1) Use Cases:</title>
        <p>• Learning notes and the piano keyboard</p>
        <p>Simple exercises or games to recognize the note names
and match them with the proper keys could greatly increase the
early-stage learning rate. For beginners, the note names
could be augmented on top of every key until they
become familiar with them.</p>
        <p>• Learning easy to intermediate musical pieces</p>
        <p>Especially for smaller pieces the AR learning approach
could surpass traditional learning from sheet music.
Beginners who are not yet used to reading music would still
be able to learn pieces quickly on their own. Additionally,
a lot more useful information such as fingering, expression
and dynamics could be displayed during playback.</p>
        <p>• Technical exercises</p>
        <p>The importance of regular technical exercises for piano
students is huge but generally underestimated and
disliked. With the introduction of AR and gamification, a
whole lot of enjoyable and still pianistically valuable
exercises could be realized. By adding some sort of level
system, the student would be even more aware of his
progress and more likely to remain motivated.
• Dictionary of chords, scales etc.</p>
        <p>A very useful utility, not only for beginners but also
for advanced pianists, would be a piano dictionary. The
player could look up all possible chords and scales and
see them highlighted right on top of
the keyboard. Especially for jazz piano, where complex
chords and scales are common, this technology would be
of great service.</p>
      </sec>
      <sec id="sec-5-3">
        <title>2) Further Improvements:</title>
        <p>• Using music sheets as markers</p>
        <p>The use of music sheets, perhaps in the form of a special
music book, as fiducial markers could eliminate the need
for additional markers placed on the piano. This could not
only automatically identify the musical piece to be played
but also indicate when to turn the pages or even highlight
musical attributes on the sheets.</p>
        <p>• Checking the learning performance</p>
        <p>
          Real-time feedback of the user’s playing could greatly
contribute to the learning experience. This could be
achieved on the one hand by using MIDI keyboards
to directly receive the MIDI input of pressed keys or
on the other hand by recording and deconstructing the
audio data. The first approach would be technologically
straightforward but would limit the application to
electronic keyboard instruments, while the second approach
would be more flexible but complicated to implement and
perhaps inaccurate [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
        </p>
        <p>The possibilities of the virtual piano teacher are enormous
but all are based on the core concept of the technique explained
in this paper. As soon as there are improvements in AR
hardware, especially concerning FOV, virtual piano teachers
can be implemented and actually start to become a helpful
tool.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R. T.</given-names>
            <surname>Azuma</surname>
          </string-name>
          ,
          <article-title>“A survey of augmented reality,” Presence: Teleoperators and Virtual Environments</article-title>
          , vol.
          <volume>6</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>355</fpage>
          -
          <lpage>385</lpage>
          ,
          <year>August 1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] L.-T. Cheng and J. Robinson, “
          <article-title>Personal contextual awareness through visual focus,” IEEE Intelligent Systems</article-title>
          , vol.
          <volume>16</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>16</fpage>
          -
          <lpage>20</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>O.</given-names>
            <surname>Cakmakci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bérard</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Coutaz</surname>
          </string-name>
          , “
          <article-title>An augmented reality based learning assistant for electric bass guitar</article-title>
          ,
          <source>” in 10th International Conference on Human-Computer Interaction</source>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>F.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Du</surname>
          </string-name>
          , “
          <article-title>PianoAR: A markerless augmented reality based piano teaching system</article-title>
          ,
          <source>” in Third International Conference on Intelligent Human-Machine Systems and Cybernetics</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Amor</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Wünsche</surname>
          </string-name>
          , “
          <article-title>Music education using augmented reality with a head mounted display,” in Fourteenth Australasian User Interface Conference (AUIC2013)</article-title>
          . Melbourne, Australia: ACM, Jan.
          <year>2013</year>
          , pp.
          <fpage>73</fpage>
          -
          <lpage>79</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C. A. T.</given-names>
            <surname>Fernandez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Paliyawan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C. C.</given-names>
            <surname>Yin</surname>
          </string-name>
          , “
          <article-title>Piano learning application with feedback provided by an ar virtual character</article-title>
          ,
          <source>” in 5th Global Conference on Consumer Electronics. Kyoto</source>
          , Japan: IEEE, Oct.
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>I.</given-names>
            <surname>Barakonyi</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Schmalstieg</surname>
          </string-name>
          ,
          <article-title>“Augmented reality agents in the development pipeline of computer entertainment,” in 4th International Conference on Entertainment Computing (ICEC'05)</article-title>
          . Sanda, Japan: Springer, Sep.
          <year>2005</year>
          , pp.
          <fpage>345</fpage>
          -
          <lpage>356</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Weing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Röhlig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rogers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gugenheimer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Schaub</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Könings</surname>
          </string-name>
          , E. Rukzio, and
          <string-name>
            <given-names>M.</given-names>
            <surname>Weber</surname>
          </string-name>
          , “P.I.A.N.O.:
          <article-title>Enhancing instrument learning via interactive projected augmentation,” in Conference on Pervasive and ubiquitous computing adjunct publication (UbiComp13)</article-title>
          . Zurich, Switzerland: ACM, Sep.
          <year>2013</year>
          , pp.
          <fpage>75</fpage>
          -
          <lpage>78</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ong</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Nee</surname>
          </string-name>
          , “
          <article-title>An affordable augmented reality based rehabilitation system for hand motions</article-title>
          ,
          <source>” in International Conference on Cyberworlds (CW '10)</source>
          . Singapore, Singapore: IEEE, Oct.
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Billinghurst</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Kato</surname>
          </string-name>
          , “
          <article-title>Collaborative mixed reality</article-title>
          ,” in
          <source>International Symposium on Mixed Reality (ISMR '99)</source>
          . Springer,
          <year>1999</year>
          , pp.
          <fpage>261</fpage>
          -
          <lpage>284</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Steinberg</surname>
          </string-name>
          , Music Games Rock.
          <source>P3: Power Play Publishing</source>
          ,
          <year>2011</year>
          . [Online]. Available: http://www.musicgamesrock.com/
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Shusterman</surname>
          </string-name>
          , “
          <article-title>Muscle memory and the somaesthetic pathologies of everyday life,” Human Movement</article-title>
          , vol.
          <volume>12</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>4</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Milgram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Takemura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Utsumi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Kishino</surname>
          </string-name>
          ,
          <article-title>“Augmented reality: A class of displays on the reality-virtuality continuum,” Presence: Telemanipulator and Telepresence Technologies</article-title>
          , vol.
          <volume>2351</volume>
          , pp.
          <fpage>282</fpage>
          -
          <lpage>292</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dixon</surname>
          </string-name>
          , “
          <article-title>On the computer recognition of solo piano music</article-title>
          ,” in Proceedings of Australasian computer music conference,
          <year>2000</year>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>