<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Describing Movements for Motion Gestures</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bashar Altakrouri</string-name>
          <email>altakrouri@itm.uni-luebeck.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andreas Schrader</string-name>
          <email>schrader@itm.uni-luebeck.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ambient Computing Group, Institute of Telematics</institution>
          ,
          <institution>University of Luebeck</institution>
          ,
          <addr-line>Luebeck</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <kwd-group kwd-group-type="author">
        <title>Author Keywords</title>
        <kwd>Natural User Interfaces (NUI)</kwd>
        <kwd>Gesture Interfaces</kwd>
        <kwd>Motion Interfaces</kwd>
        <kwd>HCI modeling</kwd>
        <kwd>HCI documentation</kwd>
        <kwd>Description Languages</kwd>
      </kwd-group>
      <abstract>
        <p>Gestural interactions will continue to proliferate, enabling a wide range of possibilities for interacting with mobile, pervasive, and ubiquitous environments. Motion gestures in particular are attracting increasing attention among researchers, and broad adoption of motion gestures is likewise noticeable at the commercial level. Motion gesture research strives to exploit the human body's potential for interaction with interactive ecosystems. Despite the innovation and development in this field, we believe that describing motion gestures remains an unsolved challenge for the community to tackle, and the effort in this direction is still limited. In our research, we focus on describing human body movements for motion gestures based on movement description languages (particularly Labanotation). In this paper, we argue that without adequate descriptions of gestural interactions, the engineering of interactive systems for large-scale dynamic runtime deployment of existing and future interaction techniques will be greatly challenged.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
Human Computer Interaction (HCI) research has continued to
flourish, with an expanding world of interconnected devices
and technologies driven by rich interaction capabilities. This
innovation is fueled by increasing calls for HCI researchers
to investigate new interaction possibilities, which has resulted
in growing innovation in gestural studies. Gestures in
the HCI field have been closely related to human gesturing,
which is extensively studied in different fields such as
linguistics, anthropology, cognitive science, and psychology [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>EGMI 2014, 1st International Workshop on Engineering Gestures for Multimodal Interfaces, June 17, 2014, Rome, Italy. Copyright © 2014 for the individual papers by the papers’ authors. Copying permitted only for private and academic purposes. This volume is published and copyrighted by its editors. http://ceur-ws.org/Vol-1190/.</p>
      <p>
        Principally, gestures describe situations where body
movements are used as a means to communicate to either a
machine or a human (revised from Mulder’s definition of hand
gestures [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]).
      </p>
      <p>
        Gestures come in different forms such as motion gestures,
facial expressions, and bodily expressions [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. Moreover,
they are often discussed, classified, and defined from
various viewpoints and perspectives. The major part of human
gesture classification research is focused on human discourse
[
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], but also extends to the human/device dialog approach [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ],
input device properties and sensing technology [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], etc. This
diversity is reflected in the wide and diverse range of
gesture manipulation parameters, taxonomies, design spaces,
and gesture-to-command mappings. Hence, the complexity of
tackling the many open questions regarding gestural interaction
descriptions and languages is inevitably increased.
Paradoxically, Scoditti et al. [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] pointed out that whilst sensor-based
interaction research often presents highly satisfactory results,
it often fails to support designers’ decisions and researchers’
analysis. Bailly et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] proposed a set of guidelines for
gesture-aware remote controllers based on a series of
studies and five novel interaction techniques, but the scope of
their guidelines remains limited and is not scalable to other
application domains or interaction techniques. Moreover,
researchers have pointed out that gestural research still lacks a
well-defined and clear design space for multitouch gestures
[
        <xref ref-type="bibr" rid="ref31">31</xref>
        ] and motion gestures [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. Furthermore, the bodily
presence in HCI remains limited due to the subtlety and
complexity of human movement, leaving an open space for further
investigations [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
      <p>
        Principally, gestures are described and disseminated in
various forms including written material, visual clues, animated
clues, and formal description models and languages. In their
work about formal descriptions for multitouch interactions,
Hamon et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] analyzed the expressiveness of various user
interface description languages (an extension to [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]).
Such modeling mainly covers data description, state
representation, event representation, timing, concurrent behavior,
and dynamic instantiation. Despite the existence of various
approaches to describe touch-based interactions, the
literature lacks a similar coverage for motion gestures. An
extensive review on those approaches is out of the scope of this
paper. Herein, we target our effort at describing the movement
aspects of motion-based gestures, a research direction we believe
the HCI community has not yet well exploited.
Gesture description languages are relevant for the correct
execution of interactions by end users, the preservation of
technique by designers, the accumulation of knowledge for
the community, and the engineering of interactive systems.
Moreover, we argue that languages for describing various
movement aspects of gestures are very important resources
of context information about the gestures, which can be
utilized by interactive systems for various reasons. For instance,
filtering and selecting adequate gestural interactions could be
based on the user’s physical context. Recently, we have
proposed a shift towards completely dynamic on-the-fly
ensembles of interaction techniques at runtime [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The
Interaction Ensembles approach is defined as ”Multiple interaction
modalities (i.e. interaction plugins) are tailored at runtime to
adapt the available interaction resources and possibilities to
the user’s physical abilities, needs, and context” [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Engineering an interactive system of this kind imposes new
dissemination (especially interaction description and modeling),
deployment, and adaptation requirements and challenges to
consider.
      </p>
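      <p>To illustrate the kind of context-based filtering sketched above, the snippet below selects gestures a user can physically perform. The XML layout, element names, and ability model are our illustrative assumptions, not the movement profile scheme defined later in this paper:</p>

```python
# Hypothetical sketch: filtering gestural interactions by the user's
# physical context. The profile format below is an assumed illustration,
# not the Labanotation-based scheme described in this paper.
import xml.etree.ElementTree as ET

PROFILE = """
<movementProfile gesture="left-arm-swipe">
  <involved>
    <bodyPart>left arm</bodyPart>
    <bodyPart>left shoulder</bodyPart>
  </involved>
</movementProfile>
"""

def involved_body_parts(profile_xml):
    """Extract the body parts a gesture requires from its profile."""
    root = ET.fromstring(profile_xml)
    return {part.text for part in root.iter("bodyPart")}

def is_executable_by(profile_xml, user_abilities):
    """A gesture is a candidate only if every required part is available."""
    return involved_body_parts(profile_xml).issubset(user_abilities)

print(is_executable_by(PROFILE, {"left arm", "left shoulder", "head"}))  # True
print(is_executable_by(PROFILE, {"right arm"}))                          # False
```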
      <p>In this paper, we discuss the use of movement description
languages for describing motion gestures and we present our
approach of choice to tackle this problem.</p>
    </sec>
    <sec id="sec-2">
      <title>BACKGROUND AND RELATED WORK</title>
      <p>
Research on utilizing movements for interaction is spread
over a wide research landscape. For instance, computer
vision studies different approaches to visually analyze and
recognize human motion on multiple levels (i.e. body parts,
whole body, and high level human activities) [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Other
research projects involve affective computing to study
expressive movements as in the EMOTE model [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and
EyesWeb [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], movements visual analysis [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], and representation
of movements [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The literature is also rich with examples
on utilizing movements for interactions. Rekimoto [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]
presented one of the earliest works on mapping motion (e.g.,
tilting) to navigate menus, interact with scroll bars, pan, zoom,
and perform manipulation actions on 3D objects. The
research effort on tilting was then followed, especially in the
mobile interaction area by Harrison et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and Bartlett
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Meanwhile, Hinckley et al.’s [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] idea of using tilting
for controlling the mobile screen orientation is one of the
most widely adopted techniques implemented in many
mobile phones currently sold on the market.
      </p>
      <p>
        In their work on movement-based interactions, Loke et
al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] presented an interesting analysis on the design of
movement-based interactions from four different frameworks
and perspectives: Suchman’s framework for covering the
communicative resources for interacting humans and
machines; Benford et al.’s framework (based on Expected,
Sensed and Desired movements) for designing sensing-based
Interactions; Bellotti et al.’s framework (Address, Attention,
Action, Alignment, Accident) for sensor-based systems; and
Labanotation as one of the most popular systems of analyzing
and recording movement. In Benford et al.’s framework,
”Expected” movements are the natural movements that users make,
”Sensed” movements are those which can be sensed by an
interactive system, and ”Desired” movements are those which
assemble commands for a particular application. In Bellotti et al.’s
framework ”Address” refers to the communication with an
interactive system, ”Attention” indicates whether the system
is attending to the user, ”Action” defines the interaction goal
for the system, ”Alignment” refers to monitoring the system
response, and finally ”Accident” refers to error avoidance
and recovery.
      </p>
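      <p>As a toy illustration of Benford et al.’s framework as summarized above, the three movement classes can be treated as overlapping sets; the movement names below are invented examples, not taken from their work:</p>

```python
# Toy sketch of Benford et al.'s Expected/Sensed/Desired framework:
# three overlapping sets of movements (the names are invented examples).
expected = {"tilt phone", "raise arm", "scratch head"}  # natural user movements
sensed = {"tilt phone", "raise arm", "shake phone"}     # detectable by the system
desired = {"tilt phone", "shake phone"}                 # mapped to application commands

# Candidate interactions: movements that are natural, sensable, and mapped.
usable = expected.intersection(sensed).intersection(desired)
print(usable)  # {'tilt phone'}
```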
      <p>
        The richness of human body movements makes human
movement an overwhelming subject for designing and engineering
interactions. The hand and its movements, for instance,
provide an open list of interaction possibilities. In his work,
Mulder [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] listed just a subset of hand movements that reflects
interaction possibilities, which included: accusation (index
pointing); moving objects; touching objects; manipulating
objects; waving and saluting; pointing to real and abstract
objects; and positioning objects. Moreover, he described and
categorized hand movements into goal directed manipulation,
empty-handed gestures, and haptic exploration. This
classification reveals the potential of one individual part of the
human body. The goal-directed manipulation category includes
movement for changing position (e.g., lift and move),
changing orientation (e.g., revolve, twist), changing shape (e.g.,
squeeze and pinch), contact with the object (e.g., snatch and
clutch), joining objects (e.g., tie and sew), and indirect
manipulation (e.g., set and strop). The empty-handed gestures
category included examples such as twiddle and wave.
Finally, the haptic exploration category included touch, stroke,
strum, thrum, and twang. In the same work, he also indicated
that there are other types of categorization, based on
communication aspects for example. Yet, this potential grows greatly
when considering the rich nature of natural interaction
techniques, as in whole body interactions and motion-based
interactions for instance.
      </p>
      <p>
        The notion of movement qualities is another well studied and
applied topic in different fields, especially in dance and
choreography. Despite the importance of movement for
interaction, the HCI field does not yet explore this notion on the
same scale. In fact, some argue that the primary
foundations of movement qualities are very poorly discussed in the
HCI literature [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], despite some recent research contributions
such as James et al. (interaction techniques based on dance
performance) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], Moen (applying Laban effort dance theory
to the design of movement-based interaction) [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], Alaoui
et al. (movement qualities as interaction modalities) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and
Hashim et al. (Laban’s movement analysis for graceful
interaction) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The discussed work in this paper contributes to
this area of research.
      </p>
      <p>
        To the best of our knowledge, universal design guidelines for
motion-based interactions are not easily found in the
literature. Nonetheless, efforts to investigate and outline such
guidelines have recently been reported for specific application
domains. For instance, Gerling et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] proposed seven
guidelines for whole-body interactions, created based on gaming
scenarios and focused on the elderly population.
      </p>
      <p>Principally, one of the foundations of the work presented in
this paper is to rely on human body movements as the central
focal point in designing, sharing, and executing motion
gestures. This position puts human body movement at the core
of our approach to describe gestures and our implementation
of what we call movement profiles.</p>
    </sec>
    <sec id="sec-3">
      <title>DESCRIBING MOVEMENTS FOR MOTION GESTURES</title>
      <p>
Loke et al. [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] have presented an analysis of people’s
movements when playing two computer games, which utilize
players’ free body movements as input sensed by a basic
computer vision system. Their analysis covered various ways to describe
movement: the mechanics of the moving body
in space and time, the expressive qualities of movement, the
paths of movement, the rhythm and timing, and the moving
body involved in acts of perception as part of human action
and activity. Kahol et al. [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] proposed an intuitive method
to understand the creation and performance of gestures by
modeling gestures as a sequence of activity events in body
segments and joints. Once captured, the sequences can be
annotated by several different choreographers, based on their
own interpretations and styles.
      </p>
      <p>HCI researchers tend to preserve and describe the movement
aspects of newly developed gestures using direct personal
transmissions, written textual records, still visual records
(e.g., images, sketches, drawings), and animated visual
records (e.g., videos). Nevertheless, the aforementioned
methods suffer from different drawbacks, which negatively
affect the description quality, e.g., textual records are often
too ambiguous, inaccurate, or too complex to comprehend;
still visual records fail to convey timing and movement
dynamics; and animated visual records are affected greatly by
the capturing quality.</p>
      <p>
        Previously in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], we have argued that describing movement
as an interaction element for ubiquitous and pervasive
environments is a more challenging task because of the
heterogeneity of users’ needs and abilities, heterogeneity of
environment context, and media renderer availability. We have
also argued that the current documentation practices are not
fully suitable for motion gestures because of the lack of
standardized and agreed upon description methods for motion
gestures. Current practices are too static and fixed to a
particular media type, which may easily limit the target users
of the interaction technique; current methods such as direct
personal transmissions fail to scale with a massive user
population; and current practices fail to clearly reveal the required
physical abilities to perform the interactions.
      </p>
      <p>
        To demonstrate one of the many issues regarding current
documentation practices, Figure 1 and Figure 2 show two
different drawings of the same interaction technique. The
technique presented in the drawings is a simple arm swiping
gesture. This gesture requires the user to position the left arm
to the front, parallel to the ground (as a starting
position), and move it to the left side to perform a left swipe (for
interaction). The two drawings depict the interaction differently
using different drawing styles, angles, and ways to depict
sequencing. Both drawings can easily be interpreted differently
by users as well as peer designers, causing great
variation in interaction understanding and execution. Moreover,
this style of interaction description is not machine readable,
hence challenging the design and engineering of interactive
systems that utilize gestural interaction techniques.
Formal description models and languages are also used to
describe or disseminate the developed interaction. In their work,
Hamon et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] analyzed the expressiveness of various
multitouch user interface description languages. They argued that
modeling should include data description, state
representation, event representation, timing, concurrent behavior, and
dynamic instantiation. Nonetheless, modeling and describing
the movement aspects of motion-based gestures, the focus of
this paper, is not well investigated.
      </p>
      <p>
        Proper description of movements in motion gestures should
therefore ensure a standardized machine-readable and
parsable language; generation of documentation, learning, and
presentation material (e.g., visual and audio records)
based on the context of the user and his environment; and
methods for observing users’ interactions in order to provide
suitable feedback and adaptation to depict clearly the required
interaction movements and physical abilities [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Labanotation is adopted for our approach due to its flexible
expressive power and holistic ability to capture movements in
terms of structural description, analysis of patterns
(shapes), and qualities of movement (efforts). Labanotation is
a system for analyzing and recording movement, originally
devised by Rudolf Laban in the 1920s and further
developed by Hutchinson and others at the Dance Notation Bureau
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Labanotation is used in fields traditionally associated
with the physical body, such as dance choreography, physical
therapy, drama, early childhood development, and athletics.
Additionally, Labanotation fosters great flexibility that
empowers designers to describe all or any part of movements as
required. In this paper, we particularly aim at the structural
aspects of the movement.
      </p>
      <p>
        In its current form, Labanotation is a visual notation system
where symbols for body movements are written on a vertical
”body” staff. Each element in the notation has a dedicated
Labanotation symbol, which is used to present and document
various movement qualities. Figure 3 illustrates the
Labanotation staff. The staff is used as the layout for all involved
movements. Each column, from inside out, presents a
different body part. Column (1) presents the support (i.e., the
distribution of body weight on the ground). Columns (2) to (4)
present leg, body, and arm movements respectively. Column
(5) and additional columns can be defined by the designers as
required. The rightmost column is reserved for head
movements. The designer is still able to change this order as
required by redefining any columns except (1) and (2). The staff
is split into different sections. The symbols before the double
lines, indicated by (6), present the start position. The
movement components then appear after the position lines in
terms of measures (horizontal lines as in (8)) and beats
(short horizontal lines as in (7)). The measures and beats define
the timing of the movements. The right and left sides
of the staff correspond to the two sides of the body involved.
In Figure 4, a simple 3Gear (http://www.threegear.com, accessed
on 03.04.2014) pinching gesture for the right
hand is modeled in Labanotation, and its corresponding XML
representation is presented in Listing 5. Figure 4 is read
as follows: (1) The right arm starts at a 90-degree angle to the
rest of the body, pointing forward. (2) The palm of the hand
points to the left and should remain so during the interaction.
(3) The right hand is naturally curved. (4) The right hand is
curved and the finger tips touch each other. The position of
the fingers should be held for a short time. (5) The hand returns
to the natural curve quickly with the fingers naturally spread.
The visual notation aims at a human-readable approach for
describing and reading movements, but is not adequately
machine readable. Therefore, we have designed a
compliant XML scheme that is both machine and human readable.
There have been a few previous research attempts to provide
XML representations of Labanotation, such as MovementXML
[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and LabanXML [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] in the area of dance representation.
Nonetheless, the efforts were neither aimed at describing
gestural interactions nor have they been widely adopted.
This scheme allows translating the notation to a machine
readable representation of the motion gesture description.
Clearly, the representation illustrated in Figure 4 is not
targeted at end users due to its specialized nature. The representation (in
its visual and XML forms) provides an exact description of the
movement that can only be correctly interpreted by
interaction designers and developers, as well as interactive systems.
Nonetheless, user-friendly readable descriptions for end users
can be generated automatically from the XML
code by interactive systems (a detailed discussion in this
direction is out of the scope of this paper).
      </p>
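      <p>To illustrate the kind of automatic generation of user-friendly descriptions mentioned above, the sketch below derives plain-text instructions from an XML gesture representation. The XML is a hypothetical stand-in for Listing 5 (not reproduced here), and all element and attribute names are our assumptions:</p>

```python
# Hypothetical sketch: generating a user-friendly textual description from
# an XML gesture representation (a stand-in for Listing 5; names assumed).
import xml.etree.ElementTree as ET

PINCH = """
<gesture name="pinch" side="right">
  <step n="1">the arm starts at a 90-degree angle, pointing forward</step>
  <step n="2">the palm points left and remains so</step>
  <step n="3">the hand is naturally curved</step>
  <step n="4">the finger tips touch and hold briefly</step>
  <step n="5">the hand returns quickly to the natural curve</step>
</gesture>
"""

def describe(gesture_xml):
    """Render numbered, human-readable instructions from the XML form."""
    root = ET.fromstring(gesture_xml)
    lines = [f"{root.get('side').capitalize()}-hand gesture '{root.get('name')}':"]
    for step in root.findall("step"):
        lines.append(f"  {step.get('n')}. {step.text}")
    return "\n".join(lines)

print(describe(PINCH))
```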
      <p>Generally, increasing the description detail results in
finer preservation and execution of movement details.
Nonetheless, this inevitably causes a large movement profile
that increases the complexity of reading and
interpretation. On the contrary, reduced detail results in a
simple movement description that is easy to read and interpret,
but leads to losing the details of the movements.</p>
    </sec>
    <sec id="sec-4">
      <title>LABANOTATION XML SCHEME</title>
      <p>Our approach based on Labanotation aims at a robust and
standardized description of movements in motion gestures,
whereby the transmission and preservation of motion gestures
become possible. Nonetheless, modeling Labanotation
is challenging due to the extensibility of the notation and the size
and variation of its symbols.</p>
      <p>In the scope of this work, a subset of Labanotation is
considered. Nonetheless, the extensibility of this scheme is
still possible. The current scheme mainly targets the
following structural elements: direction symbols, pins and
contact hooks, space measurement signs, turn symbols,
vibration symbols, body hold sign, back-to-normal sign,
release-contact sign, path signs, relationship bows, room-direction
pins, joint signs, area signs, limb signs, surface signs, a
universal object pre-sign, dynamic signs, and accent signs.
Figure 6 (left) illustrates an overview of the movement
profile XML scheme. The original Labanotation naming is
preserved to ensure compatibility and readability of the scheme.
As shown in the figure, the staff is defined in terms of
timing information (measures and timing) and the body parts
involved (by defining the columns), and movement components
are defined in the movements element. The movements
element contains a collection of elements to define the
individual movements, path, the movement directions, relationships,
and phrasing (connecting individual movements together).
Figure 6 (right) illustrates a closer view of the movement
element. In this element, a single individual movement is
fully described. The information modeled includes
placement in the score (defined by the column element), timing
information (beats, measures, and execution duration), the
body part(s) involved (defined by the preSign), and
movement quality such as direction, space, turn, and vibration. The
number and detail level of the movements modeled depend on
the designer, who should model just enough
information for ideal execution of the movement.</p>
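      <p>The structure just described can be made concrete with a small traversal sketch. The attribute names and values below are illustrative assumptions about the scheme, since Figure 6 is not reproduced here:</p>

```python
# Hypothetical sketch: walking a movement profile shaped as described above
# (a staff with timing and columns, and a movements element containing
# individual movement elements). Attribute names are assumptions, not the
# actual scheme.
import xml.etree.ElementTree as ET

PROFILE = """
<movementProfile>
  <staff measures="2" beatsPerMeasure="4">
    <column id="3">right arm</column>
    <column id="4">right hand</column>
  </staff>
  <movements>
    <movement column="4" measure="1" beat="3" duration="1">
      <preSign>right hand</preSign>
      <direction level="middle">left</direction>
    </movement>
    <movement column="3" measure="1" beat="1" duration="2">
      <preSign>right arm</preSign>
      <direction level="middle">forward</direction>
    </movement>
  </movements>
</movementProfile>
"""

def timeline(profile_xml):
    """Order movements by (measure, beat); report body part and direction."""
    root = ET.fromstring(profile_xml)
    moves = [(int(m.get("measure")), int(m.get("beat")),
              m.findtext("preSign"), m.findtext("direction"))
             for m in root.find("movements").findall("movement")]
    return sorted(moves)

for measure, beat, part, direction in timeline(PROFILE):
    print(f"measure {measure}, beat {beat}: {part} moves {direction}")
```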
    </sec>
    <sec id="sec-5">
      <title>DISCUSSION</title>
      <p>
Describing movements for motion gestures is a challenging
process and imposes a number of open issues (only some are
discussed in this paper):</p>
      <p>
        Support of dynamic interactive systems: The lack of
adequate interaction documentation and dissemination
inevitably challenges the design and engineering of
interactive systems. Documentation can be used to extract
information about the type of movements involved in the
interaction, the body parts involved, adequate interaction
execution, etc. The absence of such information
necessarily burdens the deployment of interaction techniques
in automated interactive systems, as processes such
as context acquisition, reasoning, and interaction filtering
are greatly hindered. Good record-keeping of motion
gestures should guarantee to preserve and transfer the
technique to users and other peer designers without
endangering the originality and vital aspects of the technique.
      </p>
      <p>
        The tension between formal and empirical movement
descriptions: Formal interface description languages
support interaction at the development as well as the
operation phase, while conventional empirical or semiformal
techniques fail to provide adequate and sufficient insights
about the interaction (e.g., comparing two design options
with respect to the reliability of the human-system
cooperation) [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Those techniques are more susceptible to
losing parts of the movements, producing overly complicated
descriptions, losing timing information, etc. Nonetheless, wide
adoption of formalized languages amongst motion
interaction designers is challenged by the potential complexity of
learning the language and describing movements.
      </p>
      <p>
        Meeting future challenges: New interactive systems are
targeted to achieve ad-hoc composition of multiple
interaction techniques; de-couple the close binding between
devices, interaction techniques, and applications; and address
user physical needs and preferences [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. This shift
imposes new requirements and challenges the current
practices for describing motion gestures. To meet those
challenges, gestures should be transparent, reflecting their
internal functionality and physical requirements to intelligent
interactive systems.
      </p>
      <p>Limited research effort: We argue that this area of
research requires substantial attention from the community,
including: a better understanding of gestures and their
requirements; guidelines for describing gestures; new
authoring and design tools for motion gestures; and a better
understanding of users’ learning habits and practices
for learning motion gestures.</p>
    </sec>
    <sec id="sec-6">
      <title>CONCLUSION</title>
      <p>
In this paper, we have argued that adequate movement
description for motion gestures is highly relevant to the correct
execution of interactions by end users, the preservation of
technique by designers, the accumulation of knowledge for
the community, and most importantly the process of
designing and engineering interactive systems. Moreover,
languages for describing the movement aspects of gestures are
very important resources of context information about the
gestures, which can be utilized by interactive systems for
interaction filtering, adaptation, and dynamic on-the-fly deployment
at runtime. Herein, Labanotation, as a flexible and extensible
movement documentation system, is adopted for describing
the movement aspects of gestural interactions.</p>
    </sec>
    <sec id="sec-7">
      <title>FUTURE WORK</title>
      <p>
We continue our work on an authoring tool called Interaction
Editor [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which aims to ease the workflow for describing
the movement aspects of gestural interactions for gesture
developers and designers. Moreover, one of our active areas
of research continues to investigate the real practices for
describing gestural interactions applied by the HCI community.
      </p>
    </sec>
    <sec id="sec-8">
      <title>ACKNOWLEDGEMENT</title>
      <p>
This work was partially supported by the Graduate School
for Computing in Medicine and Life Sciences funded by
Germany’s Excellence Initiative [DFG GSC 235/1] and by
vffr (Verein zur Förderung der Rehabilitationsforschung
in Hamburg, Mecklenburg-Vorpommern und
Schleswig-Holstein e.V.). We also thank Michal Janiszewski for drawing
the design sketches in Figure 1 and Figure 2.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Aggarwal</surname>
            ,
            <given-names>J. K.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          <article-title>Human motion analysis: A review</article-title>
          .
          <source>Computer Vision and Image Understanding</source>
          <volume>73</volume>
          (
          <year>1999</year>
          ),
          <fpage>428</fpage>
          -
          <lpage>440</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Alaoui</surname>
            ,
            <given-names>S. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caramiaux</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Serrano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Bevilacqua</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <article-title>Movement qualities as interaction modality</article-title>
          .
          <source>In Proceedings of the Designing Interactive Systems Conference, DIS '12</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (Newcastle, UK,
          <year>2012</year>
          ),
          <fpage>761</fpage>
          -
          <lpage>769</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Altakrouri</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Gro¨schner, J., and
          <string-name>
            <surname>Schrader</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <article-title>Documenting natural interactions</article-title>
          .
          <source>In CHI '13 Extended Abstracts on Human Factors in Computing Systems, CHI EA '13</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2013</year>
          ),
          <fpage>1173</fpage>
          -
          <lpage>1178</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Altakrouri</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Schrader</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <article-title>Towards dynamic natural interaction ensembles</article-title>
          .
          <source>In Fourth International Workshop on Physicality (Physicality 2012), co-located with British HCI 2012 conference</source>
          , D. Ramduny-Ellis, A. Dix, and S. Gill, Eds. (Birmingham, UK, Sept.
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Badler</surname>
            ,
            <given-names>N. I.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Smoliar</surname>
            ,
            <given-names>S. W.</given-names>
          </string-name>
          <article-title>Digital representations of human movement</article-title>
          .
          <source>ACM Comput. Surv</source>
          .
          <volume>11</volume>
          ,
          <issue>1</issue>
          (Mar.
          <year>1979</year>
          ),
          <fpage>19</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Bailly</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vo</surname>
            ,
            <given-names>D.-B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lecolinet</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Guiard</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <article-title>Gesture-aware remote controls: guidelines and interaction technique</article-title>
          .
          <source>In Proceedings of the 13th international conference on multimodal interfaces</source>
          ,
          <source>ICMI '11</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2011</year>
          ),
          <fpage>263</fpage>
          -
          <lpage>270</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Bartlett</surname>
            ,
            <given-names>J. F.</given-names>
          </string-name>
          <article-title>Rock 'n' scroll is here to stay</article-title>
          .
          <source>IEEE Computer Graphics and Applications</source>
          (
          <year>2000</year>
          ),
          <fpage>40</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Camurri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ricchetti</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Trocca</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>EyesWeb-toward gesture and affect recognition in dance/music interactive systems</article-title>
          .
          <source>In IEEE International Conference on Multimedia Computing and Systems</source>
          , vol.
          <volume>1</volume>
          (Florence, Italy, Jul
          <year>1999</year>
          ),
          <fpage>643</fpage>
          -
          <lpage>648</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Chi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Badler</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <article-title>The emote model for effort and shape</article-title>
          .
          <source>In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '00</source>
          , ACM Press/Addison-Wesley Publishing Co. (New York, NY, USA,
          <year>2000</year>
          ),
          <fpage>173</fpage>
          -
          <lpage>182</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Gerling</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Livingston</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nacke</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Mandryk</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>Full-body motion-based game interaction for older adults</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2012</year>
          ),
          <fpage>1873</fpage>
          -
          <lpage>1882</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Hamon</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palanque</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Silva</surname>
            ,
            <given-names>J. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deleris</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Barboni</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>Formal description of multi-touch interactions</article-title>
          .
          <source>In Proceedings of the 5th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS '13</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2013</year>
          ),
          <fpage>207</fpage>
          -
          <lpage>216</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Harrison</surname>
            ,
            <given-names>B. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fishkin</surname>
            ,
            <given-names>K. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gujar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mochon</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Want</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>Squeeze me, hold me, tilt me! an exploration of manipulative user interfaces</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '98</source>
          , ACM Press/Addison-Wesley Publishing Co. (New York, NY, USA,
          <year>1998</year>
          ),
          <fpage>17</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Hashim</surname>
            ,
            <given-names>W. N. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Noor</surname>
            ,
            <given-names>N. L. M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Adnan</surname>
            ,
            <given-names>W. A. W.</given-names>
          </string-name>
          <article-title>The design of aesthetic interaction: Towards a graceful interaction framework</article-title>
          .
          <source>In Proceedings of the 2Nd International Conference on Interaction Sciences: Information Technology, Culture and Human</source>
          ,
          <source>ICIS '09</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2009</year>
          ),
          <fpage>69</fpage>
          -
          <lpage>75</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Hatol</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>MovementXML: A representation of semantics of human movement based on Labanotation</article-title>
          .
          <source>Master's thesis</source>
          , Simon Fraser University, Burnaby, BC, Canada,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Hinckley</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>Input technologies and techniques</article-title>
          . In
          <source>The Human-Computer Interaction Handbook</source>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Jacko</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Sears</surname>
          </string-name>
          , Eds. L. Erlbaum Associates Inc., Hillsdale, NJ, USA,
          <year>2003</year>
          ,
          <fpage>151</fpage>
          -
          <lpage>168</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Hinckley</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pierce</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sinclair</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Horvitz</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>Sensing techniques for mobile interaction</article-title>
          .
          <source>In Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology, UIST '00</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2000</year>
          ),
          <fpage>91</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Hutchinson</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <article-title>Labanotation: The System of Analyzing and Recording Movement</article-title>
          , 4th ed. Routledge, New York and London,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>James</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ingalls</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Qian</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Olsen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Whiteley</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Rikakis</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <article-title>Movement-based interactive dance performance</article-title>
          .
          <source>In Proceedings of the 14th Annual ACM International Conference on Multimedia, MULTIMEDIA '06</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2006</year>
          ),
          <fpage>470</fpage>
          -
          <lpage>480</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Kahol</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tripathi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Panchanathan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>Documenting motion sequences with a personalized annotation system</article-title>
          .
          <source>IEEE MultiMedia</source>
          <volume>13</volume>
          ,
          <issue>1</issue>
          (
          <year>2006</year>
          ),
          <fpage>37</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Karam</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and m. c. schraefel.
          <article-title>A taxonomy of gestures in human computer interactions</article-title>
          .
          <source>Technical report</source>
          , University of Southampton, Southampton, United Kingdom,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Loke</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larssen</surname>
            ,
            <given-names>A. T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Robertson</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <article-title>Labanotation for design of movement-based interaction</article-title>
          .
          <source>In Proceedings of the 2nd Australasian Conference on Interactive Entertainment, IE 2005</source>
          , Creativity &amp; Cognition Studios Press (Sydney, Australia,
          <year>2005</year>
          ),
          <fpage>113</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Loke</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larssen</surname>
            ,
            <given-names>A. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Robertson</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Edwards</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>Understanding movement for interaction design: frameworks and approaches</article-title>
          .
          <source>Personal and Ubiquitous Computing</source>
          <volume>11</volume>
          ,
          <issue>8</issue>
          (
          <year>2006</year>
          ),
          <fpage>691</fpage>
          -
          <lpage>701</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Moen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>From hand-held to body-worn: Embodied experiences of the design and use of a wearable movement-based interaction concept</article-title>
          .
          <source>In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, TEI '07</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2007</year>
          ),
          <fpage>251</fpage>
          -
          <lpage>258</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Mulder</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <article-title>Hand gestures for HCI</article-title>
          .
          <source>Hand Centered Studies of Human Movement Project</source>
          (
          <year>1996</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Nakamura</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hachimura</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>An XML representation of Labanotation, LabanXML, and its implementation on the Notation Editor LabanEditor2</article-title>
          .
          <source>Review of the National Center for Digitization (Online Journal)</source>
          <volume>9</volume>
          (
          <year>2006</year>
          ),
          <fpage>47</fpage>
          -
          <lpage>51</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Navarre</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palanque</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ladry</surname>
            ,
            <given-names>J.-F.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Barboni</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>ICOs: A model-based user interface description technique dedicated to interactive systems addressing usability, reliability and scalability</article-title>
          .
          <source>ACM Trans. Comput.-Hum. Interact</source>
          .
          <volume>16</volume>
          ,
          <issue>4</issue>
          (Nov.
          <year>2009</year>
          ),
          <fpage>18:1</fpage>
          -
          <lpage>18:56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Pruvost</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heinroth</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellik</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Minker</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          <article-title>User interaction adaptation within ambient environments</article-title>
          . In
          <source>Next Generation Intelligent Environments: Ambient Adaptive Systems</source>
          . Springer, Boston, USA,
          <year>2011</year>
          , ch.
          <volume>5</volume>
          ,
          <fpage>153</fpage>
          -
          <lpage>194</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Rekimoto</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>Tilting operations for small screen interfaces</article-title>
          .
          <source>In Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology, UIST '96</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>1996</year>
          ),
          <fpage>167</fpage>
          -
          <lpage>168</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Ruiz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Lank</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>User-defined motion gestures for mobile interaction</article-title>
          .
          <source>In Proceedings of the 2011 annual conference on Human factors in computing systems, CHI '11</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2011</year>
          ),
          <fpage>197</fpage>
          -
          <lpage>206</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Scoditti</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blanch</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Coutaz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>A novel taxonomy for gestural interaction techniques based on accelerometers</article-title>
          .
          <source>In the 15th International Conference on Intelligent User Interfaces (IUI '11)</source>
          , ACM (New York, NY, USA,
          <year>2011</year>
          ),
          <fpage>63</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Wobbrock</surname>
            ,
            <given-names>J. O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morris</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Wilson</surname>
            ,
            <given-names>A. D.</given-names>
          </string-name>
          <article-title>User-defined gestures for surface computing</article-title>
          .
          <source>In Proceedings of the 27th international conference on Human factors in computing systems, CHI '09</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (New York, NY, USA,
          <year>2009</year>
          ),
          <fpage>1083</fpage>
          -
          <lpage>1092</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>