<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Model-driven Method and a Tool for Developing Gesture-based Information System Interfaces</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Otto Parra</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergio España</string-name>
          <email>sergio.espana@pros.upv.es</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oscar Pastor</string-name>
          <email>opastor@pros.upv.es</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department, Universidad de Cuenca</institution>
          ,
          <country country="EC">Ecuador</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>PROS Research Centre, Universitat Politècnica de València</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Considering the technological advances in touch-based devices, gesture-based interaction has become a prevalent feature in many application domains. Information systems are starting to explore this type of interaction. Currently, gesture specifications are hard-coded by developers at the source code level, which hinders their reusability and portability. Similarly, defining new gestures in line with users' requirements is further complicated. This paper describes a model-driven approach to include gesture-based interaction in desktop information systems, together with a tool prototype that captures user-sketched multi-stroke gestures, transforms them into a model, and automatically generates both the gesture catalogue for gesture-based interaction technologies and the gesture-based interface source code. We demonstrate our approach on several applications, ranging from CASE tools to form-based information systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Model-driven architecture</kwd>
        <kwd>gesture-based interaction</kwd>
        <kwd>multi-stroke gestures</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        New devices come together with new types of interfaces (e.g. based on gaze,
gesture, voice, haptic, brain-computer interfaces). Their aim is to increase the
naturalness of interaction [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], although this is not exempt from risks. Due to
the popularity of touch-based devices, gesture-based interaction is slowly
gaining ground on mouse and keyboard in domains such as video games and
mobile apps. Information systems (IS) are likely to follow the trend,
especially in supporting tasks performed outside the office [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Several issues may hinder the wide adoption of gesture-based interaction
in complex information systems engineering. Gesture-based interfaces have
been reported to be more difficult to implement and test than traditional
mouse and pointer interfaces [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Gesture-based interaction is supported at the
source code level (typically third-generation languages) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This involves a
great coding and maintenance effort when multiple platforms are targeted, has
a negative impact on reusability and portability, and complicates the
definition of new gestures. Some of these challenges can be tackled by following a
model-driven development (MDD) approach provided that gestures and
gesture-based interaction can be modelled and that it is possible to automatically
generate the software components that support them.
      </p>
      <p>This paper introduces an MDD approach and a tool for gesture-based IS
interface development, which is intended to allow software engineers to focus
on the key aspects of gesture-based information system interfaces; namely,
defining gestures and specifying gesture-based interaction. Coding and
portability efforts are alleviated by means of model-to-text (M2T) transformations.</p>
    </sec>
    <sec id="sec-2">
      <title>State of the art</title>
    </sec>
    <sec id="sec-3">
      <title>State of the art on gesture representation</title>
      <p>
        The representation of gestures in the related literature can be classified
into three categories: (a) based on regular expressions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]: a gesture is
defined by means of a regular expression formed by elements such as ground
terms, operators and symbols; (b) based on a language specification [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]:
an XML-based language is used to describe gestures; (c) based on demonstration
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]: developers define gestures by example, test the generated code, refine it and, once
they are satisfied, include the code in their applications.
      </p>
      <p>
        In this research work we propose a model-driven approach to
represent gestures at a high level of abstraction, enabling
platform independence and reusability. By providing the proper transformations, it is
possible to target several gesture-recognition technologies. We focus on
user-defined, multi-stroke, semaphoric gestures [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>The role of gesture-based interfaces in IS engineering</title>
      <p>
        Gesture-based interfaces can play two major roles in IS engineering,
depending on whether we intend to incorporate this natural interaction into (i) CASE
tools or (ii) the IS themselves. In the former case, the interest is to
increase the IS developers’ efficiency, whereas in the latter the aim is to
increase IS usability, especially in operations in the field, where the lack of a
comfortable office space reduces the ergonomics of mouse and keyboard.
In both cases, gesture-based interface development methods and tools are
needed. Some examples of methods and tools are described in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ],
where the authors propose a method to integrate gesture-based interaction in
an interface.
      </p>
      <p>
        In this work, we propose a similar flow to that of [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], but automate the
implementation of gesture-based interfaces by means of model
transformations. In future work, we plan to provide support for the ergonomic
principles proposed in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>The gestUI method</title>
      <p>gestUI is a user-driven and iterative method that follows the MDD paradigm.
The main artefacts are models that conform to the Model-Driven
Architecture, a generic framework of modelling layers that ranges from abstract
specifications to the software code (indicated at the top of Fig. 1).</p>
      <p>
        The computation-independent layer is omitted because gestUI already
assumes that the IS is going to be computerised. Note that gestUI is expected
to be integrated into a full interface development method (represented with
generic activities and artefacts in grey). Such a method can either be
model-driven or code-centric. gestUI is user-driven because users participate in all
non-automated activities; and it is iterative because it intends to discover the
necessary gestures incrementally and provides several loopbacks. In the
platform-independent layer, the gestures are defined (activity A1 in Fig. 1) by
the developer but, preferably, in collaboration with representative users of the
IS. Gestures are defined by sketching and are stored in the ‘Gesture catalogue
model’, which is part of a larger ‘Interaction requirements’ specification. In the
platform specific layer, a concrete gesture-recognition platform is selected
(we currently support three platforms: quill [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], $N [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and iGesture [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]).
The ‘Platform-specific gesture specification’ (PSGS) is a machine-readable
file format that can be interpreted by a gesture recognition tool. This
specification can be automatically generated from the ‘Gesture catalogue model’
(during A3). The interface is also designed in this layer, so now the
gesture-based interaction can also be determined (A2) in collaboration with the user.
This mainly consists of defining correspondences between gestures and
interface actions. In the code layer, the developer and the user can test the gestures
using the gesture recognition tool (A5). The ‘Gesture based interface’ is
automatically generated from the platform-specific layer artefacts (A4). The tool
generates components (e.g. Java code) that are embedded into the IS interface.
      </p>
    </sec>
    <sec id="sec-6">
      <title>The gestUI tool</title>
      <p>The gestUI tool is developed using Java and the Eclipse Modelling Framework.
As shown in Fig. 2, the tool is structured in three modules. The numbers in
brackets indicate the method activity each component supports. The method’s
internal products are not shown. The relationship with the external gesture
recogniser is represented.</p>
    </sec>
    <sec id="sec-7">
      <title>Gesture catalogue definition module</title>
      <p>It supports the definition of new multi-stroke gestures by means of an
interface implemented in Java in which the user sketches the gestures. The set of
gestures sketched by the user constitutes the ‘Gesture catalogue model’, which
conforms to the metamodel defined in this work (Fig. 3).</p>
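      <p>To make this concrete, the following is a minimal sketch, in plain Java, of how a user-sketched multi-stroke gesture could be held in memory before being stored in the ‘Gesture catalogue model’. The class and field names (Gesture, Stroke, Point) are illustrative assumptions; the actual metamodel is the one shown in Fig. 3.</p>
      <preformat><![CDATA[
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: these classes approximate the kind of data the
// gesture catalogue holds; they are not the actual gestUI metamodel classes.
public class GestureSketch {

    // A single sampled point of a stroke (screen coordinates plus a timestamp).
    static class Point {
        final double x, y;
        final long t;
        Point(double x, double y, long t) { this.x = x; this.y = y; this.t = t; }
    }

    // One stroke: the points drawn between press and release.
    static class Stroke {
        final List<Point> points = new ArrayList<>();
    }

    // A named multi-stroke gesture.
    static class Gesture {
        final String name;
        final List<Stroke> strokes = new ArrayList<>();
        Gesture(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Gesture plus = new Gesture("create");   // a '+'-like, two-stroke gesture
        Stroke horizontal = new Stroke();
        horizontal.points.add(new Point(10, 50, 0));
        horizontal.points.add(new Point(90, 50, 120));
        Stroke vertical = new Stroke();
        vertical.points.add(new Point(50, 10, 400));
        vertical.points.add(new Point(50, 90, 520));
        plus.strokes.add(horizontal);
        plus.strokes.add(vertical);
        System.out.println(plus.name + " has " + plus.strokes.size() + " strokes");
    }
}
]]></preformat>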
    </sec>
    <sec id="sec-8">
      <title>Model transformation module</title>
      <p>It takes as input the ‘Gesture catalogue model’, the target technology
specified by the developer, and the target folder in which to save the output. Depending on
the target technology, a different M2T transformation is executed which
creates the PSGS in the corresponding file format (i.e. XML for $N and iGesture,
and GDT 2.0 for quill). The transformation rules are written in Acceleo. The
PSGS can be imported into a third-party gesture recogniser to test the gestures.</p>
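      <p>The following minimal sketch illustrates, in plain Java rather than Acceleo, the kind of model-to-text step these transformations perform: serialising one gesture of the catalogue into an XML file that a recogniser can load. The element and attribute names are illustrative assumptions and do not reproduce the exact PSGS schemas of $N, iGesture or quill.</p>
      <preformat><![CDATA[
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of an M2T step: one gesture is written out as XML.
// The real gestUI transformations are Acceleo templates targeting the
// file formats of $N, iGesture (XML) and quill (GDT 2.0).
public class GestureToXmlSketch {

    public static void main(String[] args) throws IOException {
        // Sampled points of a single-stroke "delete" gesture (x, y pairs).
        double[][] points = { {10, 10}, {50, 50}, {90, 90} };

        StringBuilder xml = new StringBuilder();
        xml.append("<Gesture Name=\"delete\" NumStrokes=\"1\">\n");
        for (double[] p : points) {
            xml.append(String.format("  <Point X=\"%.0f\" Y=\"%.0f\"/>%n", p[0], p[1]));
        }
        xml.append("</Gesture>\n");

        Files.writeString(Path.of("delete.xml"), xml.toString());
        System.out.println(xml);
    }
}
]]></preformat>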
    </sec>
    <sec id="sec-9">
      <title>Gesture-action correspondence definition module</title>
      <p>It allows the developer and the user to specify what action to execute
whenever the gesture-based IS interface recognises a gesture. In a model-based IS
interface development, the actions are specified in the interface model. In a
code-centric interface development, they are implemented in the interface
itself. We currently provide automated support to code-centric developments
made in Java; that is, the gestUI module parses the source code of the user
interface to obtain a list of actions. This module therefore requires two inputs:
the previously created ‘Gesture catalogue model’ and the user interface (e.g. its
Java source code). The output of this module is the ‘Gesture-based interaction model’
and the same source code but now supporting the gesture-based interaction.</p>
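      <p>A minimal sketch of the resulting gesture-action correspondence is shown below. In gestUI the list of candidate actions is obtained by parsing the Java source of the interface; here, for brevity, the actions are listed by hand, and both the gesture and action names are illustrative assumptions.</p>
      <preformat><![CDATA[
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: a simple mapping from gesture names to the interface
// actions they trigger. The names below are invented for the example.
public class CorrespondenceSketch {

    public static void main(String[] args) {
        Map<String, String> correspondence = new LinkedHashMap<>();
        correspondence.put("square-gesture", "openDepartmentScreen");
        correspondence.put("parallel-lines-gesture", "openTeacherScreen");
        correspondence.put("plus-gesture", "createRecord");

        correspondence.forEach((gesture, action) ->
                System.out.println(gesture + " -> " + action));
    }
}
]]></preformat>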
      <p>When generating the user interface Java source code, several references are
included (e.g. to libraries that manage gestures and to libraries of the
gesture-recognition technology, such as $N), and some methods are added (e.g. to
execute the gesture-action correspondence and to capture gestures). Additionally,
the class definition is changed to register the necessary listeners; the resulting
source code must then be recompiled.</p>
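      <p>As a rough illustration of what such a listener looks like, the following minimal Swing sketch collects the points of a drawn stroke and, on release, would hand them to the gesture recogniser and run the action mapped to the recognised gesture. The recogniser call is replaced by a placeholder comment, and the names used there are assumptions rather than the generated gestUI code.</p>
      <preformat><![CDATA[
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.util.ArrayList;
import java.util.List;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

// Illustrative sketch only: a listener that captures a stroke drawn with the
// mouse (or a finger on a touch-enabled screen) on an existing Swing panel.
public class GestureListenerSketch {

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JPanel panel = new JPanel();
            List<double[]> stroke = new ArrayList<>();

            // Collect points while the user drags.
            panel.addMouseMotionListener(new MouseAdapter() {
                @Override
                public void mouseDragged(MouseEvent e) {
                    stroke.add(new double[] { e.getX(), e.getY() });
                }
            });

            // On release, the generated code would call the recogniser and
            // execute the corresponding action, e.g.:
            //   String name = recognizer.recognize(stroke);
            //   runActionFor(name);
            panel.addMouseListener(new MouseAdapter() {
                @Override
                public void mouseReleased(MouseEvent e) {
                    System.out.println("Stroke finished with " + stroke.size() + " points");
                    stroke.clear();
                }
            });

            JFrame frame = new JFrame("Gesture capture sketch");
            frame.add(panel);
            frame.setSize(300, 300);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
]]></preformat>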
    </sec>
    <sec id="sec-10">
      <title>Demonstration of the method and tool</title>
      <p>We demonstrate the integration of gestUI within a code-centric interface
development method. For illustration purposes, we use a fictional, simple
university management case and we narrate the project as if it actually happened.
Fig. 4 shows the domain class diagram of a university with several
departments, to which teachers are assigned and which manage classrooms. For the
sake of brevity, we will only consider two screens; namely, the initial screen and
the department management screen.</p>
      <p>In the first method iteration, the university representatives tell the
developer that they would like the gestures to resemble parts of the university logo.
Thus, they use the Gesture catalogue definition module to create a first
version of the ‘Gesture catalogue model’ containing these three gestures:  for
departments, || for teachers and  for classrooms. However, when the first
interface design is available (see sketch in Fig. 5), they soon realise that other
gestures are needed. In this way, by defining new gestures and testing
them, they determine that navigation will be done by means of the
above-mentioned gestures, but that similar actions that appear across different
screens shall use the same gesture (e.g. the gesture  shall be used to create
both new departments and new teachers).</p>
      <p>The Model transformation module allows generating the PSGS for any of
the available gesture-based recognition technologies (i.e. $N, quill and
iGesture). The developer only needs to choose a single technology but we chose to
demonstrate the multiplatform features of the gestUI method by generating
the three gesture files. Using the appropriate tool, the users can test the
gestures. Fig. 6 shows the gestures being recognised by the $N, quill and
iGesture tools, confirming that the gestures have been properly converted by the Model
transformation module.</p>
      <p>The developer assigns the gesture-action correspondence in collaboration
with the user, supported by the Gesture-action correspondence definition
module. The correspondences are informally shown in Fig. 5, next to each
action button. Once the Java source code of the traditional interface is
available, then the components that support the gesture-based interaction are
generated. In this case, the chosen underlying gesture-recognition technology is $N;
the users felt more comfortable with multi-stroke gestures (especially with
regards to tracing some letters and symbols) so quill was discarded. The final
IS interface consists of several screens that allow managing university
information. Users can still interact with the IS in the traditional way (i.e. using the
mouse), but now they can also draw the gestures with a finger on the
touch-based screen in order to execute the actions. Fig. 7 represents a specific
interaction with the IS interface in which a department is being created.</p>
    </sec>
    <sec id="sec-11">
      <title>Conclusions and future work</title>
      <p>We have described gestUI, a model-driven method, together with the tool that supports it,
which allow specifying multi-stroke gestures and automatically generating the
information system components that support gesture-based interaction. We validated
the method and tool by applying them to a case and generated the
Platform-specific gesture specification for three gesture-recognition technologies, to
illustrate the multiplatform capability of the tool. The gestures were
successfully recognised by the corresponding tools. We then automatically generated
the final gesture-based interface components and integrated them into the IS
interface. The advantages of the proposal are: the platform independence enabled
by the MDD paradigm, the convenience of including user-defined symbols,
and its iterative and user-driven approach. Its main current limitations are
related to the target interface technologies (currently, only Java) and the fact
that multi-finger gestures are not supported. These limitations will be
addressed in future work. We also plan further validation by applying the
approach to the development of a real IS and by extending a CASE tool with
gesture-based interaction (the Capability Development Tool being developed
in the FP7 CaaS project). We also plan to integrate gestUI into a full-fledged
model-driven framework capable of automatically generating the presentation
layer, in order to extend it with gesture-based interaction modelling and code
generation.</p>
    </sec>
    <sec id="sec-12">
      <title>Acknowledgements</title>
      <p>The author is grateful to his supervisors Sergio España and Óscar Pastor for
their invaluable support and advice. This work has been supported by
SENESCYT and Univ. de Cuenca - Ecuador, and received financial support
from Generalitat Valenciana under Project IDEO (PROMETEOII/2014/039).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Wigdor</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Wixon</surname>
          </string-name>
          , Brave NUI world:
          <article-title>designing natural user interfaces for touch and gesture</article-title>
          , USA: Morgan Kaufmann Publishers Inc.,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Fujitsu</surname>
          </string-name>
          , “
          <article-title>Touch- and gesture-based input to support field work,” Fujitsu Laboratories Ltd</article-title>
          .,
          <volume>18</volume>
          02
          <year>2014</year>
          . [Online]. Available: http://phys.org/news/2014- 02
          <article-title>-touch-gesture-based-field</article-title>
          .
          <source>html. [Accessed 24 11</source>
          <year>2014</year>
          ].
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hesenius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Griebe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gries</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Gruhn</surname>
          </string-name>
          , “
          <article-title>Automating UI Tests for Mobile Applications with Formal Gesture Descriptions,”</article-title>
          <source>Proc. of 16th Conf</source>
          .
          <article-title>on Human-computer interaction with mobile devices</article-title>
          &amp; services, pp.
          <fpage>213</fpage>
          -
          <lpage>222</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Khandkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Sohan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sillito</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Maurer</surname>
          </string-name>
          , “
          <article-title>Tool support for testing complex multi-touch gestures,” in ACM International Conference on Interactive Tabletops and Surfaces</article-title>
          , ITS'10, NY, USA,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Spano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cisternino</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Paternò</surname>
          </string-name>
          , “
          <article-title>A Compositional Model for Gesture Definition,” LNCS Human-Centered Soft</article-title>
          . Eng., vol.
          <volume>7623</volume>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>52</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hartmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>DeRose and M. Agrawala</surname>
          </string-name>
          , “Proton++:
          <string-name>
            <given-names>A Customizable</given-names>
            <surname>Declarative Multitouch Framework</surname>
          </string-name>
          ,” in UIST'
          <volume>12</volume>
          , Cambridge, USA,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Ideum</surname>
            , “GestureML,” Ideum, 22
            <given-names>November</given-names>
          </string-name>
          <year>2014</year>
          . [Online]. Available: http://www.gestureml.org/.
          <source>[Accessed 6 December</source>
          <year>2014</year>
          ].
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Anthony</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. O.</given-names>
            <surname>Wobbrock</surname>
          </string-name>
          , “
          <article-title>A Lightweight Multistroke Recognizer for User Interface Prototypes,”</article-title>
          <source>Proc. of Graphics Interface</source>
          , pp.
          <fpage>245</fpage>
          -
          <lpage>252</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Karam and M. C. Schraefel</surname>
          </string-name>
          , “
          <article-title>A taxonomy of Gestures in Human-Computer Interaction</article-title>
          ,” in Retrieved from http://eprints.soton.ac.uk/261149/,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Guimaraes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Farinazzo</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Ferreira</surname>
          </string-name>
          , “
          <article-title>A Software Development Process Model for Gesture-Based Interface,”</article-title>
          <source>in IEEE International Conference on Systems, Man, and Cybernetics</source>
          , Seoul, Korea,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nielsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Storring</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Moeslund</surname>
          </string-name>
          and E. Granum, “
          <article-title>A Procedure for Developing Intuitive and Ergonomic Gesture Interfaces for Man-Machine Interaction</article-title>
          ,” Aalborg University, Aalborg, Denmark,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Long</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Landay</surname>
          </string-name>
          ,
          <article-title>Quill: a gesture design tool for pen-based user interfaces</article-title>
          , Berkeley: University of California,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>B.</given-names>
            <surname>Signer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Norrie</surname>
          </string-name>
          and U. Kurmann, “iGesture:
          <string-name>
            <given-names>A General</given-names>
            <surname>Gesture Recognition Framework</surname>
          </string-name>
          ,”
          <source>in Proceedings of ICDAR 2007, 9th Int. Conference on Document Analysis and Recognition, Brazil</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>