GEStory: An Atlas for User-Defined Gestures as an
Interactive Design Space
Santiago Villarreal-Narvaez1
1 Louvain Research Institute in Management and Organizations, Université catholique de Louvain, Place des Doyens 1, Louvain-la-Neuve, 1348, Belgium


Abstract
How can we provide designers and developers with support to identify the most appropriate gestures for gestural user interfaces depending on their context of use? To address this research question, GEStory was developed: an on-line atlas of gestures resulting from gesture elicitation studies with four main functionalities: (1) search for user-defined gestures identified in such studies by querying their features in an interactive design space, (2) show the preferred gestures and their characteristics for a given action (represented through a referent) with a given device in an environment and/or carried out with various body parts, (3) compare the existing studies, and (4) suggest adding new studies. To feed GEStory, two Systematic Literature Reviews (SLR) were performed: a macroscopic analysis of 216 papers based on their metadata, such as authors, definitions, year of publication, type of publication, participants, referents, parts of the body (finger, hand, wrist, arm, head, leg, foot, and whole body), and number of proposed gestures; and a microscopic analysis of 267 papers analyzing and classifying the referents, the final gestures coming out of the consensus set, their representation, and their characterization. An assessment of the credibility of these studies is also proposed as a measure for categorizing their strength of impact. GEStory acts as an interactive design space for gestural interaction to inform researchers and practitioners about existing preferred gestures in different contexts of use, and enables the identification of gaps and opportunities for new studies.

Keywords
Human–computer interaction, Gesture interaction, Gesture Elicitation Study, Gesture Preferences.




1. Context of the problem
Gesture-based interaction has come of age, e.g., in terms of user-defined
gestures [1] made available through numerous Gesture Elicitation Studies (GES) [2], but also
in terms of algorithms for gesture recognition [3]. Yet, identifying which gestures would be
the most appropriate for a given task in a given context of use remains a challenging task
for researchers and practitioners, such as designers and developers. Determining these most
appropriate gestures brings us back to the question of identifying the gestures according to
the end user, the interactive task, the gesture capture device, and the physical environment.
Fortunately, several GES address this question, but too often for a context of use that is specified
either too fuzzily or too specifically. These GES are heterogeneous, sometimes inconsistent or

EICS’22: The 14th ACM SIGCHI PhD Workshop on Engineering Interactive Computing Systems, June 21–24, 2022, EICS,
Sophia Antipolis, France
santiago.villarreal@uclouvain.be (S. Villarreal-Narvaez)
ORCID: 0000-0001-7195-1637 (S. Villarreal-Narvaez)
                                       © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org
overlapping, and rarely incremental or complementary.
   Let us consider an example showing how a GES is conducted. Suppose a designer wants to develop a gestural user interface for a new smartwatch prototype that allows users to control five actions (materialized as referents) in the environment of a smart car: (t1) turn on the radio, (t2) turn on the air conditioning, (t3) answer a call, (t4) turn off the radio, and (t5) turn off the air conditioning. For the study, a group of potential users 𝑃 is brought together, perhaps 20 in number, i.e., |𝑃| = 20. Each participant is shown the context existing before and after performing each action: e.g., the radio is off (before) and the radio is on (after) for referent t1. The participant is asked to propose a gesture command using the smartwatch to execute the referred action. At the end of the GES, the designer, who has compiled a set of 100 gesture proposals = 5 (referents) × 20 (proposals), looks at the set of gestures elicited for each referent to understand which gestures are in agreement among participants. If the agreement is substantial and the sample of participants is representative enough of the target population that the gesture-based interface is targeting, then the designer expects that the proposed gestures are intuitive and that other users, who were not part of the GES, are likely to guess, learn easily, and hopefully prefer the same types of gestures. Although these GES are numerous [2], they do not answer all design questions as they do not cover many contexts of use. Consequently, the following research question emerges from this paradoxical situation:
   RQ: How can we provide designers and developers with support to identify the most appropriate gestures for gesture-based user interfaces depending on their context of use?
   The expected contributions of our doctoral thesis are as follows:
   1. Two SLRs related to gestural interaction: one on GES metadata and one on gesture
      characteristics, along with a classification and discussion. An on-line Zotero collection of
      relevant papers and their classification.
   2. New methodological aspects for conducting a GES and new GES [4, 5, 6, 7, 8, 9].
   3. GEStory, an on-line web application serving as a repository for gesture elicitation studies
      and their results.
   4. A validation of GEStory based on multiple queries answering the initial research question.
   5. A transition between GEStory and GESistant. If the result of a search in GEStory is zero
      gestures, the researcher could export these criteria to GESistant to conduct a new GES
      distributed in time and space.


2. Related Work
Regarding GES surveys and reviews, there are only three such studies: Vuletic et al. [10]
conducted a systematic literature review (SLR) on hand gestures for user interfaces, but without
having GES as the focus of their investigation; Vogiatzidakis and Koutsabasis first performed a
GES review [11] for mid-air interaction, then an SLR [12] with a corpus of 𝑁=47 papers. These
studies are limited in scope in terms of gestures covered and in terms of investigation
methods.
  Regarding software tools, several tools pursue goals different from ours.
GestMan [13] supports practitioners in creating and managing 2D stroke gesture datasets, but
does not exploit them in GES. GECKo [14] and, more extensively, Omnis Praedictio [15] support
designers in evaluating important characteristics of stroke gestures, such as their consistency and
production time, respectively. GestureMap [16] provides visual analytics of
Kinect-based full-body gestures to analyze their similarities and differences. GestuRING [17]
compiles an inventory of all ring gestures [18] found in the literature. These software tools
do not address the above research question directly: they are not aimed at covering the whole
body of gestural knowledge contained in the available GES, and they are aimed at supporting
specific design questions other than finding the gestures for a given context of use.


3. Research Methodology
Since our research question is not fully addressed, our initial idea consists of compiling and
consolidating the results of all existing GES into a single gesture repository that can be queried
to address the research question. This repository becomes a gesture atlas where each gesture is
characterized according to several dimensions: user, task, device, environment, human limbs,
etc. For this purpose, our research method is structured in five connected stages (Fig. 1):


[Figure 1 depicts the research methodology as a pipeline starting from existing GES: SLR of GES metadata (1,819 papers identified through database searching + 430 through other sources − 311 duplicates = 1,938 papers identified; 1,938 screened − 1,640 irrelevant = 298 after screening; − 82 excluded at eligibility = 216 included), SLR of gesture characteristics (1,394 papers through database searching + 422 through other sources − 301 duplicates = 1,515 identified; − 1,240 irrelevant = 275 after screening; − 8 excluded = 267 included), new GES and methods (new contexts of use: different users and tasks, different devices and sensors, different environments; new methodological approaches: credibility index, referentless elicitation, alternative measures, modality transfer), GEStory (domain model, requirements, design, development), and validation (querying GEStory by user type and characteristics, task type, device and sensor type, environment, human limb, and combinations thereof).]
Figure 1: Research Methodology of GEStory.


   SLR of GES metadata. We conducted a first SLR based on the metadata describing each GES [2],
such as the year of publication, venue, number of participants, number of referents, experimental
setup, number of proposed gestures, and number of consensus gestures. Our approach was
inspired by the four-phase SLR method (Identification, Screening, Eligibility, and Inclusion)
proposed by Liberati et al. [19], and the flow was represented in a PRISMA diagram. For
identification, the query Q = ("Gesture" AND "Elicitation" AND "Study") was performed on
five major Computer Science digital libraries (i.e., ACM DL, IEEE Xplore, Elsevier ScienceDirect,
Elsevier Ei Compendex, and SpringerLink) and other sources (i.e., DBLP CompleteSearch and
Google Scholar) to identify 2,249 candidates, from which 311 duplicates were eliminated. For
screening, we retained only those papers that explicitly introduced a GES for UI design, discussed
a GES, or explicitly used a method to examine a GES, thereby leaving 298 papers. For eligibility,
82 papers that did not match three conditions were excluded, leaving a final corpus of 𝑁=216 studies
for our examination. For inclusion, we verified quantitative and qualitative aspects of our
corpus of papers, stored and maintained as an on-line collection with Zotero, a multi-platform
bibliography management tool.
   SLR of gesture characteristics. Since the first SLR focused on GES metadata only, we ran a
second SLR to provide an in-depth analysis of the gestures elicited and agreed upon in GES, after
updating the collection, which was stopped at the beginning of 2021. We followed the same methodology
with the query Q = ("Gesture" AND (guess* OR elicit*) AND (study OR experience)): 1,816 papers
were first identified, 1,515 papers were screened after duplicates were removed, 275 papers became
eligible, and 267 papers were finally included. Based on these SLRs, we obtained the concepts
and terminology used in GES studies, discussed some examples of representative
GES, and provided data and calculations such as the mean, maximum, and
minimum of the number of participants, referents, collected gestures, final gestures, etc.
Moreover, consensus gestures are classified according to several dimensions: a taxonomy of
referents based on task classification [20], a classification of 3D gestures [21], a classification of
gestures in Augmented Reality [22], and another one for the whole body to control a humanoid
robot [23]. Bernsen's theory of multimodality will be used to classify the modalities, and the
criteria of McAweeney et al. [24] will be expanded to classify gesture representations (e.g., images,
animations, videos).
   New GES and methods. To complement our repository, our search conditions focus on
new contexts, such as different users and tasks, different devices and sensors, and different
environments. We identified some areas not covered by existing GES and subsequently conducted
some studies, such as for head and shoulders gestures [9], zenithal gestures [5], radar-based
gestures [6, 7], facial gestures [4], and squeeze gestures [8]. We are also exploring new
methodological approaches, such as GES without any explicit referent, to discover more proposed
gestures than with legacy bias [25] or by transformation [26].
   Development of GEStory [27]. Based on the results of the two aforementioned SLRs, a
domain model has been defined (Fig. 2) to create the database of GEStory, an on-line gesture
atlas for querying GES on multiple criteria. GEStory is presented as an interactive design space,
such as the one for wearable devices [28], where various design dimensions can be explored. In
particular, selecting any particular human limb should result in selecting the GES satisfying this
criterion (see a prototype in Fig. 3). GEStory, beyond making gestures accessible, should also
provide some guidelines for selecting and designing gestures based on its atlas [29] and, possibly,
automate their evaluation based on guidelines [30].
   The framework used is Vue.js. A Vue single-file component is divided into three parts (a minimal sketch is given after the list):
   1. The HTML structure of the page or element to display,
   2. The JavaScript methods and state variables used by the component. This part also allows
      using the Listener design pattern to push changes to other components,
   3. The CSS part that corresponds to the style of the component.
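   As a minimal illustration, a single-file component could look as follows. This is a hypothetical sketch and not the actual GEStory source; the component, variable, and event names are invented:

    <template>
      <!-- 1. HTML structure of the element to display -->
      <div class="gesture-count">{{ gestures.length }} gestures loaded</div>
    </template>

    <script>
    // 2. JavaScript state and methods of the component
    export default {
      name: 'GestureCounter',
      data() {
        return { gestures: [] };               // state variable
      },
      methods: {
        setGestures(list) {
          this.gestures = list;
          this.$emit('gestures-changed', list); // push the change to listening components
        },
      },
    };
    </script>

    <style scoped>
    /* 3. CSS style of the component */
    .gesture-count { font-weight: bold; }
    </style>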
GEStory contains a total of 3,902 gestures from the 267 gesture elicitation studies (GES) obtained in the
two SLRs; the information about these gestures is public and available in the data.json file.
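   As an illustration, one gesture entry in data.json could be structured as follows. This is a hypothetical sketch: the field names are assumptions inspired by the domain model of Fig. 2 and the values are illustrative only, not the actual file format:

    {
      "GestureId": 1042,
      "GestureName": "Move hand UP",
      "GestureType": "dynamic",
      "GestureNature": "iconic",
      "BodyPart": "arm",
      "Referent": "Turn on the radio",
      "StudyTitle": "Example GES on in-car gestures",
      "Year": 2015,
      "Agreement": 0.32,
      "StrengthOfEvidence": 0.54
    }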
   The architecture of the main page is divided into three parts (see Fig. 3) which communicate
with each other. These three components are all registered as Vue components (Vue.component):

    • The bodyMap component takes care, based on data extracted from the data.json
      file, of displaying points representing gestures on the body map. It has attributes
      such as the list of body regions (bodyRegions). As for its methods, it mainly has methods
[Figure 2 shows the UML class diagram of the domain model: the classes BodyPart, Device, User, Referent, FunctionSubType, FunctionType, Gesture (with attributes such as GestureName, GestureRepresentation, GestureImage, GestureType, GestureForm, GestureNature, GestureSymmetry, GestureLocale, and Agreement), GEStudy (with StudyTitle, Year, Authors, StrengthOfEvidence, and Replication/Generalization/Repurposing links), Environment, and Process, together with their relationships.]

Figure 2: UML class diagram of the domain model.


      related to the display of points on the body map (getPositionForTypeAndItem, drawLineR,
      changeBodySelection for example).
    • The DataFilter component takes care of displaying the filters as well as the list
      of gestures extracted from the same data.json file. Each gesture in the list shows the name
      of the gesture, the name of the study, and the credibility percentage. This component
      communicates to bodyMap the list of gestures to be displayed according to the filters. It
      has all the attributes related to the filters (the active ones and the total list of filters). As far as
      methods are concerned, these mainly adapt its display according
      to the display of the other two components (if these are closed, the size
      of DataFilter can be increased, which is achieved with returnClass and mainClass).
    • The last component, ItemDisplay, takes care of displaying the gesture that has been
      selected on the bodyMap or in the list of gestures. It therefore depends on these two
      components to obtain the gesture selected by the user. This element displays
      the advanced details of the chosen gesture. Its main attribute is
      the user-selected gesture (item). It shows gesture information such as its name, the name of
      the originating GES, the authors, the URL of the study, the year of publication, the credibility, etc. (see Fig. 3).

Figure 3: GEStory screenshot.

   To provide quantified, peer-reviewed gestures to inform the design of gesture-based user
interfaces, it is important that each stored gesture includes relevant information to become
effective. In some other references, recommended gestures may be based on the personal
opinions of a few experts, provide no reference or empirical evidence to back up their application,
give no indication as to whether a particular gesture represents a consensus of researchers
or a large agreement among participants, and give no information about the relative importance
of individual GES. To this end, a numerical
measure was proposed to quantify the credibility of the consensus gestures offered by a GES. It
reflects the essential criteria for a GES, considering: (a) the length of the study (the number
of pages, e.g., a poster of 4 pages is different from a paper of 25 pages), (b) the expertise of
the authors in gesture research (e.g., how many papers they published on topics related to
gestures, e.g., 7 for an author based on Google Scholar entries), (c) the venue where the GES
was published (for which we use Scimago's journal ranking in terms of Q1/Q2/Q3/Q4/none
categories and the CORE Rankings Portal for conferences in terms of A*/A/B/C/D), (d) the
number of participants involved in the study (e.g., a GES with 5 participants is assumed to have
a lower validity than a GES with more than 30 participants), and (e) the participants' diversity in age,
for which we use the standard deviation of participants' ages (e.g., SD of ages = 5) when reported in the GES. It combines
(a), (b), (c), (d), and (e) in one single Strength of evidence measure, defined as follows:

                         SE(GES) = [ (a/A)² + (b/B)² + (c/C)² + (d/D)² + (e/E)² ] / 5                    (1)
where:
    • A = the typical limit of page numbers for a full paper at the major HCI conferences (e.g., 10
      pages), so A=10. If a >= 10, then a/A is bounded to 1, so that each component of the sum
      above is between 0 and 1.
Figure 4: GEStory, Sankey diagram of the relationship between gestures and referents


    • B = the total number of articles published by all authors of the GES study, and b is the
      total number of gesture articles published by all authors of the GES study.
    • C = 5 and we encode Q1 = 5, Q2 = 4, Q3 = 3, Q4 = 2, and other = 1 (low strength of
      evidence based on the estimated quality of peer review).
    • D = 20 (the typical number of participants in GES studies; this value should result from
      the analysis of the appendix where the number of participants is discussed). If d >= 20,
      then d/D is limited to 1, so that each component of the sum above is between 0 and 1.
    • E = the standard deviation of participants' ages, extracted or computed, e.g., E=4.15. A computational sketch of this measure is given below.
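   As an illustration, the measure of Eq. (1) can be computed as in the following sketch. It assumes that every ratio is capped at 1, as stated above for a/A and d/D; the constants follow the values given in the text, while the function and parameter names and the example values are invented:

    // Strength of evidence of a GES, following Eq. (1); each term is kept in [0, 1].
    function strengthOfEvidence({ pages, gesturePapers, totalPapers,
                                  venueScore, participants, sdAges }) {
      const A = 10;    // typical page limit of a full paper at major HCI conferences
      const C = 5;     // venue encoded as Q1=5, Q2=4, Q3=3, Q4=2, other=1
      const D = 20;    // typical number of participants in a GES
      const E = 4.15;  // reference standard deviation of participants' ages
      const terms = [
        Math.min(pages / A, 1),                    // a/A
        Math.min(gesturePapers / totalPapers, 1),  // b/B
        Math.min(venueScore / C, 1),               // c/C
        Math.min(participants / D, 1),             // d/D
        Math.min(sdAges / E, 1),                   // e/E
      ];
      return terms.reduce((sum, t) => sum + t * t, 0) / terms.length;
    }

    // Hypothetical example: a 6-page GES in a Q2 venue, 12 participants,
    // SD of ages 3.2, and authors with 7 gesture papers out of 50 in total.
    console.log(strengthOfEvidence({ pages: 6, gesturePapers: 7, totalPapers: 50,
                                     venueScore: 4, participants: 12, sdAges: 3.2 })); // ≈ 0.39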

   For example, Fig. 3 displays a list of gestures coming from different GES: the "Yes gesture" has a
strength of evidence of 𝑆𝐸=0.54, and "Bend up bed down" has a strength of evidence of 𝑆𝐸=0.6.
   Through my participation in the Doctoral Consortium at EICS '22, we were recommended to
foster qualitative data and to propose a legacy classification of gesture names that are the same but
differ across studies. Inspired by this, we incorporated a Sankey diagram, using the "chartjs-chart-
sankey" library, to show the different relationships that exist between gestures and referents
(see Fig. 4).
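   As an illustration, such a diagram can be fed with one flow per (gesture, referent) pair, weighted by the number of participants who preferred that gesture for that referent. The following minimal sketch assumes Chart.js with the chartjs-chart-sankey controller registered; the gesture and referent labels and flow values are invented:

    import { Chart } from 'chart.js';
    import { SankeyController, Flow } from 'chartjs-chart-sankey';

    Chart.register(SankeyController, Flow);

    // Each item is one link of the Sankey diagram: gesture -> referent, weighted by preference.
    const flows = [
      { from: 'Swipe right (dynamic)', to: 'Turn on the radio', flow: 18 },
      { from: 'Tap (static)',          to: 'Answer a call',     flow: 12 },
      { from: 'Circle (iconic)',       to: 'Turn on the air conditioning', flow: 7 },
    ];

    new Chart(document.getElementById('gestureReferentSankey'), {
      type: 'sankey',
      data: { datasets: [{ label: 'Gestures → Referents', data: flows }] },
    });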
   Validation of GEStory. To validate GEStory, we carried out tests with a group of twelve
volunteer participants (7 males, 4 females, 1 not specified, aged from 17 to 79 years). Recordings of
these interviews were made to allow the calculation of success/failure rates per task and the completion
of the UEQ+ questionnaire.
   In order to perform our tests, I drew up a list of 4 actions that the "testers" were required
to perform. I decided on them based on the changes made to the platform. This allows me to
see how users behave with the different navigation tools (the selection menu, the
navigation bar, the "suggest a GES" button, etc.). Below is the list of actions:
   1. Look for an iconic dynamic type gesture from 2018
   2. Submit new gesture data
   3. Look for the ”Move hand UP” from 2015, to be performed with the arm.
   4. Find the information containing the name of the tool on which the prototype was built
  Following the UEQ+ analysis, Fig. 5 shows that the two most important parameters for test
participants are dependability (2.42/3) and efficiency (2.33/3). Looking at the ratings assigned
by the testers to the organization of the GEStory platform, we can see in Fig. 6 that these
two parameters are among the top five rated (with scores of 0.63 and 0.58, respectively).

[Figure 5 reports the importance ratings of the UEQ+ parameters (maximum 3): Attractiveness 0.50, Efficiency 2.33, Trust 0.92, Dependability 2.42, Adaptability 0.33, Usefulness 1.00, Value 1.08, Intuitive use 2.17, Quality of Content 2.17.]

Figure 5: Importance by parameters in UEQ+.


   In Fig. 6, the best ranked parameter is the one relating to the intuitive use of
the GEStory platform (1.04/3). This allows us to link this last criterion with the hypotheses we
developed above. Indeed, we can quite easily say that we have succeeded in sufficiently
reducing the workload of the platform (which seemed very high to us in the initial version
of the project) so that the user can handle this tool without needing very extensive training.
We can also highlight the fact that the layout of the interface is quite suitable for allowing
users to carry out their various tasks. This allows us to draw a parallel with the success
rate of the tasks we asked them to perform, which is, for each task, at least greater than 75%.
Usefulness is the UEQ+ parameter that had the most negative result (−0.98).
If we rely on the given definition, this parameter represents the fact that the use of the product
brings benefits to the user.
   Fig. 7 shows the result of the task success/failure rate. In general, the participants were able to
perform the various tasks that were asked of them. The least successful task was number 1
(9 successes, 2 successes with our help, and 1 failure, see Fig. 7a). During our tests, we were
able to observe that the failure of a task was often explained by a phase, on the part of the
user, of "getting to grips with the platform". Most of the participants took the time to discover the
different search tools (the search engine and the search criteria system) and sometimes did not
directly understand how they work (despite a platform presentation phase performed before
the various tests).
[Figure 6 reports the UEQ+ scale results: Attractiveness 0.10, Efficiency 0.58, Trust 0.63, Dependability 0.63, Adaptability −0.15, Usefulness −0.98, Value 0.13, Intuitive use 1.04, Quality of Content 0.69.]

Figure 6: UEQ+ results.


[Figure 7 contains four panels, (a) Task 1, (b) Task 2, (c) Task 3, and (d) Task 4, each showing the success rate, the success rate with help, and the failure rate for the corresponding task.]

Figure 7: Task success/failure rate.


   By looking at the average resolution time of each task, we can also notice that task No. 1 is
the one that took the longest to complete (which could also be explained by this phase of "getting
started with the tool"). The average resolution times were: task No. 1, 73 seconds; task No.
2, 27 seconds; task No. 3, 61 seconds; task No. 4, 24 seconds.
   These task resolution times also show that the tasks requiring the use of the search tools
present on the site (tasks No. 1 and No. 3) required longer resolution times than the two others.
As a reminder, the first task was: look for an iconic dynamic type gesture from 2018, and the
third: look for the "Move hand UP" from 2015, to be performed with the arm.
   Transition to GESistant. A transition should be ensured between GEStory and GESistant,
a software tool aimed at assisting the experimenter in conducting a GES in a way that is distributed in
time (stages are asynchronous) and space (participants contribute remotely, self-assisted,
and without any constraint), structured into six stages: define a study, conduct a study, classify
gestures, measure gestures, discuss gestures, and export gestures. When a query in GEStory
does not lead to a compelling set of appropriate gestures, the parameters of the query should be
transferred to the "define a study" stage in GESistant in order to match the suggested GES
criteria.
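   For illustration, such a transfer could be as simple as serializing the criteria of the unsuccessful query into a study-definition payload. The following is a hypothetical sketch: GESistant's actual interface is still under development, and the field names and example values are invented:

    // Turn the active GEStory filters into a seed for GESistant's "define a study" stage.
    function toStudyDefinition(filters) {
      return {
        userType:    filters.userType,      // e.g., 'drivers'
        taskType:    filters.taskType,      // e.g., 'infotainment control'
        device:      filters.device,        // e.g., 'smartwatch'
        environment: filters.environment,   // e.g., 'smart car'
        bodyParts:   filters.bodyParts,     // e.g., ['hand', 'arm']
        referents:   [],                    // to be completed later, in GESistant
      };
    }

    // Example: a query that returned zero gestures becomes the starting point of a new GES.
    const studySeed = toStudyDefinition({
      userType: 'drivers', taskType: 'infotainment control',
      device: 'smartwatch', environment: 'smart car', bodyParts: ['hand'],
    });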


4. Conclusions
Based on the research methodology (Fig. 1), the first SLR has been completed and its results
are published [2]. The second SLR of gesture characteristics has been completed in terms of
research; its data are currently stored as on-line spreadsheets and its results are under analysis.
Other GES have been conducted and published. Several others have been conducted, but not
yet analyzed, such as a GES with a haptic device vs. without it.
   The GEStory prototype was created based on its domain model (Fig. 2) whose classes
include Body part, Device, Gesture, Environment, Participant, Study, Referent Classification
(FunctionSubType and FunctionType). The relationships between device and gesture, body
part and gesture, gesture and referent, participant and study, study and gesture, gesture and
environment are also considered. Currently, GEStory [27] is considered an interactive design
space [28], whose classes are dimensions of investigation. Checking or unchecking the values of
each dimension will automatically result in the display of GES satisfying these criteria and their
consensus gestures (Fig. 3). Each gesture is displayed according to a textual representation, a
textual description, a picture, and a JSON-based formal definition based on an Extended Backus-Naur
Form (EBNF) grammar with transformations between them [26]. By means of a Sankey diagram
(Fig. 4), the relationship between the classified gestures and the classified referents is shown;
this diagram shows the distribution of users' preferences for a gesture to perform a given
referent.
   A validation of the current version of GEStory was carried out in which 12 participants
evaluated ”Attractiveness”, ”Efficiency”, ”Trust”, ”Dependability”, ”Adaptability”, ”Usefulness”,
”Value”, ”Intuitive use” and ”Quality of content” with the UEQ+ questionnaire (Fig. 5 and Fig. 6)
and performed 4 tasks for which the time to perform and the success/failure rate were calculated
(Fig. 7).
   The transition from GEStory to GESistant is under development; it will allow the user to export
GEStory information (e.g., GES results for replication, configuration parameters for a new
GES, etc.), which will give experimenters a preloaded configuration for a new GES. GESistant will allow
the study to be carried out remotely and asynchronously.


Acknowledgments
The author of this paper acknowledges the support of the SIGCHI Gary Marsden Travel Awards
2022 for the trip and AFIHM for the invitation to the Doctoral Consortium at
EICS 2022.
References
 [1] J. O. Wobbrock, M. R. Morris, A. D. Wilson, User-defined gestures for surface computing,
     in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems,
     CHI ’09, ACM, New York, NY, USA, 2009, pp. 1083–1092. URL: http://doi.acm.org/10.1145/
     1518701.1518866. doi:10.1145/1518701.1518866 .
 [2] S. Villarreal-Narvaez, J. Vanderdonckt, R.-D. Vatavu, J. O. Wobbrock, A systematic review
     of gesture elicitation studies: What can we learn from 216 studies?, in: Proceedings of the
     2020 ACM Designing Interactive Systems Conference, DIS ’20, Association for Computing
     Machinery, New York, NY, USA, 2020, pp. 855–872. URL: https://doi.org/10.1145/3357236.
     3395511. doi:10.1145/3357236.3395511 .
 [3] N. Magrofuoco, P. Roselli, J. Vanderdonckt, Two-dimensional stroke gesture recognition:
     A survey, ACM Comput. Surv. 54 (2021). URL: https://doi.org/10.1145/3465400. doi:10.
     1145/3465400 .
 [4] J. L. Pérez-Medina, S. Villarreal, J. Vanderdonckt, A gesture elicitation study of nose-based
     gestures, Sensors 20 (2020) 7118. URL: https://doi.org/10.3390/s20247118. doi:10.3390/
     s20247118 .
 [5] F. Martínez-Ruiz., S. Villarreal-Narvaez., Eliciting user-defined zenithal gestures for privacy
     preferences, in: Proceedings of the 16th International Joint Conference on Computer
     Vision, Imaging and Computer Graphics Theory and Applications - HUCAPP, INSTICC,
     SciTePress, Vienna, 2021, pp. 205–213. doi:10.5220/0010259802050213 .
 [6] N. Magrofuoco, J. L. Pérez-Medina, P. Roselli, J. Vanderdonckt, S. Villarreal, Eliciting
     contact-based and contactless gestures with radar-based sensors, IEEE Access 7 (2019)
     176982–176997. URL: https://doi.org/10.1109/ACCESS.2019.2951349. doi:10.1109/ACCESS.
     2019.2951349 .
 [7] S. Villarreal-Narvaez, A.-I. Şiean, A. Sluÿters, R.-D. Vatavu, J. Vanderdonckt, Informing
     future gesture elicitation studies for interactive applications that use radar sensing, in:
     Proceedings of the 2022 International Conference on Advanced Visual Interfaces, AVI
     2022, Association for Computing Machinery, New York, NY, USA, 2022, pp. 1–3. URL:
     https://doi.org/10.1145/3531073.3534475. doi:10.1145/3531073.3534475 .
 [8] S. Villarreal-Narvaez, A. Siean, A. Sluÿters, J. Vanderdonckt, E. M. Luzayisu, Theoretically
     defined vs. user-defined squeeze gestures, in: ISS ’22: Interactive Surfaces and Spaces
     Conference, November 20–23, 2022, Wellington, New Zealand, ACM, 2022, p. 30. URL:
     https://doi.org/10.xxxx/xxxxxxx.xxxxxxx. doi:10.xxxx/xxxxxxx.xxxxxxx .
 [9] J. Vanderdonckt, N. Magrofuoco, S. Kieffer, J. Pérez, Y. Rase, P. Roselli, S. Villarreal, Head
     and shoulders gestures: Exploring user-defined gestures with upper body, in: A. Marcus,
     W. Wang (Eds.), Design, User Experience, and Usability. User Experience in Advanced
     Technological Environments, Springer International Publishing, Cham, 2019, pp. 192–213.
[10] T. Vuletic, A. Duffy, L. Hay, C. McTeague, G. Campbell, M. Grealy, Systematic literature
     review of hand gestures used in human computer interaction interfaces, International Jour-
     nal of Human-Computer Studies 129 (2019) 74 – 94. URL: http://www.sciencedirect.com/
     science/article/pii/S1071581918305676. doi:https://doi.org/10.1016/j.ijhcs.2019.
     03.011 .
[11] P. Vogiatzidakis, P. Koutsabasis, Gesture Elicitation Studies for Mid-Air Interaction: A
     Review, Multimodal Technologies and Interaction 2 (2018) 65–. URL: https://www.mdpi.
     com/2414-4088/2/4/65. doi:https://doi.org/10.3390/mti2040065 .
[12] P. Koutsabasis, P. Vogiatzidakis, Empirical research in mid-air interaction: A system-
     atic review, International Journal of Human-Computer Interaction (2019) 1–22. URL:
     https://doi.org/10.1080/10447318.2019.1572352. doi:10.1080/10447318.2019.1572352 .
     arXiv:https://doi.org/10.1080/10447318.2019.1572352 .
[13] N. Magrofuoco, P. Roselli, J. Vanderdonckt, J. L. Pérez-Medina, R.-D. Vatavu, Gestman:
     A cloud-based tool for stroke-gesture datasets, in: Proceedings of the ACM SIGCHI
     Symposium on Engineering Interactive Computing Systems, EICS ’19, Association for
     Computing Machinery, New York, NY, USA, 2019, pp. 1–6. URL: https://doi.org/10.1145/
     3319499.3328227. doi:10.1145/3319499.3328227 .
[14] L. Anthony, R.-D. Vatavu, J. O. Wobbrock, Understanding the consistency of users’ pen
     and finger stroke gesture articulation, in: Proceedings of Graphics Interface 2013, GI ’13,
     Canadian Information Processing Society, CAN, 2013, p. 87–94.
[15] L. A. Leiva, R.-D. Vatavu, D. Martín-Albo, R. Plamondon, Omnis prædictio: Estimating
     the full spectrum of human performance with stroke gestures, International Journal of
     Human-Computer Studies 142 (2020) 102466. URL: https://www.sciencedirect.com/science/
     article/pii/S1071581920300689. doi:https://doi.org/10.1016/j.ijhcs.2020.102466 .
[16] H. Dang, D. Buschek, Gesturemap: Supporting visual analytics and quantitative analysis
     of motion elicitation data by learning 2d embeddings, in: Proceedings of the 2021 CHI
     Conference on Human Factors in Computing Systems, CHI ’21, Association for Computing
     Machinery, New York, NY, USA, 2021, pp. 1–12. URL: https://doi.org/10.1145/3411764.
     3445765. doi:10.1145/3411764.3445765 .
[17] R.-D. Vatavu, L.-B. Bilius, GestuRING: A Web-Based Tool for Designing Gesture Input
     with Rings, Ring-Like, and Ring-Ready Devices, Association for Computing Machinery,
     New York, NY, USA, 2021, p. 710–723. URL: https://doi.org/10.1145/3472749.3474780.
[18] B.-F. Gheran, J. Vanderdonckt, R.-D. Vatavu, Gestures for smart rings: Empirical results,
     insights, and design implications, in: Proceedings of the 2018 Designing Interactive Systems
     Conference, DIS ’18, Association for Computing Machinery, New York, NY, USA, 2018, p.
     623–635. URL: https://doi.org/10.1145/3196709.3196741. doi:10.1145/3196709.3196741 .
[19] A. Liberati, D. G. Altman, J. Tetzlaff, C. Mulrow, P. C. Gøtzsche, J. P. A. Ioannidis, M. Clarke,
     P. J. Devereaux, J. Kleijnen, D. Moher, The prisma statement for reporting systematic
     reviews and meta-analyses of studies that evaluate health care interventions: explana-
     tion and elaboration, PLoS Medicine 6 (2009) 1–22. URL: https://www.ncbi.nlm.nih.gov/
     pubmed/19621070. doi:10.1371/journal.pmed.1000100 .
[20] D. R. Lenorovitz, M. D. Phillips, R. Ardrey, G. V. Kloster, A taxonomic approach to
     characterizing human-computer interaction., in: G. Salvendy (Ed.), Human-Computer
     Interaction., Elsevier Science Publishers, Amsterdam, 1984, pp. 111–116.
[21] R. Aigner, D. Wigdor, H. Benko, M. Haller, D. Lindbauer, A. Ion, S. Zhao, J. T.
     K. V. Koh, Understanding Mid-Air Hand Gestures: A Study of Human Preferences in
     Usage of Gesture Types for HCI, Technical Report MSR-TR-2012-111, Microsoft Re-
     search, 2012. URL: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/
     02/GesturesTR-20121107-RoA.pdf.
[22] T. Piumsomboon, A. Clark, M. Billinghurst, A. Cockburn, User-defined gestures for
     augmented reality, in: CHI ’13 Extended Abstracts on Human Factors in Computing
     Systems, CHI EA ’13, ACM, New York, NY, USA, 2013, pp. 955–960. URL: http://doi.acm.
     org/10.1145/2468356.2468527. doi:10.1145/2468356.2468527 .
[23] M. Obaid, M. Häring, F. Kistler, R. Bühling, E. André, User-defined body gestures for
     navigational control of a humanoid robot, in: S. S. Ge, O. Khatib, J.-J. Cabibihan, R. Simmons,
     M.-A. Williams (Eds.), Social Robotics, Lecture Notes in Computer Science, Springer,
     Berlin, Heidelberg, 2012, pp. 367–377. URL: https://doi.org/10.1007/978-3-642-34103-8_37.
     doi:10.1007/978- 3- 642- 34103- 8_37 .
[24] E. McAweeney, H. Zhang, M. Nebeling, User-driven design principles for gesture repre-
     sentations, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing
     Systems, CHI ’18, Association for Computing Machinery, New York, NY, USA, 2018, pp.
     1–13. URL: https://doi.org/10.1145/3173574.3174121. doi:10.1145/3173574.3174121 .
[25] M. R. Morris, A. Danielescu, S. Drucker, D. Fisher, B. Lee, m. c. schraefel, J. O. Wobbrock,
     Reducing legacy bias in gesture elicitation studies, Interactions 21 (2014) 40–45. URL:
     https://doi.org/10.1145/2591689. doi:10.1145/2591689 .
[26] N. Aquino, J. Vanderdonckt, O. Pastor, Transformation templates: Adding flexibility to
     model-driven engineering of user interfaces, in: Proceedings of the 2010 ACM Symposium
     on Applied Computing, SAC ’10, Association for Computing Machinery, New York, NY,
     USA, 2010, p. 1195–1202. URL: https://doi.org/10.1145/1774088.1774340. doi:10.1145/
     1774088.1774340 .
[27] B.-F. Gheran, S. Villarreal-Narvaez, R.-D. Vatavu, J. Vanderdonckt, Repliges and gestory:
     Visual tools for systematizing and consolidating knowledge on user-defined gestures, in:
     Proceedings of the 2022 International Conference on Advanced Visual Interfaces, AVI
     2022, Association for Computing Machinery, New York, NY, USA, 2022, pp. 1–9. URL:
     https://doi.org/10.1145/3531073.3531112. doi:10.1145/3531073.3531112 .
[28] F. Heller, K. Todi, K. Luyten, An Interactive Design Space for Wearable Displays, Association
     for Computing Machinery, New York, NY, USA, 2021, p. 14. URL: https://doi.org/10.1145/
     3447526.3472034.
[29] J. Vanderdonckt, Accessing guidelines information with sierra, in: Proc. of IFIP TC13 Int.
     Conf. on Human-Computer Interaction, INTERACT ’95, Chapman & Hall, London, 1995,
     pp. 311–316.
[30] A. Beirekdar, J. Vanderdonckt, M. Noirhomme-Fraiture, A Framework and a Language
     for Usability Automatic Evaluation of Web Sites by Static Analysis of HTML Source
     Code, Springer Netherlands, Dordrecht, 2002, pp. 337–348. URL: https://doi.org/10.1007/
     978-94-010-0421-3_29. doi:10.1007/978- 94- 010- 0421- 3_29 .