Mid-air Gesture Recognition by Ultra-Wide Band
Radar Echoes
Arthur Sluÿters¹

¹ Louvain Research Institute in Management and Organizations, Université catholique de Louvain, Place des Doyens 1,
Louvain-la-Neuve, 1348, Belgium

EICS ’22: Engineering Interactive Computing Systems conference, June 21–24, 2022, Sophia Antipolis, France
Email: arthur.sluyters@uclouvain.be (A. Sluÿters)
ORCID: 0000-0003-0804-0106 (A. Sluÿters)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract
Microwave radar sensors for human-computer interaction offer several advantages over wearable
and image-based sensors, such as privacy preservation, high reliability regardless of the ambient and
lighting conditions, and a larger field of view. However, the raw signals produced by such radars are
high-dimensional and very complex to process and interpret for gesture recognition. For these reasons,
machine learning techniques have mainly been used for gesture recognition, but they require a significant
number of gesture templates for training and calibration that are specific to each radar. To address these
challenges in the context of mid-air gesture interaction, we introduce a data processing pipeline for hand
gesture recognition adopting a model-based approach that combines full-wave electromagnetic modelling
and inversion. Thanks to this model, gesture recognition is reduced to handling two dimensions: the
hand-radar distance and the relative dielectric permittivity, which depends on the hand only (e.g., size,
surface, electric properties, orientation). We are developing a software environment that accommodates
the significant stages of our pipeline towards final gesture recognition. We already tested it on a dataset
of 16 gesture classes with 5 templates per class recorded with the Walabot, a lightweight, off-the-shelf
array radar. We are now studying whether user-defined radar gestures resulting from gesture elicitation
studies can be properly recognized by our gesture recognition engine.

Keywords
Gesture-based interfaces, Mid-air gestural interaction, Radar-based interaction




1. Context of the Problem
Two/three-dimensional (2D/3D) gesture-based User Interfaces (UIs) [1] promise a natural and
intuitive interaction [2] as they rely on movements performed by the human body, which are
assumed to be more natural and easier to remember than artificially-determined commands.
Gesture-based UIs typically fall into three categories depending on the number of gesture
dimensions and the sensor used to capture and recognize the gesture:

                  • 2D (nearly-)touch-based. Gestures are performed on a sensitive surface, such as a trackpad,
                     a touchscreen, or a touchable surface (e.g., in Gambit [3]), which constrains the movement
                    to a series of 2D temporally-sequenced points (𝑥𝑖 , 𝑦𝑖 , 𝑡). Gestures performed on a spatial
                    object, such as a tangible object, remain in this category even if the object is spatially
                     moving. These gestures typically require a contact-based sensor, or a close-by sensor
                     when the gesture stays close to the surface without touching it, such as for a grazing
                     gesture. Touch-based gestures can be augmented with a third dimension, like pressure,
                     contact surface, or temperature.
Figure 1: Research methodology for radar-based interaction. The figure depicts four activities: gesture
acquisition with radar (producing radar gesture datasets), a systematic literature review of radar-based
gestures and recognition techniques, a gesture elicitation study with radar (e.g., eliciting a gesture for the
referent “Turn TV on” with a Walabot radar or a horn radar antenna), and the radar-based gesture recognition
environment, whose pipeline chains (1) raw data capture, (2) FFT if time-domain radar, (3) antenna effects
removal, (4) background subtraction, (5) IFFT, (6) time gating, (7) error function with inversion (optimization
algorithm and EM forward model), (8) filtering, and (9) the gesture recognition framework. A plot of gesture
recognition accuracy [%] against gesture agreement rate AR(r) locates the target area where both are high.

    • 3D wearable. Gestures are captured by devices worn by the user, such as smartwatches,
      smart gloves, ring devices, or even smartphones. These devices often rely on accelerom-
      eters, gyroscopes, and magnetometers, usually combined into a single IMU (Inertial
      Measurement Unit), to capture the orientation and acceleration of the device in 3D space.
      Other wearable devices capture motion by measuring the activity of muscles using elec-
      tromyography (EMG).
    • 3D non-wearable. Gestures are captured by non-wearable devices, such as vision-based
      sensors or radars. Vision-based sensors are quite popular, and include the Intel RealSense
      cameras, the Microsoft Kinect, the Ultraleap Leap Motion Controller (LMC), and even
      laptop cameras. They rely on computer-vision algorithms for motion tracking. Some of
      these sensors (e.g., the Microsoft Kinect) can capture full-body gestures and poses, while
      others may be limited to some body parts, such as the LMC, which focuses on arm and
      hand gestures.

   Existing vision-based sensors may suffer from problems inherent to image-based process-
ing [4]: sensitivity to ambient conditions, particularly lighting, limited field of view, transient
or permanent vision occlusion, and privacy concerns raised by a visible device observing the
end user. Similarly, wearable sensors are more invasive, as they must be worn by the user, and
may raise hygiene concerns, especially in critical environments such as hospitals. While radars
have their own set of issues (e.g., radar signals can be difficult to interpret), they do not suffer
from the same problems as vision-based and wearable sensors, thus making them a potential
alternative to these sensors for some applications in specific environments.
   To address the aforementioned shortcomings and challenges, we wish to investigate whether
radar-based gesture interaction is feasible by: (1) exploring and determining the
space of 3D gestures that are acceptable both from an end-user and a system perspective, (2)
defining and testing a model-based approach transforming the raw signals from the radar into
meaningful information for gesture recognition, (3) developing a software environment for
engineering radar-based gesture interaction. To achieve this objective in a realistic and focused
way, we pose the following working hypotheses: we choose to work primarily with the Walabot
device, a radar that is widely commercially available for fixed or mobile interaction; we focus
on hand gestures, which are considered to be among the richest and most diverse methods of
interaction; we assume that the end user can add or remove gestures dynamically; we start with
simple control gestures.


2. Related Work
Radar-based sensing technologies [4] are now being considered as a relevant alternative to
other types of sensors for human-computer interaction. They have been successfully applied in
multiple domains such as virtual reality [5], activity recognition [6, 7], material recognition [8, 4],
and tangible interaction [9]. Prior works on gesture recognition using radar sensing, such
as [7, 10, 11, 12], typically rely on a fixed, custom-built radar.
   As far as we know, all existing techniques developed for radar-based gesture recognition rely
on machine/deep learning algorithms [11, 12, 13] to cope with the high dimensionality and
the complexity of radar signals. For this reason, these recognizers mainly run in a stationary
environment, not a mobile one. However, the Walabot device, while being commercially
available and deployable in both stationary and mobile environments, introduces a new series
of constraints imposed by its limited number of antennas, its small size, and its relatively narrow
bandwidth. The Google Soli radar chip [13], embedded in an Android smartphone, is able
to recognize 6 classes of radar-based gestures, thus making it mobile and available, but not
customizable: only these 6 gestures are recognized and they cannot be changed.
   In principle, radar sensing techniques allow interaction without any wearable and visible
device since a radar can be operated below a surface such as a desk [6], behind a wall, or behind
materials with relatively low permittivity (e.g., wallpaper, cardboard, and wood) without significantly
affecting the recognition [11]. Radars are also insensitive to weather and
lighting conditions [14]. RadarCat [4] recognizes physical objects and materials placed on top of
the sensor in real-time by extracting signals and classifying them using a random forest classifier.
The authors successfully tested 16 transparent materials with different properties such as thickness,
as well as 10 body parts, thereby demonstrating the real potential of permittivity. Yeo et al. [9] used a radar
in the context of tangible interaction for counting, ordering, and identifying objects involved
in a tangible setup, for tracking their orientation, movement, and between-object distance,
three variables that were originally captured by infrared. Beyond object and material detection
and classification, radars start to be widely used in several domains of application, such as
indoor human sensing with commodity radar [15], human activity recognition [6], human
position estimation [7], and motion detection and classification [11]. Pantomime [11] mounts a
fixed radar on a stand, with a high frequency and continuous bandwidth, and relies on deep
learning, i.e., LSTM and PointNet++, to recognize 21 gestures acquired from 45 participants in
terms of 3D point clouds. Wang et al. [16] require only two antennas in their radar to recognize
2D stroke gestures: the lower dimensionality of these gestures, as opposed to 3D gestures, does
not require more antennas. Short-range radar-based gestures could also be recognized using 3D convolutional
neural networks [17].


3. Research Methodology
Our research methodology consists of performing the following stages (Fig. 1):
   SLR of radar-based gestures and recognition techniques (Fig. 2). Radar-based gesture
recognition is a hot topic that has attracted many recent works. Although techniques for
radar-based gesture recognition are reviewed in general [18] and for hand gestures in particu-
lar [19], we believe that there is still a need to conduct a Systematic Literature Review (SLR) to
systematically determine not only the techniques (e.g., to show that only machine/deep learning
techniques are primarily used as opposed to our model-based approach) but also the gestures
covered (e.g., to provide a compilation of recognized gestures) and the radar systems used for
capturing gestures. Our approach is inspired by the four-phase SLR method (Identification,
Screening, Eligibility and Inclusion) proposed by Liberati et al. [20] and the flow is represented
in a PRISMA diagram.

Figure 2: PRISMA diagram [21] of our systematic literature review: 1515 papers identified through database
searching (ACM DL, IEEE Xplore, MDPI Sensors, SpringerLink, Elsevier ScienceDirect); 60 duplicate papers
removed; 1455 papers screened, of which 1248 irrelevant papers were removed; 207 papers assessed for
eligibility, of which 89 were excluded; 118 papers included for qualitative analysis (classification) and
quantitative analysis (Zotero).


   Gesture elicitation study with radar (bottom left of Fig. 1). To consolidate the gestures
identified in the SLR, a gesture elicitation study [22] is conducted to explore user-defined
gestures relevant to our context of use (i.e., a Walabot transforming hand gestures into control
commands in an IoT environment). The output of this study is an agreement rate among
participants for the proposed gestures. We also envision conducting a similar study for
contexts where only a radar could operate: in dark conditions, behind a door, or in
a reflected setup. Conducting new gesture elicitation studies with radar sensors is desirable
as user gestures may not always be transferable across different devices. For instance, users
may propose different gestures for the same action performed using a radar, a vision-based, or
a wearable sensor.
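
To make this output concrete, the following minimal Python sketch shows how an agreement rate can be
computed for one referent, using the AR(r) formulation of Vatavu and Wobbrock; the function and the
example proposals are purely illustrative and are not data from our study.

    from collections import Counter

    def agreement_rate(proposals):
        # Agreement rate AR(r) for one referent: with P the set of all proposals and
        # P_i the groups of identical proposals, AR(r) = sum |P_i|(|P_i|-1) / (|P|(|P|-1)).
        n = len(proposals)
        if n < 2:
            return 0.0
        groups = Counter(proposals)
        return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

    # Hypothetical proposals of eight participants for one referent (e.g., "Turn TV on")
    proposals = ["swipe right", "swipe right", "push with palm", "swipe right",
                 "wave hand", "push with palm", "swipe right", "wave hand"]
    print(round(agreement_rate(proposals), 3))   # 0.286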
   Gesture acquisition with radar (top left of Fig. 1, Fig. 3, Fig. 4). When participants
propose gestures in an elicitation study, they behave naturally in their proposal, but they do
not necessarily focus on the gestures themselves, thus raising the need to acquire gesture
datasets corresponding to the most agreed-upon gestures according to a rigorous procedure.
The conjunction of the results coming from this activity, along with the SLR and the gesture
elicitation study, leads us to create a series of gesture datasets for radars that will be made
available to the scientific community. The datasets could be recorded in various environments
(e.g., in an office, behind a door) and with different types of sensors, including vision-based
sensors such as the LMC for comparison purposes.




Figure 3: The sixteen gesture types used in our first dataset: (a) open hand, (b) close hand, (c) open, then
close hand, (d) swipe right, (e) swipe left, (f) swipe up, (g) swipe down, (h) push with fist, (i) push with palm,
(j) wave hand, (k) infinity, (l) barrier gesture, (m) extend one finger, (n) extend two fingers, (o) extend three
fingers, (p) extend four fingers.


   Radar-based gesture recognition environment (top right of Fig. 1). We are developing
a software environment for ensuring the accurate recognition of gestures belonging to the
datasets of the previous stage and for engineering a radar-based gestural user interface for
interactive software. This environment is divided into two main parts: (1) a pipeline for radar
data pre-processing that removes noise and clutter from the signal and performs dimension
reduction for use with simple template-matching gesture recognition algorithms, and (2) a
framework for integrating gesture recognition into an application. In this way, developers of
interactive software will find radar-based interaction easier to integrate, since the environment
handles both gesture recognition and the mapping of gestures to commands.

Figure 4: The four additional gesture types used in our extended dataset: (a) knock twice, (b) draw circle,
(c) draw Z, (d) touch nose.
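
To make the second part concrete, the sketch below shows, in Python, a deliberately simple template
matcher operating on the two-dimensional sequences of (hand-radar distance, relative permittivity)
produced by the pre-processing and inversion stages. The DTW-based nearest-neighbour classifier, the
function names, and the synthetic data are illustrative assumptions only and do not correspond to the
implementation reported in [25].

    import numpy as np

    def dtw_distance(a, b):
        # Dynamic time warping between two gesture sequences of shape (T, 2),
        # where each frame holds (hand-radar distance, relative permittivity).
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    def recognize(sample, templates):
        # 1-nearest-neighbour template matching; `templates` maps class name -> sequences.
        best_label, best_dist = None, np.inf
        for label, sequences in templates.items():
            for template in sequences:
                d = dtw_distance(sample, template)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label

    # Illustrative usage with synthetic sequences (columns: distance [m], permittivity [-])
    rng = np.random.default_rng(0)
    push = [np.column_stack([np.linspace(0.6, 0.3, 30), np.full(30, 30.0)])
            + rng.normal(0, 0.01, (30, 2)) for _ in range(5)]
    wave = [np.column_stack([0.5 + 0.1 * np.sin(np.linspace(0, 6, 30)), np.full(30, 30.0)])
            + rng.normal(0, 0.01, (30, 2)) for _ in range(5)]
    templates = {"push with palm": push[:4], "wave hand": wave[:4]}
    print(recognize(push[4], templates))   # expected: "push with palm"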
   The bottom right part of Fig. 1 depicts the range of radar-based gestures in terms of two
dimensions: the 𝑋-axis structures gestures according to their agreement rate obtained in the
gesture elicitation study and the 𝑌-axis structures gestures according to their recognition rate
obtained from our recognition environment. Some user-defined gestures may receive a high
agreement rate among participants, but be poorly recognized: to preserve the experimental
conditions, participants are not told about the restrictions imposed by the technology and the
recognition environment. Conversely, some system-defined gestures may benefit from a high
accuracy resulting from our environment, but receive a low agreement rate among participants.
Therefore, the target area, depicted in dark green on the top right portion of the space, is the
most desirable area where gestures receive both a high agreement rate and a high recognition
accuracy.
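
The sketch below (Python) illustrates how gestures could be positioned in this space and how the target
area could be extracted; all gesture names, agreement rates, accuracies, and thresholds are hypothetical
values chosen for illustration.

    # Hypothetical per-gesture measurements: agreement rate AR(r) and recognition accuracy [%]
    space = {
        "swipe right": (0.45, 96.0),
        "push with palm": (0.35, 88.0),
        "infinity": (0.05, 97.0),
        "barrier gesture": (0.40, 93.0),
    }

    def in_target_area(ar, accuracy, min_ar=0.30, min_accuracy=90.0):
        # A gesture falls in the target area when both its agreement rate and
        # its recognition accuracy exceed the (arbitrary) thresholds.
        return ar >= min_ar and accuracy >= min_accuracy

    selected = [g for g, (ar, acc) in space.items() if in_target_area(ar, acc)]
    print(selected)   # ['swipe right', 'barrier gesture']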


4. Current status
Based on the research methodology (Fig. 1), an SLR has been completed (Fig. 2), in which we
identified 118 relevant papers from a set of 1515 references. Its results are now available, but
not yet published. A new gesture elicitation study has been conducted with the Walabot device
and its results are being analyzed. Two gesture datasets have already been created: one with 16
gesture classes having 5 templates each (Fig. 3) and another one with 20 gesture classes having
50 templates each (Fig. 3 and 4). Based on radar electromagnetic modeling and inversion [23, 24],
a first version of the radar signal processing pipeline has been developed and tested on
the first dataset, already delivering encouraging results [25]. We developed QuantumLeap, a
framework for developing gesture-based applications that has been applied to the Leap Motion
Controller and evaluated with seven developers [26]. LUI, a gesture-based application for
manipulating multimedia content, has been developed with the QuantumLeap framework and
the LMC [27]. In the future, both QuantumLeap and the LUI application should be adapted to
support radar-based gestures.
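
To illustrate what the inversion stage computes, the following Python sketch fits a toy forward model to a
synthetic frequency-domain measurement by an exhaustive search over the two unknowns. The frequency
band, the search grids, and above all the single-reflector forward model are placeholders chosen for
readability; the actual pipeline relies on the full-wave electromagnetic model of [23, 24] and a proper
optimization algorithm.

    import numpy as np

    c = 3e8                                    # speed of light [m/s]
    freqs = np.linspace(6.3e9, 8.0e9, 64)      # illustrative frequency axis [Hz]

    def forward_model(distance, permittivity, f=freqs):
        # Toy stand-in for the EM forward model: a single reflector at `distance` whose
        # reflection coefficient grows with the relative permittivity contrast. The real
        # pipeline uses the full-wave layered-media model of [23, 24] instead.
        gamma = (np.sqrt(permittivity) - 1.0) / (np.sqrt(permittivity) + 1.0)
        return gamma * np.exp(-2j * np.pi * f * (2.0 * distance / c))

    def invert(measured):
        # Inversion stage: search for the (distance, permittivity) pair that minimizes
        # the error function between measured and modelled responses.
        best, best_err = None, np.inf
        for d in np.arange(0.20, 0.80, 0.005):          # candidate hand-radar distances [m]
            for eps in np.arange(2.0, 60.0, 1.0):       # candidate relative permittivities [-]
                err = np.sum(np.abs(measured - forward_model(d, eps)) ** 2)
                if err < best_err:
                    best, best_err = (d, eps), err
        return best

    # Synthetic "measurement" of a hand-like target at 0.42 m with permittivity 30
    rng = np.random.default_rng(1)
    measured = forward_model(0.42, 30.0) + 0.005 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
    print(invert(measured))                             # close to (0.42, 30.0)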
5. Challenges
In this section, we discuss the main challenges of this thesis that have been identified so far:
   Gesture recognition accuracy. The system should be able to recognize gestures with
sufficient accuracy to not hinder user interaction with the system. Inaccurate gesture recognition
could create a lot of frustration for users, as it would regularly force them to perform the same
gesture twice, and could cause the system to incorrectly react to user gestures. Reaching >90%
accuracy, especially across different users and in various contexts, will be a major challenge.
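
One common way to estimate accuracy between different users is leave-one-user-out cross-validation,
sketched below in Python; the dataset layout and the recognize callable are hypothetical and only
illustrate the evaluation protocol.

    import numpy as np

    def leave_one_user_out_accuracy(dataset, recognize):
        # `dataset` maps a user id to a list of (sequence, label) pairs;
        # `recognize(sample, templates)` returns a predicted label, where `templates`
        # maps a label to the training sequences of all remaining users.
        accuracies = []
        for held_out in dataset:
            templates = {}
            for user, samples in dataset.items():
                if user == held_out:
                    continue
                for sequence, label in samples:
                    templates.setdefault(label, []).append(sequence)
            hits = sum(recognize(sequence, templates) == label
                       for sequence, label in dataset[held_out])
            accuracies.append(hits / len(dataset[held_out]))
        return float(np.mean(accuracies))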
   Online radar-based gesture recognition. The performance of the gesture recognition
environment should be high enough to enable real-time execution. Some stages of the current
radar signal processing pipeline, such as the “inversion” stage, should be optimized to meet
this objective. In addition, appropriate gesture segmentation techniques will be required to
accurately identify gestures performed by the user from the continuous stream of radar data.
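
As a starting point, gesture segmentation could threshold the frame-to-frame motion of the pipeline
outputs, as in the Python sketch below; the frame format, thresholds, and function name are illustrative
assumptions rather than the technique we will finally adopt.

    import numpy as np

    def segment_gestures(frames, motion_threshold=0.02, min_frames=10, max_gap=5):
        # `frames` has shape (T, 2): per-frame (hand-radar distance, relative permittivity).
        # A segment starts when the frame-to-frame motion exceeds `motion_threshold`
        # and ends after `max_gap` consecutive low-motion frames.
        motion = np.linalg.norm(np.diff(frames, axis=0), axis=1)
        segments, start, gap = [], None, 0
        for i, m in enumerate(motion):
            if m > motion_threshold:
                if start is None:
                    start = i
                gap = 0
            elif start is not None:
                gap += 1
                if gap > max_gap:
                    end = i - gap + 1
                    if end - start >= min_frames:
                        segments.append((start, end))
                    start, gap = None, 0
        if start is not None and len(motion) - start >= min_frames:
            segments.append((start, len(motion)))
        return segments   # list of (first_frame, last_frame) index pairs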




Figure 5: Variation of hand size across humans.
   Data normalization across users. Physical differences between two persons, such as their
hand size or arm length (Fig. 5), may prevent a system trained with data from one user from
accurately recognizing gestures performed by another user. For instance, for the same gesture,
the amplitude of the reflected radar signal varies depending on hand size (a larger hand reflects
more signal). Similarly, users’ arm length and body size will impact the reflected signal and
thus could result in lower accuracy between users. One solution would be to integrate a data
normalization stage into the radar signal processing pipeline that would remove the effect of
users’ physical characteristics on the signal after a small calibration step.
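
As a deliberately simple illustration of such a stage, the Python sketch below standardizes the pipeline
outputs with statistics estimated from a short per-user calibration recording; the class, its parameters, and
the choice of normalizing the derived features rather than the raw signal are assumptions made for
illustration only.

    import numpy as np

    class UserCalibration:
        # Per-user normalization fitted on a short calibration recording, e.g. a few
        # seconds of the user holding an open hand still in front of the radar.
        def fit(self, calibration_frames):
            # calibration_frames: array of shape (T, 2) = (hand-radar distance, permittivity)
            self.mean = calibration_frames.mean(axis=0)
            self.std = calibration_frames.std(axis=0) + 1e-9   # avoid division by zero
            return self

        def normalize(self, frames):
            # Remove the user-specific offset and scale from every frame of a gesture.
            return (frames - self.mean) / self.std

    # Usage: cal = UserCalibration().fit(calibration_frames); g = cal.normalize(gesture_frames)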
   Two-handed and multi-user interaction. In its current state, the radar-based gesture
recognition environment only supports one-handed gestures performed by a single user. Sup-
porting multiple users and/or two-handed gestures will require major changes to the radar
signal processing pipeline, in particular the inversion stage.
   Privacy concerns. In contrast to vision-based sensors, radars such as the Walabot do not
capture clear images of the users. However, they possess other features that may raise privacy
concerns, such as their ability to see through some materials, and thus to identify user motion
while being completely hidden, or their ability to function in dark environments. Further
research is required to identify the privacy concerns of radars and to evaluate how they are
perceived by end-users.
6. Expected contributions
The expected contributions of the doctoral thesis are as follows:
   1. One SLR related to radar-based gestural interaction highlighting the existing techniques
      for gesture recognition, the radar systems used, and a classification of all gestures involved
      in these works to initiate an inventory of radar-based gestures.
   2. Various gesture elicitation studies in the context of radar-based gestural interaction. The
      studies will cover different types of radars, including Google Soli [28] and the
      Walabot. One study will focus on controlling IoT devices with a Walabot, while other
      studies will focus on radar gestures with the Walabot in extreme conditions (e.g., in a
      dark environment).
   3. A consolidation of all radar-based gestures found in the literature and resulting from the
      above studies into a repository, from which radar gesture sets can be acquired in their
      original format or by transformation [29].
   4. A radar-based gesture recognition environment that covers both the recognition process
      and the engineering process for integrating radar-based gestures into the gestural interface
      of an interactive application.
   5. A validation of this environment on selected use cases answering the initial research
      question.


Acknowledgments
The author of this paper acknowledges the support of the MIT-Belgium MISTI Program under
grant COUHES n°1902675706 and of the “Fonds de la Recherche Scientifique - FNRS” under
Grant n° 40001931.


References
 [1] J. J. LaViola, 3d gestural interaction: The state of the field, International Scholarly
     Research Notices 2013 (2013). URL: https://www.hindawi.com/journals/isrn/2013/514641/.
      doi:10.1155/2013/514641.
 [2] J. Huang, P. Jaiswal, R. Rai, Gesture-based system for next generation natural and intuitive
     interfaces, Artificial Intelligence for Engineering Design, Analysis and Manufacturing 33
      (2019) 54–68. doi:10.1017/S0890060418000045.
 [3] U. B. Sangiorgi, F. Beuvens, J. Vanderdonckt, User interface design by collaborative
     sketching, in: Proc. of ACM Int. Conf. on Designing Interactive Systems Conference, DIS
      ’12, ACM, 2012, pp. 378–387. URL: https://doi.org/10.1145/2317956.2318013. doi:10.1145/2317956.2318013.
 [4] H.-S. Yeo, G. Flamich, P. Schrempf, D. Harris-Birtill, A. Quigley, Radarcat: Radar cate-
     gorization for input & interaction, in: Proceedings of the 29th Annual Symposium on
     User Interface Software and Technology, UIST ’16, Association for Computing Machin-
     ery, New York, NY, USA, 2016, p. 833–841. URL: https://doi.org/10.1145/2984511.2984515.
      doi:10.1145/2984511.2984515.
 [5] C. Huesser, S. Schubiger, A. Çöltekin, Gesture interaction in virtual reality, in: C. Ardito,
     R. Lanzilotti, A. Malizia, H. Petrie, A. Piccinno, G. Desolda, K. Inkpen (Eds.), Human-
     Computer Interaction – INTERACT 2021, Springer International Publishing, Cham, 2021,
     pp. 151–160.
 [6] D. Avrahami, M. Patel, Y. Yamaura, S. Kratz, Below the surface: Unobtrusive activity
     recognition for work surfaces using rf-radar sensing, in: 23rd International Conference on
     Intelligent User Interfaces, IUI ’18, Association for Computing Machinery, New York, NY,
      USA, 2018, p. 439–451. URL: https://doi.org/10.1145/3172944.3172962. doi:10.1145/3172944.3172962.
 [7] D. Avrahami, M. Patel, Y. Yamaura, S. Kratz, M. Cooper, Unobtrusive activity recognition
     and position estimation for work surfaces using rf-radar sensing, ACM Trans. Interact.
      Intell. Syst. 10 (2019). URL: https://doi.org/10.1145/3241383. doi:10.1145/3241383.
 [8] Z. Flintoff, B. Johnston, M. Liarokapis, Single-grasp, model-free object classification using
     a hyper-adaptive hand, google soli, and tactile sensors, in: 2018 IEEE/RSJ International
      Conference on Intelligent Robots and Systems (IROS), 2018, pp. 1943–1950. doi:10.1109/IROS.2018.8594166.
 [9] H.-S. Yeo, R. Minami, K. Rodriguez, G. Shaker, A. Quigley, Exploring tangible interactions
     with radar sensing, Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2 (2018). URL:
      https://doi.org/10.1145/3287078. doi:10.1145/3287078.
[10] N. T. Attygalle, L. A. Leiva, M. Kljun, C. Sandor, A. Plopski, H. Kato, K. C. Pucihar, No
     interface, no problem: Gesture recognition on physical objects using radar sensing, Sensors
      21 (2021) 5771. URL: https://doi.org/10.3390/s21175771. doi:10.3390/s21175771.
[11] S. Palipana, D. Salami, L. A. Leiva, S. Sigg, Pantomime: Mid-Air Gesture Recognition
     with Sparse Millimeter-Wave Radar Point Clouds, Proceedings of the ACM on Interactive,
     Mobile, Wearable and Ubiquitous Technologies 5 (2021) 27:1–27:27. URL: https://doi.org/
      10.1145/3448110. doi:10.1145/3448110.
[12] E. Hayashi, J. Lien, N. Gillian, L. Giusti, D. Weber, J. Yamanaka, L. Bedal, I. Poupyrev,
     Radarnet: Efficient gesture recognition technique utilizing a miniature radar sensor, in:
     Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI
     ’21, Association for Computing Machinery, New York, NY, USA, 2021. URL: https://doi.
      org/10.1145/3411764.3445367. doi:10.1145/3411764.3445367.
[13] J. Lien, N. Gillian, M. E. Karagozler, P. Amihood, C. Schwesig, E. Olson, H. Raja, I. Poupyrev,
     Soli: Ubiquitous gesture sensing with millimeter wave radar, ACM Trans. Graph. 35 (2016).
      URL: https://doi.org/10.1145/2897824.2925953. doi:10.1145/2897824.2925953.
[14] H.-S. Yeo, A. Quigley, Radar sensing in human-computer interaction, Interactions 25
      (2017) 70–73. URL: https://doi.org/10.1145/3159651. doi:10.1145/3159651.
[15] M. Alloulah, A. Isopoussu, F. Kawsar, On indoor human sensing using commodity radar,
     in: Proceedings of the 2018 ACM International Joint Conference and 2018 International
     Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, UbiComp
     ’18, Association for Computing Machinery, New York, NY, USA, 2018, p. 1331–1336. URL:
      https://doi.org/10.1145/3267305.3274180. doi:10.1145/3267305.3274180.
[16] P. Wang, J. Lin, F. Wang, J. Xiu, Y. Lin, N. Yan, H. Xu, A gesture air-writing tracking
      method that uses 24 GHz SIMO radar SoC, IEEE Access 8 (2020) 152728–152741. doi:10.1109/ACCESS.2020.3017869.
[17] S. Hazra, A. Santra, Short-range radar-based gesture recognition system using 3D CNN
      with triplet loss, IEEE Access 7 (2019) 125623–125633. doi:10.1109/ACCESS.2019.2938725.
[18] Y. Dong, W. Qu, Review of research on gesture recognition based on radar technol-
     ogy, in: S. Shi, L. Ye, Y. Zhang (Eds.), Artificial Intelligence for Communications and
     Networks, Lecture Notes of the Institute for Computer Sciences, Social Informatics
     and Telecommunications Engineering, Springer International Publishing, Berlin, 2021.
      doi:10.1007/978-3-030-69066-3_34.
[19] S. Ahmed, K. D. Kallu, S. Ahmed, S. H. Cho, Hand gestures recognition using radar
     sensors for human-computer-interaction: A review, Remote Sensing 13 (2021). URL:
      https://www.mdpi.com/2072-4292/13/3/527. doi:10.3390/rs13030527.
[20] A. Liberati, D. G. Altman, J. Tetzlaff, C. Mulrow, P. C. Gøtzsche, J. P. Ioannidis, M. Clarke, P. J.
     Devereaux, J. Kleijnen, D. Moher, The PRISMA statement for reporting systematic reviews
     and meta-analyses of studies that evaluate health care interventions: explanation and
     elaboration, PLoS Medicine 6 (2009) 1–22. URL: https://www.ncbi.nlm.nih.gov/pubmed/
      19621070. doi:10.1371/journal.pmed.1000100.
[21] M. J. Page, D. Moher, P. M. Bossuyt, I. Boutron, T. C. Hoffmann, C. D. Mulrow, L. Shamseer,
     J. M. Tetzlaff, E. A. Akl, S. E. Brennan, R. Chou, J. Glanville, J. M. Grimshaw, A. Hróbjartsson,
     M. M. Lalu, T. Li, E. W. Loder, E. Mayo-Wilson, S. McDonald, L. A. McGuinness, L. A.
     Stewart, J. Thomas, A. C. Tricco, V. A. Welch, P. Whiting, J. E. McKenzie, Prisma 2020
     explanation and elaboration: updated guidance and exemplars for reporting systematic
      reviews, BMJ 372 (2021). URL: https://www.bmj.com/content/372/bmj.n160. doi:10.1136/bmj.n160.
[22] J. O. Wobbrock, M. R. Morris, A. D. Wilson, User-defined gestures for surface computing,
     CHI ’09, ACM, 2009, pp. 1083–1092. URL: http://doi.acm.org/10.1145/1518701.1518866.
      doi:10.1145/1518701.1518866.
[23] S. Lambot, E. Slob, I. van den Bosch, B. Stockbroeckx, M. Vanclooster, Modeling of ground-
     penetrating radar for accurate characterization of subsurface electric properties, IEEE
      Transactions on Geoscience and Remote Sensing 42 (2004) 2555–2568. doi:10.1109/TGRS.2004.834800.
[24] S. Lambot, F. André, Full-wave modeling of near-field radar data for planar layered media
     reconstruction, IEEE Transactions on Geoscience and Remote Sensing 52 (2014) 2295–2303.
      doi:10.1109/TGRS.2013.2259243.
[25] A. Sluÿters, S. Lambot, J. Vanderdonckt, Hand gesture recognition for an off-the-shelf
     radar by electromagnetic modeling and inversion, in: Proc. of 27th ACM International
     Conference on Intelligent User Interfaces, IUI ’22, Association for Computing Machinery,
      New York, NY, USA, 2022, p. 1–17. URL: https://doi.org/10.1145/3490099.3511107. doi:10.1145/3490099.3511107.
[26] A. Sluÿters, M. Ousmer, P. Roselli, J. Vanderdonckt, Quantumleap, a framework for
     engineering gestural user interfaces based on the leap motion controller, Proc. ACM Hum.
      Comput. Interact. 6 (2022) 1–47. URL: https://doi.org/10.1145/3457147. doi:10.1145/3457147.
[27] A. Sluÿters, Q. Sellier, J. Vanderdonckt, V. Parthiban, P. Maes, Consistent, continuous,
     and customizable mid-air gesture interaction for browsing multimedia objects on large
      displays, International Journal of Human-Computer Interaction 38 (2022). doi:10.1080/10447318.2022.2078464.
[28] N. Magrofuoco, J. L. Pérez-Medina, P. Roselli, J. Vanderdonckt, S. Villarreal, Eliciting
     contact-based and contactless gestures with radar-based sensors, IEEE Access 7 (2019)
      176982–176997. URL: https://doi.org/10.1109/ACCESS.2019.2951349. doi:10.1109/ACCESS.2019.2951349.
[29] N. Aquino, J. Vanderdonckt, O. Pastor, Transformation templates: Adding flexibility
     to model-driven engineering of user interfaces, in: Proceedings of the 2010 ACM
     Symposium on Applied Computing, SAC ’10, Association for Computing Machinery,
     New York, NY, USA, 2010, p. 1195–1202. URL: https://doi.org/10.1145/1774088.1774340.
      doi:10.1145/1774088.1774340.