<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Learning Interpretable Rules from Neural Networks: Neurosymbolic AI for Radar-Based Hand Gesture Recognition</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sarah Seifi</string-name>
          <email>sarah.seifi@tum.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tobias Sukianto</string-name>
          <email>tobias.sukianto@infineon.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cecilia Carbonelli</string-name>
          <email>cecilia.carbonelli@infineon.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo Servadei</string-name>
          <email>lorenzo.servadei@tum.de</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Robert Wille</string-name>
          <email>robert.wille@tum.de</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Infineon Technologies AG</institution>
          ,
          <addr-line>Am Campeon 1-15, 85579 Neubiberg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Johannes Kepler University Linz</institution>
          ,
          <addr-line>Altenbergerstraße 69, 4040 Linz</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Software Competence Center Hagenberg GmbH (SCCH)</institution>
          ,
          <addr-line>Softwarepark 32a, 4232 Hagenberg</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Technical University of Munich</institution>
          ,
          <addr-line>Arcisstraße 21, 80333 Munich</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Rule-based models offer interpretability but struggle with complex data, while deep neural networks excel in performance yet lack transparency. This work investigates RL-Net, a neuro-symbolic architecture that learns interpretable rule lists through neural optimization, applied for the first time to radar-based hand gesture recognition (HGR). We benchmark RL-Net against a fully transparent rule-based system (MIRA) and an explainable black-box model (XentricAI), evaluating accuracy, interpretability, and user adaptability via transfer learning. Our results show that RL-Net achieves a favorable trade-off, maintaining strong performance (∼93% F1) while significantly reducing rule complexity. We identify optimization challenges specific to rule pruning and hierarchy bias and propose stability-enhancing modifications. Compared to MIRA and XentricAI, RL-Net emerges as a practical middle ground between transparency and performance. This study highlights the real-world feasibility of neuro-symbolic models for interpretable HGR and offers insights for extending explainable AI to edge-deployable sensing systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Interpretable Classification</kwd>
        <kwd>Rule-Based Learning</kwd>
        <kwd>Neuro-Symbolic AI</kwd>
        <kwd>FMCW Radar</kwd>
        <kwd>Hand Gesture Recognition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Hand gestures offer a natural modality for human–computer interaction, with applications in automotive
safety, healthcare, and augmented reality [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ]. Radar-based hand gesture recognition (HGR) is
particularly attractive due to the radar’s robustness, privacy-preserving properties, and compact form
factor [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Despite recent success of deep learning for radar-based HGR [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], these models are often opaque and
lack interpretability, posing challenges in safety-critical and regulated environments, particularly under
emerging legislation like the European Union’s Artificial Intelligence Act.
      </p>
      <p>
        Fully transparent, rule-based models provide interpretability but fail to generalize in complex,
high-dimensional settings. Neuro-symbolic artificial intelligence (AI) offers a promising middle ground
by integrating symbolic logic with neural architectures [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], enabling models that learn structured,
interpretable rules through gradient-based optimization.
      </p>
      <p>
        In this work, we explore the RL-Net architecture [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], a neuro-symbolic model that learns ordered rule
lists for classification. We apply it to radar-based HGR and evaluate its trade-offs between accuracy and
interpretability. To our knowledge, this is the first practical application of neuro-symbolic learning to
real-world Frequency-Modulated Continuous Wave (FMCW) radar data, with a focus on interpretability
and user adaptation.
      </p>
      <p>Our main contributions are:
1. First application of RL-Net to radar-based HGR: We demonstrate the first real-world
deployment of a neuro-symbolic model on FMCW radar gesture data.
2. Training stabilization and architectural improvements: We enhance RL-Net with batch
normalization and validation-time regularization to improve robustness and reduce rule complexity.
3. Personalized transfer learning: We apply user-specific fine-tuning to RL-Net, improving
accuracy while simplifying learned rule sets.
4. Comparative evaluation: We benchmark RL-Net against MIRA (white-box) and XentricAI
(black-box), assessing accuracy, interpretability, and adaptability.
5. Limitations and future work: We identify structural training issues in RL-Net’s fixed rule
hierarchy and outline lightweight improvements to support scalable, interpretable deployment.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Interpretable White-box Models</title>
        <p>
          Transparent models like decision trees, logistic regression, and rule-based systems are essential for
high-stakes applications due to their interpretability [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Rule-based models, structured as "if-then"
statements, align with human reasoning and are typically organized as rule sets or hierarchical rule
lists [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>A rule comprises multiple feature-based conditions leading to a prediction, e.g., "IF (x1 &gt; 0.5 AND x2
≤ 3.0) THEN class = 1". This clarity supports traceability and user trust.</p>
        <p>
          MIRA [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] exemplifies a transparent rule-based classifier designed for multi-class HGR. It
incorporates foundational and user-personalized rules but suffers from limited scalability and overfitting as
dimensionality increases.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Explainable Black-box Models</title>
        <p>
          Deep Neural Networks (NN) (convolutional NNs [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], recurrent NNs [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], Transformers [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]) yield
strong radar HGR performance but lack interpretability. Explainable AI (XAI) techniques like SHAP [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] offer post-hoc
feature attributions but are faithful to the model, not the underlying data [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. They fail to reveal
intermediate reasoning steps.
        </p>
        <p>
          Recent work such as XentricAI [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] applies SHAP to recurrent NNs for gesture explanation and
anomaly feedback, combining gesture detection and classification per frame. While insightful, it remains
a black-box at its core.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Neuro-Symbolic AI (Gray-box Models)</title>
        <p>
          Neuro-symbolic AI bridges rule-based interpretability and neural flexibility [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. These models generate
readable rules through trainable NNs, producing interpretable outputs from opaque training, often
termed "gray-box" systems [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
        <p>
          DR-Net [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] models rules using a two-layer NN: an AND-based rules layer followed by an OR layer.
Rules emerge from binary neuron activations with sparsity-based regularization to control complexity.
        </p>
        <p>
          RL-Net [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] extends DR-Net with fixed-order rule hierarchy, producing interpretable rule lists suitable
for edge deployment. More advanced systems like HyperLogic [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] improve scalability via
hypernetworks but at the cost of interpretability and complexity, making them less ideal for transparent and
on-the-edge applications.
        </p>
        <p>Figure 1 summarizes model trade-offs between interpretability and performance, positioning RL-Net
between black-box deep models and fully symbolic approaches.</p>
        <p>[Figure 1: Model interpretability (y-axis) versus model performance (x-axis). White-box models (decision trees, rule-based models, logistic regression) offer the highest interpretability; gray-box models (DR-Net, RL-Net, HyperLogic) occupy the middle ground; black-box models (RNNs, CNNs, Transformers) achieve the highest performance, with the ideal model combining both.]</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Motivation for Our Work</title>
        <p>To date, neuro-symbolic models have not been applied to real-world FMCW radar gesture data. Their
adaptability via transfer learning (TL) also remains underexplored. This work fills that gap by evaluating
RL-Net in comparison to MIRA and XentricAI, analyzing the performance–interpretability trade-off
and exploring its potential for user-specific gesture adaptation.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. RL-Net Overview</title>
        <p>
          We apply RL-Net [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] to FMCW radar gesture data. The pipeline involves signal preprocessing, feature
extraction, model training, and interpretable rule generation, as illustrated in Figure 2.
        </p>
        <p>[Figure 2: RL-Net pipeline: raw data collection, signal preprocessing (chirps, samples), feature extraction, and neural network training. Binarized features feed a rule layer, hierarchy layer, and output layer, yielding a rule-based prediction, e.g., IF A AND C AND NOT D THEN 1, ELSE IF NOT B AND NOT C THEN 2, ELSE 0.]</p>
        <p>
          Input &amp; Rule Layers. The input layer receives binarized features x ∈ {0, 1}. Each rule neuron
functions as a logical AND, with ternary weights W ∈ {−1, 0, +1} encoding positive, negative, or excluded feature
use. To enable differentiable training, these weights are reparameterized as W = W̄ ∘ Ŵ, where
Ŵ ∈ [0, 1] are approximated binary masks following [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
        </p>
        <p>A rule neuron activates only when all of its conditions are satisfied. Its pre-activation is
z = Σ_j w_j x_j − Σ_{j: w_j > 0} w_j + 1,
where w_j ∈ W, and the step function Φ(z) = 1 if z = 1, else 0, binarizes the output.</p>
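        <p>As a minimal sketch (array shapes and names are our assumptions, not the authors' reference implementation), a single rule neuron can be written as:</p>
        <preformat>
import numpy as np

# Sketch of one RL-Net-style AND rule neuron on binarized inputs.
# x: features in {0, 1}; w: ternary weights in {-1, 0, +1}
# (+1 = feature must be true, -1 = feature must be false, 0 = excluded).
def rule_neuron(x: np.ndarray, w: np.ndarray) -> int:
    # z equals 1 exactly when every condition encoded in w holds;
    # otherwise z is zero or negative.
    z = np.sum(w * x) - np.sum(w[w > 0]) + 1
    return int(z > 0)  # step function binarizes the output

x = np.array([1, 0, 1, 0])   # toy features A..D
w = np.array([1, 0, 1, -1])  # rule: A AND C AND NOT D
assert rule_neuron(x, w) == 1
        </preformat>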
        <p>To improve training stability, we add batch normalization.</p>
        <p>Hierarchy &amp; Output Layers. A fixed hierarchy enforces ordered rule evaluation: a rule r_i activates
only if all previous rules r_1, ..., r_{i-1} are inactive. The final neuron acts as a default rule. Only one rule fires per
sample, and a softmax assigns its associated class label.</p>
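        <p>Continuing the sketch, the ordered evaluation of the rule list behaves as follows (names remain illustrative):</p>
        <preformat>
# The first active rule fires, so rule r_i is reached only if rules
# r_1..r_{i-1} were inactive; the final entry plays the role of the
# default (ELSE) rule.
def predict(x, rules, classes, default_class):
    for w, c in zip(rules, classes):
        if rule_neuron(x, w) == 1:
            return c
    return default_class
        </preformat>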
        <p>Loss Function. The regularization loss is defined as ℒsparse = Σᵢⱼ σ(locᵢⱼ − β log(−γ/ζ)), where locᵢⱼ are the
logits controlling the binary mask sampling, and γ, ζ are stretch parameters controlling the thresholding
behavior. This penalty encourages many weights to converge to zero and thus shortens the learned
rules. Training minimizes a combination of the cross-entropy loss and the sparsity and L2 regularization
terms:</p>
        <p>ℒ = ℒCE + λ₁ℒsparse + λ₂‖Wout‖₂²</p>
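        <p>A sketch of this objective under the hard concrete relaxation of [<xref ref-type="bibr" rid="ref19">19</xref>] (tensor names are assumptions; PyTorch is used for illustration):</p>
        <preformat>
import torch

# Expected number of non-zero mask entries under the hard concrete
# distribution: sigma(loc - beta * log(-gamma / zeta)), summed over
# all mask logits.
def l_sparse(loc, gamma=-0.1, zeta=1.1, beta=2.0 / 3.0):
    shift = beta * torch.log(torch.tensor(-gamma / zeta))
    return torch.sigmoid(loc - shift).sum()

# Total training loss: cross-entropy plus sparsity and L2 terms.
def total_loss(ce, loc, w_out, lam1=0.025, lam2=0.0):
    return ce + lam1 * l_sparse(loc) + lam2 * w_out.pow(2).sum()
        </preformat>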
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Comparison to Baselines</title>
        <p>
          We compare RL-Net to MIRA [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], a white-box rule-based system using foundational and user-specific
rules, and XentricAI [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], a black-box GRU model explained post-hoc via SHAP.
        </p>
        <p>A summary of capabilities is shown in Table 1.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Dataset and Preprocessing</title>
      <sec id="sec-4-1">
        <title>4.1. FMCW-Radar Gesture Dataset</title>
        <p>The gesture dataset used in this study was collected using Infineon Technologies’ XENSIV™
BGT60LTR13C 60 GHz FMCW radar operating in indoor environments. Twelve users performed
five predefined hand gestures (SwipeLeft, SwipeRight, SwipeUp, SwipeDown, and Push) across six
different room types. A Background class was added to represent the absence of gestures. Each participant
completed 1,000 samples, totaling 12,000 gesture recordings.</p>
        <p>
          Recordings were captured over 100 frames per gesture using three receive antennas, with dimensions
[100 × 3 × 32 × 64] representing frames, antennas, chirps, and fast-time samples, respectively. The
dataset is publicly available via IEEE Dataport [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Frame-Level Gesture Labeling</title>
        <p>
          Following the method in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], we assign gesture labels based on the frame with minimum radial distance
(gesture anchor point). A fixed 10-frame window centered around this point defines the gesture duration,
while all other frames are labeled as background. This results in a labeled dataset with dimensions
[N × T × F], where N is the number of samples, T the number of frames per sample, and F = 5 the
number of features.
        </p>
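        <p>A minimal sketch of this labeling step (array layout and names are assumptions):</p>
        <preformat>
import numpy as np

# Label frames of one recording: the frame with minimum radial distance is
# the gesture anchor; a fixed 10-frame window around it carries the gesture
# label, all remaining frames are labeled as background.
def label_frames(radial_distance, gesture_label, background_label=0, window=10):
    T = len(radial_distance)
    anchor = int(np.argmin(radial_distance))
    labels = np.full(T, background_label)
    start = max(0, min(anchor - window // 2, T - window))
    labels[start:start + window] = gesture_label
    return labels
        </preformat>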
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Signal Processing and Feature Extraction</title>
        <p>
          We applied a lightweight, real-time-capable preprocessing pipeline based on [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] to extract
gesture-relevant features from the raw radar data. The main steps include: (i) Range FFT, applied after DC
removal to generate range profiles; (ii) Moving Target Indication (MTI), which removes static target
reflections; (iii) Target Localization, using peak detection in the range profile to find the closest
moving object (i.e., the hand); (iv) Doppler FFT, applied to the identified hand bin to extract radial
velocity; and (v) Angle Estimation, computing azimuth and elevation from antenna phase differences.
        </p>
        <p>From this processing, we extract five features per frame: radial distance (range), radial velocity
(Doppler), azimuth angle, elevation angle, and signal magnitude. These were averaged over the ten
gesture frames to construct the input to the models.</p>
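        <p>A condensed sketch of steps (i)–(v) and the final averaging (shapes follow Section 4.1; function names and the mean-subtraction MTI are illustrative assumptions):</p>
        <preformat>
import numpy as np

# frames: [T, antennas, chirps, samples] raw FMCW data of one recording.
def range_profiles(frames):
    frames = frames - frames.mean(axis=-1, keepdims=True)  # DC removal
    return np.fft.fft(frames, axis=-1)                     # (i) range FFT

def mti(profiles):
    # (ii) mean subtraction over frames as a simple moving-target indication
    return profiles - profiles.mean(axis=0)

# (iii) the target bin is the peak of the magnitude range profile (the
# closest moving object, i.e., the hand); (iv) a Doppler FFT over chirps at
# that bin yields radial velocity; (v) phase differences across the three
# RX antennas give azimuth and elevation. With five features per frame:
def model_input(features):        # features: [10, 5], five values per frame
    return features.mean(axis=0)  # averaged over gesture frames -> [5]
        </preformat>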
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experimental Setup</title>
      <p>To evaluate both the generalization capabilities and adaptability of RL-Net, we conducted two main
experiments: (i) optimized training of a user-agnostic pretrained model, and (ii) user-specific TL. Gesture
data from twelve users (12,000 samples) was used in total. Data from six users (6,000 samples) was used
to train and validate the base model, while data from the remaining six users was reserved for TL and
evaluation.</p>
      <p>Dataset Split. Six users (6,000 gestures) were used to pretrain all models; six different users (6,000
gestures) were held out for TL and testing.</p>
      <p>Training Protocol. An extensive hyperparameter search was performed using Grid Search, selecting
values based on prior work and empirical stability. We used the Adam optimizer with a learning rate of
0.01 and batch size of 40. Training was run for 200 epochs with early stopping on validation loss. A
batch normalization layer was added after the rule layer for improved stability. Sparsity was controlled
via a regularization weight (λ₁,train = 0.025), while a higher weight (λ₁,val = 0.3) was applied during
validation to promote simpler models. L2 regularization was disabled. Hard concrete parameters were
γ = −0.1, ζ = 1.1, and β = 2/3.</p>
      <p>Custom Early Stopping Strategy. Standard early stopping based on validation cross-entropy often
favored overly complex models with minimal gains in accuracy. On the other hand, strong regularization
led to premature convergence and vanishing gradients. To balance interpretability and performance, we
introduced a custom early stopping criterion using an increased validation-time regularization weight
(λ₁,val = 0.3). This steered model selection toward sparser rule sets without compromising predictive
accuracy.</p>
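      <p>A sketch of this selection criterion (names are assumptions; it reuses the l_sparse sketch from Section 3.1):</p>
      <preformat>
# Selection score: validation cross-entropy plus the inflated sparsity
# penalty (lambda1_val = 0.3), so sparser rule lists are preferred.
def maybe_update_best(val_ce, loc, model, best):
    score = val_ce + 0.3 * float(l_sparse(loc))
    if best is None or best[0] > score:
        return score, {k: v.clone() for k, v in model.state_dict().items()}
    return best
      </preformat>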
      <p>Transfer Learning. For user adaptation, pretrained weights were partially frozen: rule masks and
output weights were updated while base rule weights were fixed. Each user had 1,000 samples (20% test,
64% train, 16% val).</p>
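      <p>A sketch of this partial freezing in PyTorch (the parameter-name filter is an assumption about the module layout):</p>
      <preformat>
# Freeze base rule weights; keep rule masks and output-layer weights
# trainable for user-specific adaptation.
for name, param in model.named_parameters():
    param.requires_grad = ("mask" in name) or ("output" in name)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.01)
      </preformat>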
      <p>Evaluation Metrics. We report accuracy and F1 score, along with rule complexity measured by the
number of active rules and total rule conditions. These metrics are tracked before and after fine-tuning.</p>
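      <p>Rule complexity can be read directly off the learned ternary weights, e.g. (assuming a weight array W of shape [n_rules, n_features]):</p>
      <preformat>
# A rule is active if it has at least one non-zero condition; the total
# condition count sums non-zero entries across all rules.
active_rules = int((W != 0).any(axis=1).sum())
total_conditions = int((W != 0).sum())
      </preformat>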
      <p>
        Baselines. RL-Net was compared to MIRA [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], and XentricAI [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Note that XentricAI performs
both gesture detection and classification by explicitly modeling a background class, so comparisons
should consider this broader output scope. For both baselines, we adopt the hyperparameters and
training protocols reported in their original publications.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Experimental Results and Discussion</title>
      <sec id="sec-6-1">
        <title>6.1. General Training and Model Robustness</title>
        <p>While we initially experimented with DR-Net by training independent rule sets for each gesture class, the
resulting rules were not mutually exclusive, making them unsuitable for robust multi-class classification.
As a result, we focused exclusively on RL-Net, which provides a hierarchical rule list enabling consistent
multi-class predictions.</p>
        <p>We evaluate the model’s robustness and interpretability across three architectural variants: the
original RL-Net, RL-Net with batch normalization, and RL-Net with both batch normalization and a
modified validation loss incorporating the regularization weight λ₁,val. Results, averaged over four runs per
configuration with consistent seeding, are summarized in Table 2.</p>
        <p>Introducing batch normalization significantly improved both performance stability and model
simplicity. Further incorporating validation-time regularization reduced model complexity even more,
with only a slight trade-off in F1 score. The reduction in standard deviation across runs also indicates
improved training consistency.</p>
        <p>Optimization Bottleneck from Rule Ordering and Vanishing Gradients. While training
improves overall sparsity (Figure 3, Panel A), we observed a persistent structural issue: RL-Net’s fixed
hierarchy layer causes early rules to be suppressed and pruned. As shown in Panel B, this results
in only high-index rules remaining active, often with long, complex conditions, compromising rule
diversity and interpretability. This bottleneck stems from the fixed top-down rule evaluation, which
prioritizes later rules during optimization. Addressing this may require adaptive or learnable rule
ordering, ideally without increasing model complexity to maintain suitability for edge deployment.</p>
        <p>Comparison with Baselines. As a reference, the white-box model MIRA achieves an F1 score of
79.7%, while the GRU-based classification backbone of XentricAI reaches 85%. RL-Net demonstrates a
compelling trade-off: it achieves strong performance (∼90%) while maintaining interpretable, compact
rule lists. This positions RL-Net effectively between the extremes of full transparency and pure black-box
modeling.</p>
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Transfer Learning</title>
        <p>For these experiments, we initialized the model using a randomly selected baseline model consisting of
seven rules and 36 total conditions.</p>
        <p>Table 3 summarizes the classification performance and rule complexity before and after TL. All
results are reported for the best-performing epoch on the validation set, using accuracy and F1-score as
primary metrics, and the number of rules and total rule conditions as proxies for interpretability.</p>
        <p>Across all users, TL consistently improved model performance, with F1-score increases ranging
from modest (+0.71% for user3) to substantial (+9.08% for user1). Importantly, these gains were
accompanied by a consistent reduction in rule complexity. All adapted models converged to five rules,
and in several cases, the total number of rule conditions decreased significantly, highlighting the dual
benefit of enhanced personalization and improved interpretability.</p>
        <p>Comparison with Baselines. RL-Net achieved an average user-specific performance of 93.05%,
surpassing the fine-tuned XentricAI model, which reached 90.2%. However, it is important to note that
XentricAI includes a Background class and performs both gesture detection and classification at the
frame level, an extended functionality that goes beyond RL-Net’s current scope. MIRA achieved the
highest average accuracy of 94.9% after user-specific calibration. While deterministic, MIRA relies on
handcrafted rule tuning and lacks the learning flexibility of neural approaches.</p>
        <p>Overall, RL-Net strikes a favorable balance between interpretability and performance, offering
greater modeling flexibility than MIRA and better classification accuracy than XentricAI. That said, one
limitation may lie in the fixed thresholds used for binarizing continuous features, which could constrain
the model’s adaptability to subtle user-specific variations. Future research should explore adaptive
thresholding or learnable binarization strategies to further improve generalization and personalization.</p>
      </sec>
      <sec id="sec-6-3">
        <title>6.3. Limitations and Future Work</title>
        <p>RL-Net inherits optimization challenges from DR-Net, notably vanishing gradients caused by
multiplicative logical activations. This leads to early-rule pruning and dominance of later, often longer rules,
limiting generalization. Fixed rule ordering further amplifies this by prioritizing high-indexed neurons
and suppressing earlier ones.</p>
        <p>
          While batch normalization improved stability, training remains inconsistent across runs. Remedies
such as L2 regularization worsened gradient flow and performance. Though more advanced techniques (e.g.,
HyperLogic [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]) exist, they add model complexity, compromising interpretability and edge-suitability.
        </p>
        <p>Future work should explore gradient-stable activations, adaptive thresholding for input binarization,
and learnable rule hierarchies. Physics-informed constraints tailored to gesture kinematics may further
improve robustness without added complexity. These insights extend beyond HGR, pointing to broader
improvements in neuro-symbolic learning.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>This work presents the first real-world application of RL-Net, a neuro-symbolic model, to radar-based
HGR. RL-Net achieves a strong balance between interpretability and performance, outperforming fully
transparent models and approaching the accuracy of black-box methods.</p>
      <p>Through optimized training and TL, we demonstrate RL-Net’s potential for user-adaptive, explainable
gesture sensing. However, challenges in optimization and rule hierarchy remain. Addressing these
without increasing model complexity is key to advancing interpretable AI for edge deployment and
beyond.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Ohn-Bar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Trivedi</surname>
          </string-name>
          ,
          <article-title>Hand gesture recognition in real time for automotive interfaces: A multimodal vision-based approach and evaluations</article-title>
          ,
          <source>IEEE transactions on intelligent transportation systems 15</source>
          (
          <year>2014</year>
          )
          <fpage>2368</fpage>
          -
          <lpage>2377</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N. Al</given-names>
            <surname>Mudawi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ansar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Alazeb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Aljuaid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>AlQahtani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Algarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jalal</surname>
          </string-name>
          , H. Liu,
          <article-title>Innovative healthcare solutions: robust hand gesture recognition of daily life routines using 1d cnn</article-title>
          ,
          <source>Frontiers in Bioengineering and Biotechnology</source>
          <volume>12</volume>
          (
          <year>2024</year>
          )
          <fpage>1401803</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hong-An</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Guo-Zhong</surname>
          </string-name>
          ,
          <article-title>Gesture interaction in virtual reality</article-title>
          ,
          <source>Virtual Reality &amp; Intelligent Hardware</source>
          <volume>1</volume>
          (
          <year>2019</year>
          )
          <fpage>84</fpage>
          -
          <lpage>112</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Strobel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schoenfeldt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Daugalas</surname>
          </string-name>
          ,
          <article-title>Gesture recognition for fmcw radar on the edge</article-title>
          ,
          <source>in: 2024 IEEE Topical Conference on Wireless Sensors and Sensor Networks (WiSNeT)</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>45</fpage>
          -
          <lpage>48</lpage>
          . doi:10.1109/WiSNeT59910.2024.10438579.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>Mmgesture: Semi-supervised gesture recognition system using mmwave radar</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>213</volume>
          (
          <year>2023</year>
          )
          <fpage>119042</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Gong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <article-title>A comprehensive survey on self-interpretable neural networks</article-title>
          ,
          <source>arXiv preprint arXiv:2501.15638</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Dierckx</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Veroneze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nijssen</surname>
          </string-name>
          , Rl-net:
          <article-title>Interpretable rule learning with neural networks</article-title>
          ,
          <source>in: Pacific-Asia Conference on Knowledge Discovery and Data Mining</source>
          , Springer,
          <year>2023</year>
          , pp.
          <fpage>95</fpage>
          -
          <lpage>107</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Frasca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. La</given-names>
            <surname>Torre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pravettoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Cutica</surname>
          </string-name>
          ,
          <article-title>Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review</article-title>
          ,
          <source>Discover Artificial Intelligence</source>
          <volume>4</volume>
          (
          <year>2024</year>
          )
          <fpage>15</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F.</given-names>
            <surname>Gardin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gautier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goix</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ndiaye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-M.</given-names>
            <surname>Schertzer</surname>
          </string-name>
          , Skope-rules, https://github.com/scikit-learn-contrib/skope-rules,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Seifi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Sukianto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Carbonelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Servadei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wille</surname>
          </string-name>
          ,
          <article-title>Interpretable rule-based system for radar-based gesture sensing: Enhancing transparency and personalization in ai</article-title>
          ,
          <source>in: 2024 21st European Radar Conference (EuRAD)</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>156</fpage>
          -
          <lpage>159</lpage>
          . doi:10.23919/EuRAD61604.2024.10734943.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Franceschini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ambrosanio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vitale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Baselice</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gifuni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Grassini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pascazio</surname>
          </string-name>
          ,
          <article-title>Hand gesture recognition via radar sensors and convolutional neural networks</article-title>
          ,
          <source>in: 2020 IEEE Radar Conference (RadarConf20)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>B.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Interference-robust millimeter-wave radar-based dynamic hand gesture recognition using 2d cnn-transformer networks</article-title>
          ,
          <source>IEEE Internet of Things Journal</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>A unified approach to interpreting model predictions</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Janizek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>True to the model or true to the data?</article-title>
          , arXiv preprint arXiv:2006.16234 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Seifi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Sukianto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Strobel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Carbonelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Servadei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wille</surname>
          </string-name>
          ,
          <article-title>Xentricai: A gesture sensing calibration approach through explainable and user-centric ai</article-title>
          ,
          <source>in: World Conference on Explainable Artificial Intelligence</source>
          , Springer,
          <year>2024</year>
          , pp.
          <fpage>232</fpage>
          -
          <lpage>246</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>O'Neill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , J. Chen,
          <string-name>
            <given-names>P.</given-names>
            <surname>Im</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>DeGraw</surname>
          </string-name>
          ,
          <article-title>Grey-box modeling and application for building energy simulations-a critical review</article-title>
          ,
          <source>Renewable and Sustainable Energy Reviews</source>
          <volume>146</volume>
          (
          <year>2021</year>
          )
          <fpage>111174</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>L.</given-names>
            <surname>Qiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>Learning accurate and interpretable decision rule sets from neural networks</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>35</volume>
          ,
          <year>2021</year>
          , pp.
          <fpage>4303</fpage>
          -
          <lpage>4311</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Hyperlogic: Enhancing diversity and accuracy in rule learning with hypernets</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>37</volume>
          (
          <year>2024</year>
          )
          <fpage>3564</fpage>
          -
          <lpage>3587</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>C.</given-names>
            <surname>Louizos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Welling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Kingma</surname>
          </string-name>
          ,
          <article-title>Learning sparse neural networks through l0 regularization</article-title>
          , arXiv preprint arXiv:1712.01312 (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>Seifi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Sukianto</surname>
          </string-name>
          , C. Carbonelli, 60 ghz fmcw radar gesture dataset,
          <year>2024</year>
          . URL: https://dx.doi.org/10.21227/s12w-cc46. doi:10.21227/s12w-cc46.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>