<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Workshop on Artificial Intelligence for Climate Change, Italian Workshop on Planning
and Scheduling, RCRA Workshop on Experimental evaluation of algorithms for solving problems with combinatorial explosion,
and SPIRIT Workshop on Strategies, Prediction, Interaction, and Reasoning in Italy. November</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Anomaly Recognition with Trustworthy Neural Networks: a Case Study in Elevator Control</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dario Guidotti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Laura Pandolfo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Pulina</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Sassari</institution>
          ,
          <addr-line>Piazza Università 21, Sassari, 07100</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>2</volume>
      <fpage>5</fpage>
      <lpage>28</lpage>
      <abstract>
        <p>In the realm of contemporary industrial control systems, the necessity for robust anomaly detection and classification is of critical importance. This paper presents an application of neural network technology in a real-world industrial scenario focused on elevator control. We employ two fully-connected neural networks to accomplish both anomaly detection and classification. The first neural network is dedicated to identifying types of anomalies, while the second predicts their magnitudes. Additionally, we integrate formal verification to certify the local robustness of these networks. Our findings not only showcase the practical efficacy of our methodology but also emphasise the crucial role of small neural networks in effectively addressing challenges within industrial settings.</p>
      </abstract>
      <kwd-group>
        <kwd>Anomaly Detection</kwd>
        <kwd>Neural Networks</kwd>
        <kwd>Formal Verification</kwd>
        <kwd>Trustworthy AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        of the networks, formal verification techniques [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref17 ref18 ref4 ref5 ref6 ref7 ref8 ref9">4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31</xref>
        ] can be applied to validate their local robustness. This
verification process ensures that the networks’ behaviour remains stable within a predefined
input space, even when subjected to adversarial perturbations.
• Real-life Application in Elevator Control: The viability and efficacy of this methodology are
showcased through a practical examination in elevator control. Through the application of NN
technology in this authentic industrial setting, our objective is to highlight the significance of
our approach in tackling the challenges inherent in contemporary industrial automation.
      </p>
      <p>Our aim is to contribute to the continual improvement of the safety, reliability, and efficiency of
elevator control systems. Through the utilisation of NNs and formal verification techniques, we present
a sturdy framework for detecting and classifying anomalies, readily applicable in industrial settings.
This guarantees the sustained safe and reliable operation of elevators across diverse and dynamic
environments. Moreover, through our experimental evaluation, we demonstrate that even simple and
compact network architectures remain pertinent in real-world industrial scenarios, thereby affirming
the effectiveness of existing verification techniques within this domain.</p>
      <p>The rest of the paper is structured as follows. In Section 2 we introduce some basic concepts and
definitions, while in Section 3 we provide an overview of the related works. In Section 4 we present
the use case of interest and the related dataset, and in Section 5 we present our models and the specifics
of the properties considered in the experimental evaluation. In Section 6 we present the results of
our experimental evaluation. Finally, in Section 7 we present our conclusions and some of our future
research plans.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <sec id="sec-2-1">
        <title>2.1. Neural Networks</title>
        <p>A NN constitutes an intricate structure of computing units, commonly referred to as “neurons”. In
the context of feed-forward NNs, these neurons are systematically organised into sequential layers.
Each neuron within a layer exclusively connects to those in the subsequent layer, forming a sequential
flow of information. In a computational sense, a feed-forward NN with N layers can be defined as a
function f : R^n → R^m, where f(x) is calculated iteratively as f(x) = f_N(f_{N−1}(. . . f_1(x) . . . )). Here,
f_1 : R^n → R^{n_1}, . . . , f_N : R^{n_{N−1}} → R^m represent the functions corresponding to the layers of the
network. Notably, the first layer, f_1, is known as the input layer, the final layer, f_N, serves as the output
layer, while any intermediate layers are commonly referred to as hidden layers. We primarily focus on
two types of layers:
• Affine Layers: These layers implement linear transformations, denoted as f : R^n → R^m, and are
defined by the equation f(x) = Ax + b. Here, A is a weight matrix in R^{m×n}, and b is a bias
vector in R^m. Affine layers are fundamental for processing and transforming input data.
• ReLU Layers: Introducing non-linearity to the NN, ReLU (Rectified Linear Unit) layers are
characterised by the function f : R^n → R^n, defined as f(x) = max(0, x). This element-wise operation
enhances the network’s capability to model complex relationships within the data.</p>
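        <p>To make the layer definitions above concrete, the following NumPy sketch composes affine and ReLU layers exactly as described; the weights and shapes are illustrative toy values, not taken from the models used in this paper.</p>

```python
import numpy as np

def affine(x, W, b):
    # Affine layer: f(x) = Ax + b, with A in R^(m x n) and b in R^m.
    return W @ x + b

def relu(x):
    # ReLU layer: element-wise max(0, x), introducing non-linearity.
    return np.maximum(0.0, x)

def feed_forward(x, layers):
    # Compute f(x) = f_N(f_(N-1)(. . . f_1(x) . . .)), layer by layer.
    for f in layers:
        x = f(x)
    return x

# A toy network with 3 inputs, one hidden ReLU layer of 4 neurons, 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
net = [lambda v: affine(v, W1, b1), relu, lambda v: affine(v, W2, b2)]
y = feed_forward(np.array([1.0, -0.5, 0.25]), net)
```

        <p>Here the hidden ReLU layer plays the role of an intermediate layer between the input and output layers described above.</p>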
        <p>In this study, NNs have a dual role, addressing both regression and classification tasks, each with
distinct objectives. For regression, the NN’s aim is to approximate an unknown functional mapping
f : R^n → R. This involves predicting continuous output values based on input data. In our
context, this translates to the task of estimating anomaly magnitudes using sensor measurements. In
the case of classification, the focus shifts to categorising input data into predefined classes or categories.
In this work, classification involves the task of identifying anomaly categories based on sensor data.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Formal Verification</title>
        <p>
          The primary objective of formal verification is to prove that a NN’s behaviour aligns with specific
properties. In particular, our focus is on input-output specifications. These specifications assume that
given a precondition on the input, a corresponding postcondition must hold for the output. For a
practical example of this kind of property we refer to Subsection 5.2. In recent years, considerable
progress has been made in methodologies and tools aimed at addressing verification tasks, as detailed
in [
          <xref ref-type="bibr" rid="ref19 ref20">32, 33</xref>
          ]. We present a succinct overview of state-of-the-art verification techniques based on the
categorisation proposed in [
          <xref ref-type="bibr" rid="ref19">32</xref>
          ]. This classification is derived from the type of guarantees these
techniques offer. Deterministic guarantees provide precise information about whether a property is satisfied.
Methodologies in this category transform the verification problem into a set of constraints, which are
then solved using appropriate solvers. Noteworthy examples include methodologies presented in [
          <xref ref-type="bibr" rid="ref21">34, 6</xref>
          ],
leveraging SMT solvers, and those in [
          <xref ref-type="bibr" rid="ref22">11, 12, 35</xref>
          ], using Mixed Integer Linear Programming (MILP)
solvers. Some of the latest methodologies enhance these solvers with techniques like branch-and-bound
(BnB). One-sided guarantees offer either lower or upper bounds for a variable, serving as sufficient
conditions for a property to hold. Approaches in this category can effectively verify larger models and are less
susceptible to numerical instability, a concern in methods relying on floating-point arithmetic. These
methodologies make use of technologies such as abstract interpretation [
          <xref ref-type="bibr" rid="ref23 ref9">22, 9, 10, 36</xref>
          ], convex
optimisation [
          <xref ref-type="bibr" rid="ref24">37</xref>
          ], interval analysis [5], symbolic interval propagation [
          <xref ref-type="bibr" rid="ref25">38</xref>
          ], and linear approximation [
          <xref ref-type="bibr" rid="ref26 ref27">39, 40</xref>
          ].
Finally, converging guarantees offer converging lower and upper bounds and are particularly
effective for verifying large models. They employ technologies such as layer-by-layer refinement and
analysis [41], reduction to a two-player turn-based game [42, 43], and global optimisation [44, 45].
        </p>
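        <p>As a concrete (and deliberately simplified) example of the one-sided guarantees discussed above, the sketch below propagates interval bounds through a small affine/ReLU network, in the spirit of the interval-analysis methods cited; the network weights and the perturbation radius are hypothetical, and the resulting box is a sound but possibly loose over-approximation of the reachable outputs.</p>

```python
import numpy as np

def affine_bounds(lb, ub, W, b):
    # Propagate a box through f(x) = Ax + b using centre/radius form:
    # new centre c' = Wc + b, new radius r' = |W| r.
    c, r = (lb + ub) / 2.0, (ub - lb) / 2.0
    c2, r2 = W @ c + b, np.abs(W) @ r
    return c2 - r2, c2 + r2

def relu_bounds(lb, ub):
    # ReLU is monotone, so the bounds map through element-wise.
    return np.maximum(0.0, lb), np.maximum(0.0, ub)

# Hypothetical 2-2-2 network and an eps-box around an input point x_hat.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0], [-1.0, 1.0]]), np.zeros(2)
x_hat, eps = np.array([0.5, 0.5]), 0.1
lb, ub = x_hat - eps, x_hat + eps
lb, ub = affine_bounds(lb, ub, W1, b1)
lb, ub = relu_bounds(lb, ub)
lb, ub = affine_bounds(lb, ub, W2, b2)
# Every output reachable from the eps-box now lies inside [lb, ub]; if this
# box satisfies the postcondition the property is proven, otherwise the
# answer is "unknown" (hence "one-sided").
```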
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Related Works</title>
      <p>In the quest for effective anomaly recognition systems, numerous methodologies and techniques have
been investigated and proposed. Considering the vastness of the literature corpus on this subject and
the extent of our work, we hereby provide an overview of some pertinent studies and delineate how
our work distinguishes itself from them.</p>
      <p>In a recent work [46], a novel neural network architecture integrating a variational autoencoder and
a generative adversarial network is proposed for detecting anomalies in software runtime execution
traces. In [47], the authors present an anomaly detection framework for spacecraft multivariate
timeseries data, employing temporal convolutional networks. Meanwhile, [48] proposes a recurrent neural
network architecture for identifying anomalies in electrocardiograms. Additionally, [49] introduces
a novel self-supervised attentive generative adversarial network for discerning unexpected events
in videos. Furthermore, [50] and [51] propose two distinct methodologies for anomaly detection in
Industrial Control Systems. The former combines a one-dimensional convolutional neural network with
a bidirectional long short-term memory network, while the latter utilises a lightweight long short-term
memory variational autoencoder to detect cyber-attacks.</p>
      <p>This brief overview illustrates the successful application of various neural network architectures
for anomaly detection across diverse domains. However, all these models tend to be complex, and
their reliability cannot be guaranteed using current state-of-the-art methodologies for neural network
verification. In contrast, our paper demonstrates that even simple, straightforward architectures can
achieve satisfactory performance levels when applied to specific real-world applications, such as elevator
control. Furthermore, we establish the feasibility of certifying the local robustness of such models using
state-of-the-art neural network verification tools.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Use Case: Elevator Control</title>
      <p>IMOCO4.E [52, 53] is a Key Digital Technologies Joint Undertaking (KDT JU) project that commenced
in September 2021. The project involves the collaboration of 46 partners spanning 13 countries and
is dedicated to advancing mechatronic systems by augmenting their intelligence and adaptability.
This objective will be realised through the integration of state-of-the-art technologies, encompassing
innovative sensor data, model-based methods, AI, machine learning (ML), and principles of the industrial
Internet of Things (IoT). By leveraging these advanced technologies, the project aims to catalyse the
evolution of European manufacturing towards Industry 4.0, facilitating the perception and management
of intricate machinery and robotics. The project’s focal point is the delivery of a reference architecture,
subject to testing and validation across diverse industrial domains through various use cases and
pilot projects. These applications span sectors such as packaging, industrial robotics, healthcare, and
semiconductor manufacturing. In our research, we concentrate on a specific use case related to the
application of NNs in the realm of elevator predictive maintenance. Elevators, being intricate machines,
necessitate regular maintenance to ensure their safe and reliable operation. The control of elevator
movement in tall buildings poses substantial challenges due to non-linearities, uncertainties, and the
dynamics of time-varying systems. By harnessing NNs for predictive maintenance, potential issues can
be identified before they escalate into breakdowns or pose safety risks. This proactive approach not
only minimises downtime but also amplifies the overall performance and safety of elevators.</p>
      <sec id="sec-4-1">
        <title>4.1. Dataset</title>
        <p>In this specific instance, we employed data generated from simulating an authentic elevator system.
This simulation was developed and managed by Siemens, one of the companies involved in the
project. The dataset consists of sensor measurements gathered during various journeys of the simulated
elevator. During these simulations, three distinct types of anomalies were deliberately introduced,
each varying in magnitude, by adjusting the simulation parameters. Specifically, the degradation of
the elevator’s pulley, its sliding motion, and damaged bearings were simulated as independent factors,
in addition to modelling the elevator’s standard behaviour. The data collected from the simulation
underwent preprocessing by domain experts at WEG, another partner of the project, to extract 187
pertinent features for each elevator journey between two different floors. The resulting
dataset thus contains a single data point with 187 features for each journey. The target measurements include
the classification of the anomaly type (either ageing, bearing or sliding) and the computation of the
performance index (PI) by WEG. This performance index indicates how much the elevator’s behaviour
deviates from its nominal state, with higher values signifying greater deviation. It is noteworthy that
these target measurements are treated separately: one model is trained for anomaly classification, while
another is designed to compute the performance index. It is essential to highlight that all numeric data,
including the PI, underwent normalization to fall within the range of 0 to 1.</p>
        <p>The complete dataset consists of 2517 samples, distributed among three classes: 112 in the ageing class,
820 in the bearing class, and 1585 in the sliding class. The substantial imbalance in sample numbers
across classes requires careful consideration during both training and testing phases. Additional
information on how this imbalance is addressed will be outlined in the following section.</p>
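        <p>The figures reported above can be checked directly; the short snippet below reproduces the class counts from the text and illustrates the min-max normalisation to [0, 1] applied to the numeric data (the feature matrix shown is a made-up example, not the real dataset).</p>

```python
import numpy as np

# Class counts as reported for the dataset.
counts = {"ageing": 112, "bearing": 820, "sliding": 1585}
total = sum(counts.values())  # 2517 samples in all

# The majority class outnumbers the minority one by roughly 14x,
# which is why training and evaluation must handle the imbalance.
ratio = counts["sliding"] / counts["ageing"]

def min_max(X):
    # Column-wise min-max normalisation into the range [0, 1].
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

X = np.array([[2.0, 10.0], [4.0, 30.0], [6.0, 50.0]])  # illustrative
X_norm = min_max(X)
```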
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Methodology</title>
      <sec id="sec-5-1">
        <title>5.1. Models</title>
        <p>We employed three distinct NN architectures for our tasks, maintaining a consistent design for both
anomaly classification and PI prediction. The models in consideration are fully-connected NNs featuring
ReLU activation functions and comprised of two hidden layers. The initial set of models includes 32
hidden neurons in the first layer and 16 in the second layer. The second set has 64 in the first layer and
32 in the second layer, while the third set comprises 128 in the first layer and 64 in the second layer. It
is important to highlight that the networks used for classification and those used for PI prediction differ
in their output layers. The classification models have three outputs corresponding to the three classes
of interest, while the PI prediction models possess a single output representing the predicted PI. We
designate the regression models as EA_[h1-h2], where h1 signifies the number of hidden neurons in
the first layer, and h2 denotes the same for the second layer. Similarly, the classification models follow
a parallel naming convention and are denoted as EAC_[h1-h2].</p>
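        <p>To make the three architectures tangible, the sketch below lists their weight shapes and parameter counts for the classification variant (187 inputs, 3 outputs); this is a back-of-the-envelope illustration, not the authors' model-construction code.</p>

```python
def fc_shapes(n_in, hidden, n_out):
    # Weight shapes (rows, cols) of a fully-connected network.
    dims = [n_in] + list(hidden) + [n_out]
    return [(dims[i + 1], dims[i]) for i in range(len(dims) - 1)]

def n_params(shapes):
    # Each layer has rows*cols weights plus one bias per output neuron.
    return sum(r * c + r for r, c in shapes)

# The three classification models: 187 features in, 3 classes out.
models = {f"EAC_[{h1}-{h2}]": fc_shapes(187, (h1, h2), 3)
          for h1, h2 in [(32, 16), (64, 32), (128, 64)]}
sizes = {name: n_params(s) for name, s in models.items()}
```

        <p>Even the largest of the three stays in the tens of thousands of parameters, which is part of what keeps these models within reach of current verification tools.</p>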
        <p>
          To train the models for both regression and classification tasks, we followed a similar methodology.
In both instances, we utilised pyNeVer[
          <xref ref-type="bibr" rid="ref9">22</xref>
          ], a comprehensive tool for managing, training, and verifying
NNs. pyNeVer operates on the PyTorch [54] backend, offering users an intuitive custom training
loop. Specifically, for both tasks, we employed the Adam optimizer with a learning rate set at 0.001
for 50 epochs. During training, we allocated 20% of the dataset for testing and 30% for validation.
Additionally, a batch size of 16 was used for training, and 8 for validation. In the regression task, we
adopted the standard mean square error (MSE) as both the training loss and the evaluation metric. For
the classification task, the selection of the loss function involved more experimentation, as detailed in
the previous section, owing to the significant class imbalance. To address this, we chose the weighted
version of the Cross Entropy loss function, with weights inversely proportional to the number of
samples in each class within the training set. We also implemented a procedure to ensure an equal
representation of samples from each class in both the training and test sets. To evaluate the algorithm’s
performance on the test set, we computed accuracy for predicting each class individually and for all
classes combined.
        </p>
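        <p>The weighting scheme described above, Cross Entropy with per-class weights inversely proportional to class counts, can be sketched in plain NumPy as follows; this is a stand-alone illustration of the loss, not the pyNeVer/PyTorch training loop the authors used.</p>

```python
import numpy as np

def class_weights(counts):
    # Weights inversely proportional to class frequency, normalised so
    # that they sum to the number of classes.
    w = 1.0 / np.asarray(counts, dtype=float)
    return w * len(counts) / w.sum()

def weighted_cross_entropy(logits, labels, weights):
    # Mean of -w[y] * log softmax(logits)[y] over the batch.
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_sample = -weights[labels] * log_p[np.arange(len(labels)), labels]
    return per_sample.mean()

# Rare classes (e.g. "ageing" with 112 samples) receive the largest weight.
w = class_weights([112, 820, 1585])
logits = np.array([[2.0, 0.1, -1.0], [0.0, 1.5, 0.2]])  # toy batch
labels = np.array([0, 2])
loss = weighted_cross_entropy(logits, labels, w)
```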
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Properties</title>
        <p>All our models underwent evaluation for similar local robustness properties. Specifically, for the
classification models, we assessed their ability to withstand adversarial examples. Adversarial examples
are inputs that have been subtly perturbed to mislead a machine learning model into making incorrect
predictions. In our evaluation, this property was formally expressed as:
∀x. ∀y. (|x − x̂|_∞ ≤ ε ∧ y = ν(x)) ⇒ cls(y) = cls(ŷ)
(1)
where ν denotes the network, ŷ = ν(x̂), and cls(·) returns the predicted class. In simpler terms, this
means that when a slight perturbation within the ε Chebyshev norm is applied to a given input sample
x̂, the NN must still predict the same class as it would for the original, unperturbed input.</p>
        <p>For the regression models, we examined a related property:</p>
        <p>∀x. ∀y. (|x − x̂|_∞ ≤ ε ∧ y = ν(x)) ⇒ |y − ŷ|_∞ &lt; δ
(2)
In this case, when the input is perturbed within the ε Chebyshev norm, the corresponding output must
remain within a specified range bounded by the constant δ.</p>
        <p>These formulations, however, are challenging to verify due to the universal quantification involved.
Consequently, we reformulated the safety properties in their negated versions:
∃x. ∃y. (|x − x̂|_∞ ≤ ε ∧ y = ν(x) ∧ cls(y) ≠ cls(ŷ))
(3)
and</p>
        <p>∃x. ∃y. (|x − x̂|_∞ ≤ ε ∧ y = ν(x) ∧ |y − ŷ|_∞ ≥ δ)
(4)
If a verification tool can confirm that the negated versions of these properties do not hold, it certifies
that the network under scrutiny adheres to the safety property.</p>
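        <p>While proving the negated properties unsatisfiable requires a verification tool, they can also be searched for counterexamples by simple sampling, which is how falsification is often attempted before invoking a complete verifier. The sketch below does this for the classification property; the "network" is a hypothetical two-output function, not one of the paper's models.</p>

```python
import numpy as np

def find_counterexample(net, x_hat, eps, n_samples=1000, seed=0):
    # Look for x with |x - x_hat|_inf bounded by eps whose predicted class
    # differs from that of x_hat, i.e. a witness for the negated property.
    rng = np.random.default_rng(seed)
    y_hat = int(np.argmax(net(x_hat)))
    for _ in range(n_samples):
        x = x_hat + rng.uniform(-eps, eps, size=x_hat.shape)
        if int(np.argmax(net(x))) != y_hat:
            return x  # counterexample: local robustness is violated
    return None  # nothing found -- NOT a proof that the property holds

# Hypothetical linear two-class scorer with a boundary close to x_hat.
net = lambda x: np.array([x[0] - x[1], x[1] - x[0]])
x_hat = np.array([0.51, 0.50])
cex = find_counterexample(net, x_hat, eps=0.05)
```

        <p>Failure to find a counterexample is only evidence, not a certificate; that asymmetry is precisely why a verification step is needed.</p>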
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Experimental Results</title>
      <p>The experiments were carried out on a MacBook Air laptop, equipped with 24 GB of RAM and an Apple
M2 CPU. The operating system utilised was macOS Sonoma 14.1.1, and MPS was employed for training the
NNs. For verifying the properties of interest, we utilised the pyNeVer verification tool, employing the
over-approximate algorithm. The code necessary to reproduce our experiments is available at [55].</p>
      <p>All NNs assessed in our experiments demonstrated adequate levels of precision for their respective
tasks. In Table 1, we present the Mean Squared Error (MSE) calculated on the test set for the regression
models under consideration. It is noteworthy that, even in the worst-case scenario, the MSE remains
below 1.5 × 10^−2. Table 2 reports both the overall accuracy on the test set and the specific accuracy
pertaining to each class. The relative accuracy signifies the proportion of correctly predicted samples
belonging to a particular class among all the samples in that class. We introduced this additional
performance metric to address the considerable class imbalance in the dataset. Interestingly, the
complexity of the models does not necessarily correlate with their performance and precision. Even our
smallest networks exhibited satisfactory performance for the given task. This emphasises the idea that,
while larger NNs are gaining popularity, smaller models can still hold relevance in real-life industrial
applications.</p>
      <p>Regarding the formal verification of the models, Tables 1 and 2 illustrate that pyNeVer successfully
verified the property of interest for all our models within a reasonable time frame. Notably, the size of
the input space considered by the property appears to be as critical as the complexity of the models
themselves in determining the verification time required by the tool. Additionally, the regression models
seemed to be less robust to adversarial perturbations, as pyNeVer couldn’t ensure the safety of any
model. In other words, it consistently identified at least one data point that contradicted the safety
property of interest. Conversely, the classification models exhibited greater resilience to adversarial
perturbations, as pyNeVer was able to certify the safety of these models for various magnitudes of
perturbations considered.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusions and Future Work</title>
      <p>In this work, we harnessed NN technology to tackle practical challenges in the elevator control domain,
as part of the IMOCO4.E project. Our comprehensive approach covered both anomaly detection and
magnitude prediction, employing fully-connected NNs for classification and regression tasks. Through
experimentation, we showcased that even moderately-sized NN architectures delivered satisfactory
precision and performance, emphasising the continued relevance of smaller models in practical industrial
applications. Additionally, we explored the crucial aspect of local robustness, assessing the models’
resistance to adversarial perturbations. Our findings, validated using the pyNeVer tool, underscored
the tool’s effectiveness in certifying safety properties for our models within reasonable time frames.</p>
      <p>Our future endeavours will primarily concentrate on extending the experimental evaluation within
the elevator control domain. We aim to enrich our dataset by introducing additional anomaly classes to
the classification task, thereby augmenting the diversity and complexity of the data. This expansion
will allow us to assess our models’ performance across a broader spectrum of anomaly scenarios. To
bolster the robustness of our regression models against adversarial perturbations, we plan to explore
advanced techniques such as adversarial training, input preprocessing, and repair [56, 57, 58, 59, 60].
This initiative aims to further enhance the reliability and safety of our predictive maintenance systems
for elevators in practical settings. By persistently refining our experiments and delving into these
avenues, our goal is to offer more comprehensive solutions and insights for leveraging trustworthy
NNs in elevator control and predictive maintenance.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work was supported by the H2020 ECSEL JU grant agreement No. 101007311 “Intelligent Motion
Control under Industry4.E” (IMOCO4.E) project. The support is gratefully acknowledged.</p>
    </sec>
    <sec id="sec-9">
      <title>References</title>
      <p>sol, Cyprus, October 9-13, 2023, IEEE, 2023, pp. 1–2. doi:10.1109/E-SCIENCE58273.2023.10254890.
[4] C. Ferrari, M. N. Müller, N. Jovanovic, M. T. Vechev, Complete verification via multi-neuron
relaxation guided branch-and-bound, in: The Tenth International Conference on Learning
Representations, ICLR 2022, Virtual Event, April 25-29, 2022, OpenReview.net, 2022.
[5] S. Wang, H. Zhang, K. Xu, X. Lin, S. Jana, C. Hsieh, J. Z. Kolter, Beta-crown: Efficient bound
propagation with per-neuron split constraints for neural network robustness verification, in:
Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information
Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, Curran Associates, Inc.,
2021, pp. 29909–29921.
[6] G. Katz, D. A. Huang, D. Ibeling, K. Julian, C. Lazarus, R. Lim, P. Shah, S. Thakoor, H. Wu, A. Zeljic,
D. L. Dill, M. J. Kochenderfer, C. W. Barrett, The marabou framework for verification and analysis
of deep neural networks, in: Computer Aided Verification - 31st International Conference, CAV
2019, New York City, NY, USA, July 15-18, 2019, Proceedings, Part I, volume 11561 of Lecture Notes
in Computer Science, Springer, Cham., 2019, pp. 443–452.
[7] S. Bak, H. Tran, K. Hobbs, T. T. Johnson, Improved geometric path enumeration for verifying relu
neural networks, in: Computer Aided Verification - 32nd International Conference, CAV 2020, Los
Angeles, CA, USA, July 21-24, 2020, Proceedings, Part I, volume 12224 of Lecture Notes in Computer
Science, Springer, Cham., 2020, pp. 66–96.
[8] P. Kouvaros, T. Kyono, F. Leofante, A. Lomuscio, D. D. Margineantu, D. Osipychev, Y. Zheng,
Formal analysis of neural network-based systems in the aircraft domain, in: Formal Methods - 24th
International Symposium, FM 2021, Virtual Event, November 20-26, 2021, Proceedings, volume
13047 of Lecture Notes in Computer Science, Springer, Cham., 2021, pp. 730–740.
[9] G. Singh, T. Gehr, M. Püschel, M. T. Vechev, An abstract domain for certifying neural networks,</p>
      <p>Proc. ACM Program. Lang. 3 (2019) 41:1–41:30.
[10] H. Tran, X. Yang, D. M. Lopez, P. Musau, L. V. Nguyen, W. Xiang, S. Bak, T. T. Johnson, NNV: the
neural network verification tool for deep neural networks and learning-enabled cyber-physical
systems, in: Computer Aided Verification - 32nd International Conference, CAV 2020, Los Angeles,
CA, USA, July 21-24, 2020, Proceedings, Part I, volume 12224 of Lecture Notes in Computer Science,
Springer, Cham., 2020, pp. 3–17.
[11] P. Henriksen, A. R. Lomuscio, Efficient neural network verification via adaptive refinement and
adversarial search, in: ECAI 2020 - 24th European Conference on Artificial Intelligence, Santiago
de Compostela, Spain, August 29 - September 8, 2020 - Including
10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020), volume 325 of
Frontiers in Artificial Intelligence and Applications , IOS Press, 2020, pp. 2513–2520.
[12] P. Henriksen, A. Lomuscio, DEEPSPLIT: an efficient splitting method for neural network
verification via indirect effect analysis, in: Proceedings of the Thirtieth International Joint Conference on
Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, ijcai.org,
2021, pp. 2549–2555.
[13] D. Guidotti, L. Pandolfo, L. Pulina, Leveraging satisfiability modulo theory solvers for verification
of neural networks in predictive maintenance applications, Inf. 14 (2023) 397. doi:10.3390/INFO14070397.
[14] S. Demarchi, D. Guidotti, A. Pitto, A. Tacchella, Formal verification of neural networks: A case
study about adaptive cruise control, in: Proceedings of the 36th ECMS International Conference
on Modelling and Simulation, ECMS 2022, Ålesund, Norway, May 30 - June 3, 2022, European
Council for Modeling and Simulation, 2022, pp. 310–316. doi:10.7148/2022-0310.
[15] D. Guidotti, Enhancing neural networks through formal verification, in: Discussion and Doctoral
Consortium papers of AI*IA 2019 - 18th International Conference of the Italian Association
for Artificial Intelligence, Rende, Italy, November 19-22, 2019, volume 2495 of CEUR Workshop
Proceedings, CEUR-WS.org, 2019, pp. 107–112.
[16] L. Pandolfo, L. Pulina, S. Vuotto, Smt-based consistency checking of configuration-based
components specifications, IEEE Access 9 (2021) 83718–83726. URL: https://doi.org/10.1109/ACCESS.
[41] X. Huang, M. Kwiatkowska, S. Wang, M. Wu, Safety verification of deep neural networks, in:
Computer Aided Verification - 29th International Conference, CAV 2017, Heidelberg, Germany,
July 24-28, 2017, Proceedings, Part I, volume 10426 of Lecture Notes in Computer Science, Springer,
Cham., 2017, pp. 3–29.
[42] M. Wu, M. Wicker, W. Ruan, X. Huang, M. Kwiatkowska, A game-based approximate verification
of deep neural networks with provable guarantees, Theor. Comput. Sci. 807 (2020) 298–329.
[43] M. Wicker, X. Huang, M. Kwiatkowska, Feature-guided black-box safety testing of deep neural
networks, in: Tools and Algorithms for the Construction and Analysis of Systems - 24th
International Conference, TACAS 2018, Held as Part of the European Joint Conferences on Theory and
Practice of Software, ETAPS 2018, Thessaloniki, Greece, April 14-20, 2018, Proceedings, Part I,
volume 10805 of Lecture Notes in Computer Science, Springer, Cham., 2018, pp. 408–426.
[44] W. Ruan, X. Huang, M. Kwiatkowska, Reachability analysis of deep neural networks with provable
guarantees, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial
Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, ijcai.org, 2018, pp. 2651–2659.
[45] W. Ruan, M. Wu, Y. Sun, X. Huang, D. Kroening, M. Kwiatkowska, Global robustness evaluation
of deep neural networks with provable guarantees for the hamming distance, in: Proceedings of
the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao,
China, August 10-16, 2019, ijcai.org, 2019, pp. 5944–5952.
[46] S. Kong, J. Ai, M. Lu, Y. Gong, GRAND: gan-based software runtime anomaly detection method
using trace information, Neural Networks 169 (2024) 365–377. doi:10.1016/J.NEUNET.2023.10.036.
[47] L. Liu, L. Tian, Z. Kang, T. Wan, Spacecraft anomaly detection with attention temporal convolution
networks, Neural Comput. Appl. 35 (2023) 9753–9761. doi:10.1007/S00521-023-08213-9.
[48] A. Minic, L. Jovanovic, N. Bacanin, C. Stoean, M. Zivkovic, P. Spalevic, A. Petrovic, M. Dobrojevic,
R. Stoean, Applying recurrent neural networks for anomaly detection in electrocardiogram sensor
data, Sensors 23 (2023) 9878. doi:10.3390/S23249878.
[49] C. Huang, J. Wen, Y. Xu, Q. Jiang, J. Yang, Y. Wang, D. Zhang, Self-supervised attentive generative
adversarial networks for video anomaly detection, IEEE Trans. Neural Networks Learn. Syst. 34
(2023) 9389–9403. doi:10.1109/TNNLS.2022.3159538.
[50] X. Zhao, L. Zhang, Y. Cao, K. Jin, Y. Hou, Anomaly detection approach in industrial control systems
based on measurement data, Inf. 13 (2022) 450. doi:10.3390/INFO13100450.
[51] D. Fährmann, N. Damer, F. Kirchbuchner, A. Kuijper, Lightweight long short-term memory
variational auto-encoder for multivariate time series anomaly detection in industrial control
systems, Sensors 22 (2022) 2886. doi:10.3390/S22082886.
[52] M. Cech, A. Beltman, K. Ozols, Digital twins and AI in smart motion control applications, in: 27th
IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2022,
Stuttgart, Germany, September 6-9, 2022, IEEE, 2022, pp. 1–7.
[53] S. Mohamed, G. van der Veen, H. Kuppens, M. Vierimaa, T. Kanellos, H. Stoutjesdijk, R. Masiero,
K. Määttä, J. W. van der Weit, G. Ribeiro, A. Bergmann, D. Colombo, J. Arenas, A. Keary, M. Goubej,
B. Rouxel, P. Kilpeläinen, R. Kadikis, M. Armendia, P. Blaha, J. Stokkermans, M. Cech, A. Beltman,
The IMOCO4.E reference framework for intelligent motion control systems, in: 28th IEEE
International Conference on Emerging Technologies and Factory Automation, ETFA 2023, Sinaia, Romania,
September 12-15, 2023, IEEE, 2023, pp. 1–8. doi:10.1109/ETFA54631.2023.10275410.
[54] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein,
L. Antiga, A. Desmaison, A. Köpf, E. Z. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy,
B. Steiner, L. Fang, J. Bai, S. Chintala, Pytorch: An imperative style, high-performance deep
learning library, in: Advances in Neural Information Processing Systems 32: Annual Conference
on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver,
BC, Canada, 2019, pp. 8024–8035.
[55] D. Guidotti, RCRA-2024-IMOCO4E-UC1, https://github.com/darioguidotti/RCRA-2024-IMOCO4E-UC1, 2024. Accessed: 2024-09-06.
[56] P. Henriksen, F. Leofante, A. Lomuscio, Repairing misclassifications in neural networks using
limited data, in: SAC ’22: The 37th ACM/SIGAPP Symposium on Applied Computing, Virtual
Event, April 25 - 29, 2022, ACM, 2022, pp. 1031–1038.
[57] M. Sotoudeh, A. V. Thakur, Provable repair of deep neural networks, in: PLDI ’21: 42nd ACM
SIGPLAN International Conference on Programming Language Design and Implementation,
Virtual Event, Canada, June 20-25, 2021, ACM, 2021, pp. 588–603.
[58] B. Goldberger, G. Katz, Y. Adi, J. Keshet, Minimal modifications of deep neural networks using
verification, in: LPAR 2020: 23rd International Conference on Logic for Programming, Artificial
Intelligence and Reasoning, Alicante, Spain, May 22-27, 2020, volume 73 of EPiC Series in Computing,
EasyChair, 2020, pp. 260–278.
[59] D. Guidotti, F. Leofante, C. Castellini, A. Tacchella, Repairing learned controllers with convex
optimization: A case study, in: Integration of Constraint Programming, Artificial Intelligence,
and Operations Research - 16th International Conference, CPAIOR 2019, Thessaloniki, Greece,
June 4-7, 2019, Proceedings, volume 11494 of Lecture Notes in Computer Science, Springer, 2019, pp.
364–373. doi:10.1007/978-3-030-19212-9_24.
[60] D. Guidotti, F. Leofante, Repair of convolutional neural networks using convex optimization:
Preliminary experiments, in: Proceedings of the Cyber-Physical Systems PhD Workshop 2019, an
event held within the CPS Summer School "Designing Cyber-Physical Systems - From concepts to
implementation", Alghero, Italy, September 23, 2019, volume 2457 of CEUR Workshop Proceedings,
CEUR-WS.org, 2019, pp. 18–28. URL: https://ceur-ws.org/Vol-2457/3.pdf.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
[1] D. Aineto, R. De Benedictis, M. Maratea, M. Mittelmann, G. Monaco, E. Scala, L. Serafini, I. Serina, F. Spegni, E. Tosello, A. Umbrico, M. Vallati (Eds.), Proceedings of the International Workshop on Artificial Intelligence for Climate Change, the Italian Workshop on Planning and Scheduling, the RCRA Workshop on Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion, and the Workshop on Strategies, Prediction, Interaction, and Reasoning in Italy (AI4CC-IPS-RCRA-SPIRIT 2024), co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence (AIxIA 2024), CEUR Workshop Proceedings, CEUR-WS.org, 2024.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
[2] D. Guidotti, R. Masiero, L. Pandolfo, L. Pulina, Vector reconstruction error for anomaly detection: Preliminary results in the IMOCO4.E project, in: 28th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2023, Sinaia, Romania, September 12-15, 2023, IEEE, 2023, pp. 1–4. doi:10.1109/ETFA54631.2023.10275396.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
[3] D. Guidotti, L. Pandolfo, L. Pulina, Detection of component degradation: A study on autoencoder-based approaches, in: 19th IEEE International Conference on e-Science, e-Science 2023, Limassol, Cyprus, October 9-13, 2023, IEEE, 2023.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
[17] R. Eramo, T. Fanni, D. Guidotti, L. Pandolfo, L. Pulina, K. Zedda, Verification of neural networks: Challenges and perspectives in the AIDOaRT project (short paper), in: Proceedings of the 10th Italian Workshop on Planning and Scheduling (IPS 2022), RCRA Incontri E Confronti (RiCeRcA 2022), and the Workshop on Strategies, Prediction, Interaction, and Reasoning in Italy (SPIRIT 2022), co-located with 21st International Conference of the Italian Association for Artificial Intelligence (AIxIA 2022), November 28 - December 2, 2022, University of Udine, Udine, Italy, volume 3345 of CEUR Workshop Proceedings, CEUR-WS.org, 2022.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
[18] D. Guidotti, Safety analysis of deep neural networks, in: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, ijcai.org, 2021, pp. 4887–4888. doi:10.24963/IJCAI.2021/675.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
[19] D. Guidotti, Verification and repair of neural networks, in: Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, AAAI Press, 2021, pp. 15714–15715. doi:10.1609/AAAI.V35I18.17854.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
[20] D. Guidotti, F. Leofante, L. Pulina, A. Tacchella, Verification and repair of neural networks: A progress report on convolutional models, in: AI*IA 2019 - Advances in Artificial Intelligence - XVIIIth International Conference of the Italian Association for Artificial Intelligence, Rende, Italy, November 19-22, 2019, Proceedings, volume 11946 of Lecture Notes in Computer Science, Springer, 2019, pp. 405–417. doi:10.1007/978-3-030-35166-3_29.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
[21] D. Guidotti, F. Leofante, L. Pulina, A. Tacchella, Verification of neural networks: Enhancing scalability through pruning, in: ECAI 2020 - 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain, August 29 - September 8, 2020 - Including 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020), volume 325 of Frontiers in Artificial Intelligence and Applications, IOS Press, 2020, pp. 2505–2512. doi:10.3233/FAIA200384.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
[22] D. Guidotti, L. Pulina, A. Tacchella, pyNeVer: A framework for learning and verification of neural networks, in: Automated Technology for Verification and Analysis - 19th International Symposium, ATVA 2021, Gold Coast, QLD, Australia, October 18-22, 2021, Proceedings, volume 12971 of Lecture Notes in Computer Science, Springer, 2021, pp. 357–363. doi:10.1007/978-3-030-88885-5_23.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
[23] S. Demarchi, D. Guidotti, Counter-example guided abstract refinement for verification of neural networks, in: Proceedings of the CPS Summer School PhD Workshop 2022, co-located with 4th Edition of the CPS Summer School (CPS 2022), Pula, Sardinia (Italy), September 19-23, 2022, volume 3252 of CEUR Workshop Proceedings, CEUR-WS.org, 2022.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
[24] D. Guidotti, Verification of neural networks for safety and security-critical domains, in: Proceedings of the 10th Italian Workshop on Planning and Scheduling (IPS 2022), RCRA Incontri E Confronti (RiCeRcA 2022), and the Workshop on Strategies, Prediction, Interaction, and Reasoning in Italy (SPIRIT 2022), co-located with 21st International Conference of the Italian Association for Artificial Intelligence (AIxIA 2022), November 28 - December 2, 2022, University of Udine, Udine, Italy, volume 3345 of CEUR Workshop Proceedings, CEUR-WS.org, 2022.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
[25] D. Guidotti, L. Pandolfo, L. Pulina, Verifying neural networks with non-linear SMT solvers: a short status report, in: 35th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2023, Atlanta, GA, USA, November 6-8, 2023, IEEE, 2023, pp. 423–428. doi:10.1109/ICTAI59109.2023.00068.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
[26] D. Guidotti, L. Pandolfo, L. Pulina, Verification of NNs in the IMOCO4.E project: Preliminary results, in: 28th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2023, Sinaia, Romania, September 12-15, 2023, IEEE, 2023, pp. 1–4. doi:10.1109/ETFA54631.2023.10275345.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
[27] D. Guidotti, L. Pandolfo, L. Pulina, Verifying neural networks with SMT: an experimental evaluation, in: 19th IEEE International Conference on e-Science, e-Science 2023, Limassol, Cyprus, October 9-13, 2023, IEEE, 2023, pp. 1–2. doi:10.1109/E-SCIENCE58273.2023.10254877.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
[28] S. Demarchi, D. Guidotti, L. Pulina, A. Tacchella, Supporting standardization of neural networks verification with VNNLIB and CoCoNet, in: Proceedings of the 6th Workshop on Formal Methods for ML-Enabled Autonomous Systems, FoMLAS@CAV 2023, Paris, France, July 17-18, 2023, volume 16 of Kalpa Publications in Computing, EasyChair, 2023, pp. 47–58. doi:10.29007/5PDH.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
[29] D. Guidotti, L. Pandolfo, L. Pulina, Formal verification of neural networks: A "step zero" approach for vehicle detection, in: Advances and Trends in Artificial Intelligence. Theory and Applications - 37th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2024, Hradec Kralove, Czech Republic, July 10-12, 2024, Proceedings, volume 14748 of Lecture Notes in Computer Science, Springer, 2024, pp. 297–309. doi:10.1007/978-981-97-4677-4_25.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
[30] D. Guidotti, L. Pandolfo, L. Pulina, Verifying autoencoders for anomaly detection in predictive maintenance, in: Advances and Trends in Artificial Intelligence. Theory and Applications - 37th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2024, Hradec Kralove, Czech Republic, July 10-12, 2024, Proceedings, volume 14748 of Lecture Notes in Computer Science, Springer, 2024, pp. 188–199. doi:10.1007/978-981-97-4677-4_16.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
[31] D. Guidotti, F. Leofante, A. Tacchella, C. Castellini, Improving reliability of myocontrol using formal verification, IEEE Transactions on Neural Systems and Rehabilitation Engineering 27 (2019) 564–571. doi:10.1109/TNSRE.2019.2893152.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
[32] X. Huang, D. Kroening, W. Ruan, J. Sharp, Y. Sun, E. Thamo, M. Wu, X. Yi, A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability, Comput. Sci. Rev. 37 (2020) 100270.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
[33] F. Leofante, N. Narodytska, L. Pulina, A. Tacchella, Automated verification of neural networks: Advances, challenges and perspectives, CoRR abs/1805.09938 (2018).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
[34] L. Pulina, A. Tacchella, An abstraction-refinement approach to verification of artificial neural networks, in: T. Touili, B. Cook, P. B. Jackson (Eds.), Computer Aided Verification, 22nd International Conference, CAV 2010, Edinburgh, UK, July 15-19, 2010, Proceedings, volume 6174 of Lecture Notes in Computer Science, Springer, Cham, 2010, pp. 243–257.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
[35] R. Bunel, J. Lu, I. Turkaslan, P. H. S. Torr, P. Kohli, M. P. Kumar, Branch and bound for piecewise linear neural network verification, J. Mach. Learn. Res. 21 (2020) 42:1–42:39.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [36]
          <string-name><given-names>S.</given-names> <surname>Bak</surname></string-name>,
          <article-title>nnenum: Verification of ReLU neural networks with optimized abstraction refinement</article-title>,
          in:
          <source>NASA Formal Methods - 13th International Symposium, NFM 2021, Virtual Event, May 24-28, 2021, Proceedings</source>,
          volume
          <volume>12673</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2021</year>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>36</lpage>
          . doi:10.1007/978-3-030-76384-8_2.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [37]
          <string-name><given-names>E.</given-names> <surname>Wong</surname></string-name>,
          <string-name><given-names>J. Z.</given-names> <surname>Kolter</surname></string-name>,
          <article-title>Provable defenses against adversarial examples via the convex outer adversarial polytope</article-title>,
          in:
          <source>Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018</source>,
          volume
          <volume>80</volume>
          of Proceedings of Machine Learning Research, PMLR,
          <year>2018</year>
          , pp.
          <fpage>5283</fpage>
          -
          <lpage>5292</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [38]
          <string-name><given-names>E.</given-names> <surname>Botoeva</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Kouvaros</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Kronqvist</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Lomuscio</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Misener</surname></string-name>,
          <article-title>Efficient verification of ReLU-based neural networks via dependency analysis</article-title>,
          in:
          <source>The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020</source>,
          AAAI Press,
          <year>2020</year>
          , pp.
          <fpage>3291</fpage>
          -
          <lpage>3299</lpage>
          . doi:10.1609/aaai.v34i04.5729.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [39]
          <string-name><given-names>H.</given-names> <surname>Zhang</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Weng</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Chen</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Hsieh</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Daniel</surname></string-name>,
          <article-title>Efficient neural network robustness certification with general activation functions</article-title>,
          in:
          <source>Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada</source>,
          <year>2018</year>
          , pp.
          <fpage>4944</fpage>
          -
          <lpage>4953</lpage>
          .
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [40]
          <string-name><given-names>T.</given-names> <surname>Weng</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Zhang</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Chen</surname></string-name>,
          <string-name><given-names>Z.</given-names> <surname>Song</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Hsieh</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Daniel</surname></string-name>,
          <string-name><given-names>D. S.</given-names> <surname>Boning</surname></string-name>,
          <string-name><given-names>I. S.</given-names> <surname>Dhillon</surname></string-name>,
          <article-title>Towards fast computation of certified robustness for ReLU networks</article-title>,
          in:
          <source>Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018</source>,
          volume
          <volume>80</volume>
          of Proceedings of Machine Learning Research, PMLR,
          <year>2018</year>
          , pp.
          <fpage>5273</fpage>
          -
          <lpage>5282</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>