<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Julien Girard-Satabin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Alberti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>François Bobot</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zakaria Chihani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Augustin Lemesle</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Université Paris-Saclay, CEA, List</institution>
          ,
          <addr-line>F-91120, Palaiseau</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <abstract>
        <p>We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety. CAISAR provides a unified entry point for defining verification problems by using WhyML, the mature and expressive language of the Why3 verification platform. Moreover, CAISAR orchestrates and composes state-of-the-art machine learning verification tools which, individually, are not able to efficiently handle all problems but, collectively, can cover a growing number of properties. Our aim is to assist, on the one hand, the V&amp;V process by reducing the burden of choosing the methodology tailored to a given verification problem, and on the other hand tool developers by factorizing useful features - visualization, report generation, property description - in one platform. CAISAR will soon be available at https://git.frama-c.com/pub/caisar.</p>
      </abstract>
      <kwd-group>
        <kwd>formal verification</kwd>
        <kwd>platform</kwd>
        <kwd>formal methods</kwd>
        <kwd>testing</kwd>
        <kwd>metamorphic testing</kwd>
        <kwd>reachability analysis</kwd>
        <kwd>satisfaction modulo theory</kwd>
        <kwd>artificial intelligence</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The integration of machine learning programs as components of critical systems is said to be bound to happen; initiatives from various private and governmental actors (e.g., US' NSF funding for Trustworthy AI1, France's Grand Défi IA de confiance 2) are a consequence of that fact. Trusting such programs is thus becoming a crucial issue, on both the technical and ethical sides.</p>
      <p>A possible approach to trust is formal test and verification, a broad set of techniques and tools that have been applied to software safety for several decades. These formal methods build on sound mathematical foundations to assess the behaviour of programs in a principled way, be it for generating tests or providing proven guarantees.</p>
      <p>For the last few years, several independent works have started to investigate the possible applications of formal verification to machine learning program verification and its limitations. This led to what could be characterized as a Cambrian explosion of tools aiming to solve a particular subset of the machine learning verification field. In less than five years, more than 20 tools were produced, upgraded or abandoned. These tools use different techniques, different input formats, handle different ML artefacts and, most importantly, have varying performances depending on the problems. Our goal of orchestrating multiple tools aims at maximizing the property coverage in the context of a validation and verification process.</p>
      <p>To this end, we present a platform dedicated to Characterizing Artificial Intelligence Safety and Robustness (CAISAR) that aims to unify several formal methods and tools: at the input, through the use of a mature and expressive property description and proof orchestration language; at the output, through the factorization of features such as visualization and report generation; and at the usage, through shared heuristics and interconnections between tools.</p>
      <p>The answer to the question “To what extent can I trust my machine learning program?” has many components, ranging from data analysis to decision explainability. One such important component is verification and validation, and we wish to make CAISAR an important element in the safety toolbelt by covering these applications.</p>
      <p>In the following, we will first present the design
principles of CAISAR and state its main goals. We will follow
by a description of its most prominent features, as well
as its limitations. We will then explain the position of
CAISAR regarding other tools for formal verification of
machine learning programs, and conclude by presenting
some future work and possible research problems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Core principles of CAISAR</title>
      <p>The aim for CAISAR is to provide a verification environment for Artificial Intelligence (AI) based systems tailored to different needs. The profusion of tools for AI-program certification offers numerous possibilities, from the choice of technology (formal methods, test generation) to the scope of properties to check (coverage, suitability to a given distribution, robustness). However, with increased possibilities comes the burden of choice. Which method better suits a given use case? Are the results provided by this particular method trustworthy enough? How to bring trust in the process of selecting, tailoring and computing results of a given tool? How to evaluate a given tool against others? Those are the questions that we aim to answer with CAISAR.</p>
      <p>In order for users to trust CAISAR as well, it needs to rely on well-known and approved principles and technologies. It is developed in OCaml, a strongly typed programming language used to develop tools for program verification and validation. Such tools include Coq [<xref ref-type="bibr" rid="ref1">1</xref>], a proof assistant that was for instance used to develop CompCert [<xref ref-type="bibr" rid="ref2">2</xref>], a C compiler that is guaranteed to output C-ANSI compliant source code; Frama-C [<xref ref-type="bibr" rid="ref3">3</xref>], a platform for the static analysis of C code; and Why3 [<xref ref-type="bibr" rid="ref4">4</xref>], a platform for deductive verification of programs.</p>
      <sec id="sec-2-1">
        <title>2.1. Compatibility with existing and future methods</title>
        <p>The first principle of CAISAR is to ease this burden of
choice by automating parts of it. CAISAR aims to provide
a unique interface for vastly different tools, with a single
entry point to specify verification goals. Choosing which
tool to use is an informed decision that may not be relevant
for the user; the goal is to provide an actionable answer on
the safety of the system, by using whatever tool is suitable
for the problem. Ideally, the user should not be bothered
with deciding which tool is suitable for their use case:
CAISAR will automatically figure out how to express the
given property to suit verifiers. As AI systems pipelines
are becoming more and more complex, it is crucial for
CAISAR to handle this complexity. Currently, CAISAR
supports neural networks and support vector machines,
and an industrial benchmark of an ensemble model
(NNSVM), which we are unable to further discuss, is being
used as a concrete real-world use case.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Common modelling and answers</title>
        <p>Existing verifiers rely on different decision procedures, e.g., Mixed Integer Linear Programming (MILP), abstract interpretation, or Satisfiability Modulo Theories (SMT). Modelling a verification problem using these frameworks requires different skills and is time-consuming; for instance, some modelling choices made for MILP may not apply under SMT. Moreover, even if one succeeds in phrasing a verification problem under multiple decision procedures, the different results may not be immediately comparable.</p>
        <p>CAISAR aims to provide a common ground for inputs and outputs, which will lead to an easier comparison, lower time consumption and an informed decision. Furthermore, collecting and presenting the user with multiple answers from different techniques can provide additional confidence on the studied system.</p>
      </sec>
      <sec id="sec-2-2b">
        <title>2.3. Tools composition</title>
        <p>Some works are starting to combine multiple techniques [<xref ref-type="bibr" rid="ref5">5</xref>] for their analysis, using an exact MILP solver to refine bounds obtained by abstract interpretation. Our goal with CAISAR is to bring tool composition to another level. For instance, metamorphic transformations could generate different input space partitions for formal verifiers. A reachability analysis tool could be called numerous times with tighter bounds until reaching a precise enough answer. Coverage testing objectives could be extracted from reachability analysis tools and fed to test generators. CAISAR will be more than the sum of its parts, allowing communication between vastly different tools to provide faster and more accurate answers.</p>
      </sec>
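<p>The first composition mode, collecting and summarizing answers from several verifiers, can be pictured with a small sketch. This is our own illustration (the verdict names and the compose function are not CAISAR's API):</p>

```python
# Illustrative sketch of composing answers from several verifiers.
# Verdict names and this API are our own, not CAISAR's.
from typing import List, Tuple

Verdict = str  # one of "valid", "invalid", "unknown"

def compose(verdicts: List[Tuple[str, Verdict]]) -> Tuple[Verdict, List[str]]:
    """Keep the most informative answer: any definite verdict wins over
    "unknown"; the summary records what every tool reported."""
    summary = ["{}: {}".format(tool, v) for tool, v in verdicts]
    for _, v in verdicts:
        if v in ("valid", "invalid"):
            return v, summary
    return "unknown", summary

result, summary = compose([("toolA", "unknown"), ("toolB", "valid")])
```

A real orchestrator must also reconcile contradictory definite answers; this sketch assumes the verifiers are sound.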
      <sec id="sec-2-3">
        <title>2.4. Automatic proposal of verification strategies</title>
        <p>A long-term goal for CAISAR is to provide a
reasoning engine where past verification problems processed
by CAISAR can inform next ones, gradually building a
knowledge base that is suitable for the specific needs of
the user. CAISAR will also implement its own built-in
heuristics to supplement specialized programs that do not
implement them.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Architecture and features</title>
      <p>CAISAR’s architecture can be divided into the following
functional blocks:</p>
      <list list-type="order">
        <list-item><p>A Common specification Language (CL)</p></list-item>
        <list-item><p>A Proof Obligation Generator (POG), associated with a Dispatcher (DISP)</p></list-item>
        <list-item><p>An Intermediate Representation (IR)</p></list-item>
        <list-item><p>A visualization module (VIZ)</p></list-item>
      </list>
      <sec id="sec-3-2">
        <p>See fig. 1 for a visual depiction of dependencies between blocks.</p>
        <sec id="sec-3-2-1">
          <title>3.1. Specification language and verification predicates</title>
          <p>A typical task for program verification involves solving a verification problem V. A verification problem consists in checking that a program with a given set of inputs meets certain expectations on its outputs. More formally, let X be an input space, φ(·) be a property on the input space, f be a program, and ψ(·, ·) be a property on the output space. By property, we mean a statement that describes a desirable behaviour for the program. Let V = (X, φ, f, ψ) be a verification problem. The goal is to verify the following property: ∀x ∈ X, φ(x) ⇒ ψ(f(x), x).</p>
          <p>To write V, one needs to be able to express concisely and without ambiguity each component: the program to verify f, the properties φ and ψ, and the dataset X. To this end, CAISAR provides full support for the WhyML specification and programming language [<xref ref-type="bibr" rid="ref4">4</xref>]. WhyML is a language with static strong typing, pattern matching, type invariants and inductive predicates. This gives WhyML programs a sound semantics as logical propositions. WhyML is at the core of the Why3 verification platform, and has been used as an intermediate language for verification of Ada programs [<xref ref-type="bibr" rid="ref6">6</xref>]. This global expressiveness and safety allows writing V once and for all, independently of the verifier. For instance, Figure 2 shows the definition of robustness against a perturbation of amplitude ε using the l∞ distance within CAISAR's standard library, and fig. 3 the WhyML file the user needs to write in order to verify the robustness of a given TestSVM against a perturbation of amplitude 0.5. Note that all the necessary elements to define V, namely X, φ, f and ψ, are defined in those files: f is the function TestSVM, and ψ is the predicate itself. Note that WhyML is not limited to the robustness against a given perturbation property, often met in the literature. For instance, asserting that a neural network respects the property of being differentially private [<xref ref-type="bibr" rid="ref7">7</xref>] or of respecting causal fairness [<xref ref-type="bibr" rid="ref8">8</xref>] is something that could be phrased as WhyML programs, since those properties have a mathematical characterization. Finally, WhyML does not constrain the form of φ nor ψ. In particular, it is possible to define multiple verification goals in the same V, opening the way to subdivide it into subproblems, and providing answers at each step.</p>
          <p>CAISAR then automatically translates V into a format supported by the selected verifiers, through a succession of encoding transformations and simplifications. For instance, some verifiers are best used when trying to falsify the property: instead of checking ∀x ∈ X, φ(x) ⇒ ψ(f(x), x), they instead try to satisfy the negation ∃x ∈ X, φ(x) ∧ ¬ψ(f(x), x). This transformation is embedded in CAISAR when calling Marabou: a verification problem V can be transformed into an equivalent one V′ that can be dispatched to Marabou.</p>
          <p><bold>3.2. Proof Obligation Generation for various tools</bold></p>
          <p>CAISAR currently supports a variety of tools and techniques: metamorphic testing, reachability analysis based on abstract interpretation, and constraint-based propagation. CAISAR can analyze neural networks and support vector machines. This versatility allows CAISAR to verify system components using different machine learning architectures.</p>
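<p>The falsification view of a verification problem can be illustrated with a deliberately naive toy search; f, φ, ψ and the grid below are our own stand-ins, not how an actual decision procedure proceeds:</p>

```python
# Toy verification problem V = (X, phi, f, psi): a sketch of the
# falsification view, not how an actual verifier proceeds.
def f(x: float) -> int:
    # A step "classifier": label 1 once x reaches 1.0.
    return 1 if x >= 1.0 else 0

x0, eps = 0.8, 0.3
phi = lambda x: eps >= abs(x - x0)   # input-space property: x in the box
psi = lambda y: y == f(x0)           # output-space property: stable label

# Instead of proving: forall x in X, phi(x) implies psi(f(x)),
# try to satisfy the negation: exists x, phi(x) and not psi(f(x)).
# A finite grid stands in for a real decision procedure.
grid = [x0 + eps * (i / 50.0 - 1.0) for i in range(101)]
counterexample = next((x for x in grid if phi(x) and not psi(f(x))), None)
# Here the label flips inside the box, so a counterexample exists.
```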
          <p>
            <bold>Marabou</bold>
            Marabou [<xref ref-type="bibr" rid="ref9">9</xref>] is a complete verifier for deep neural networks. Its core routine relies on a modified simplex algorithm that lazily relaxes constraints on piecewise-linear activation functions. Marabou also makes use of several heuristics that help speed up the verification procedure, like relying on tight convex over-approximations [<xref ref-type="bibr" rid="ref10">10</xref>] or sound over-approximations [<xref ref-type="bibr" rid="ref11">11</xref>]. It can answer reachability and conjunction-of-linear-constraints queries. Marabou ranked fifth at the VNN-COMP 2021.
          </p>
          <p>It is currently in active development.</p>
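<p>The convex over-approximations mentioned above can be pictured on ReLU with the textbook “triangle” relaxation. This is a generic illustration, not Marabou's code:</p>

```python
# Textbook "triangle" convex relaxation of ReLU on [l, u] when l is
# negative and u positive -- a generic illustration, not Marabou's code.
def relu_upper_bound(l: float, u: float):
    """y = relu(x) satisfies y >= 0, y >= x, and y is bounded above by
    the chord through (l, 0) and (u, u): y = slope * x + intercept."""
    assert 0.0 > l and u > 0.0
    slope = u / (u - l)
    intercept = -slope * l
    return slope, intercept

slope, intercept = relu_upper_bound(-1.0, 3.0)
# The chord touches relu exactly at both endpoints of the interval.
```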
          <p>
            <bold>SAVer</bold>
            The Support Vector Machine reachability analysis tool SAVer [<xref ref-type="bibr" rid="ref12">12</xref>] is specialized in the verification of support vector machines (SVM), a popular machine learning algorithm used alongside neural networks for classification or regression tasks. SAVer can answer reachability queries, and supports a variety of SVM configurations. This tool was selected for support as, to the best of our knowledge, it is the first one to deal with verification of SVM.
          </p>
          <p>
            <preformat>use TestSVM.SVMasArray
use ieee_float.Float64
use caisar.SVM

goal G: forall a: input_type.
  robust_to svm_apply a (0.5:t)</preformat>
            Figure 3: Example verification problem specified to CAISAR. The program to verify is TestSVM, the input space is defined by the elements in a, the output space is the result of the application of the function svm_apply.
          </p>
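<p>For intuition, a reachability query for a purely linear SVM over a perturbation box can be bounded with interval arithmetic. This minimal sketch is our own and is unrelated to SAVer's actual abstraction:</p>

```python
# Minimal interval-arithmetic sketch of a reachability query for a
# *linear* SVM over a perturbation box; unrelated to SAVer's internals.
def linear_svm_interval(w, b, x, eps):
    """Bounds of the decision value w.x' + b over the box where every
    x'_i stays within eps of x_i; the signs of the bounds decide which
    labels are reachable."""
    lo = b + sum(wi * xi - abs(wi) * eps for wi, xi in zip(w, x))
    hi = b + sum(wi * xi + abs(wi) * eps for wi, xi in zip(w, x))
    return lo, hi

lo, hi = linear_svm_interval([1.0, -2.0], 0.5, [2.0, 1.0], 0.1)
robust = lo > 0.0 or 0.0 > hi  # single class over the whole box?
```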
          <p>
            <bold>Alt-Ergo and SMTLIB-compliant solvers</bold>
            Existing general-purpose SMT solvers for program verification like Alt-Ergo [<xref ref-type="bibr" rid="ref13">13</xref>] or Z3 [<xref ref-type="bibr" rid="ref14">14</xref>] all support a standard input language, SMTLIB [<xref ref-type="bibr" rid="ref15">15</xref>]. CAISAR leverages Why3's existing support for SMT solvers and can translate neural network control flows directly into SMTLIB-compliant strings using its intermediate representation, which allows the support of a variety of off-the-shelf solvers. Note that the VNNLIB standard, used in the VNN-COMP, uses a subset of SMTLIB2, which paves the way for the support of future tools in CAISAR.
          </p>
          <p>
            <preformat>type input_type = int -&gt; t
type output_type = int

type model = {
  app         : input_type -&gt; output_type;
  num_input   : int;
  num_classes : int
}

predicate dist_linf (a: input_type) (b: input_type)
                    (eps: t) (n: int) =
  forall i. 0 &lt;= i &lt; n -&gt;
    .- eps .&lt; a i .- b i .&lt; eps</preformat>
          </p>
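<p>The flavour of such an encoding can be sketched by emitting an SMT-LIB2 robustness query for a one-dimensional affine model. This is a hand-rolled illustration; CAISAR's actual encoding goes through Why3 and is far more general:</p>

```python
# Hand-rolled sketch of emitting an SMT-LIB2 robustness query for the
# one-dimensional model f(x) = w*x + b; CAISAR's actual encoding goes
# through Why3 and is more general.
def lit(v: float) -> str:
    # SMT-LIB2 has no negative literals: -1.5 must be written (- 1.5).
    return str(v) if v >= 0 else "(- {})".format(-v)

def robustness_query(w: float, b: float, x0: float, eps: float) -> str:
    fx = "(+ (* {} x) {})".format(lit(w), lit(b))
    keep = "(>= {} 0)".format(fx)          # sign of f at the center x0
    if 0 > w * x0 + b:
        keep = "(not {})".format(keep)
    return "\n".join([
        "(set-logic QF_LRA)",
        "(declare-const x Real)",
        "(assert (>= x {}))".format(lit(x0 - eps)),   # lower bound of box
        "(assert (>= {} x))".format(lit(x0 + eps)),   # upper bound of box
        "(assert (not {}))".format(keep),  # satisfiable iff the sign flips
        "(check-sat)",
    ])

query = robustness_query(2.0, -1.0, 1.0, 0.25)
```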
          <p>
            Python Reachability Assessment Tool
The Python Reachability Assessment Tool (PyRAT) is a
static analyzer targeting specifically neural networks. It
builds upon the framework of abstract interpretation [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ]
using abstract domains adapted for the approximation
of the reachable space in a neural network. Three main
domains are used: intervals with symbolic relations as
described in [
            <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
            ], zonotopes [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ] and the DeepPoly domain [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ].
          </p>
          <p>
            For low-dimensional inputs, PyRAT uses input partitioning as described in [<xref ref-type="bibr" rid="ref18">18</xref>], with heuristics tailored to relational domains: the zonotope domain and the DeepPoly domain with backsubstitution. Those heuristics allow the computation of a non-trivial score (e.g., not just widest interval first) ranking the inputs by their estimated influence on the outputs. PyRAT has comparable results to state-of-the-art analyzers on the widely used ACAS-Xu [20] benchmark, and outperforms the similar domains of ERAN on S-shaped activation functions such as the sigmoid or hyperbolic tangent, thanks to specific approximations.
          </p>
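<p>The interval domain underlying such analyzers can be sketched in a few lines of plain interval propagation, without the symbolic relations or zonotopes that PyRAT layers on top:</p>

```python
# Plain interval propagation through an affine layer followed by ReLU --
# the simplest abstract domain such analyzers build on (PyRAT's symbolic
# relations and zonotopes refine this considerably).
def affine_interval(W, b, bounds):
    """Propagate per-input bounds (l, u) through y = W x + b."""
    out = []
    for row, bi in zip(W, b):
        lo = bi + sum(w * (l if w >= 0 else u) for w, (l, u) in zip(row, bounds))
        hi = bi + sum(w * (u if w >= 0 else l) for w, (l, u) in zip(row, bounds))
        out.append((lo, hi))
    return out

def relu_interval(bounds):
    return [(max(0.0, l), max(0.0, u)) for l, u in bounds]

# Two inputs in [0, 1] each, one neuron computing x1 - x2, then ReLU.
hidden = relu_interval(affine_interval([[1.0, -1.0]], [0.0], [(0.0, 1.0), (0.0, 1.0)]))
```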
          <p>
            <bold>AIMOS: a metamorphic testing utility</bold>
            AI Metamorphism Observing Software (AIMOS) is a software developed at the same time as CAISAR, aiming to provide metamorphic property testing or perturbations on a dataset for a given AI model. Metamorphic testing is a testing technique relying on property symmetries and invariance on the operating domain. See [21] for a comprehensive survey on this approach. AIMOS offers tools to derive properties from a set of transformations on the inputs: given X, φ, ψ and a transformation function t: x ∈ X ↦ t(x), it generates a set of new properties φ′ that are coherent with the transformation. As an example, a symmetry on the inputs of a classification model could result in a symmetry on the outputs; AIMOS would then automatically modify the property to check against the symmetrical labels.
          </p>
          <p>
            <preformat>predicate robust_to (model: model) (a: input_type) (eps: t) =
  forall b. dist_linf a b eps model.num_input -&gt;
    model.app a = model.app b</preformat>
            Figure 2: An example of a predicate in CAISAR's standard library: being “robust to” a perturbation of amplitude ε. Here, the predicate defines ψ(x).
          </p>
          <p>
            AIMOS can generate test case scenarios from the most common input transformations (geometrical rotations, noise addition); others can be added if necessary. AIMOS was evaluated on a metamorphic property on the ACAS-Xu benchmark. The aim of the property was to evaluate the ability of neural networks trained on ACAS to generalize with symmetric inputs. Given a symmetry on inputs, AIMOS generates the expected symmetrical output, and tests models against the base and symmetrical outputs. See table 1 for results. AIMOS was able to show that neural networks trained on ACAS have a low, but noticeable sensitivity to symmetry on one input.
          </p>
          <p><bold>3.4. Answer composition</bold></p>
          <p>
            CAISAR currently offers two ways to compose verifiers. First, CAISAR can launch several solvers on the same task and compose their answers: it can then provide a summary stating which solver succeeded and which one failed. Second, CAISAR has the ability to verify pipelines that are composed of several machine learning programs: for instance, a pipeline composed of several neural networks, or a neural network whose outputs are processed by an SVM. CAISAR can be used to state an overall verification goal, and to model that the outputs of a block of the pipeline are the inputs of another block. More advanced methods of composition, such as automatic subgoal generation or refinement by multiple analyses, constitute a promising research venue.
          </p>
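<p>A metamorphic symmetry check in the spirit of the ACAS-Xu experiment can be sketched as follows; the model and the transformation are toy stand-ins of our own, not an ACAS network:</p>

```python
# Metamorphic-testing sketch: mirroring the input should mirror the
# predicted label. The "model" is a toy stand-in, not an ACAS network.
def model(x: float) -> str:
    return "right" if x >= 0 else "left"

mirror_input = lambda x: -x
mirror_label = {"left": "right", "right": "left"}

inputs = [-2.0, -0.5, 0.5, 2.0]
violations = [x for x in inputs
              if model(mirror_input(x)) != mirror_label[model(x)]]
# An empty list means the toy model respects the derived property.
```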
          <p><bold>4. Related work</bold></p>
          <p>
            The most similar work to CAISAR is the DNNV platform [<xref ref-type="bibr" rid="ref23">34</xref>]. As CAISAR, DNNV provides support to various state-of-the-art verifiers. It similarly aims to be a hub for neural network verification by supporting a wide range of input and output formats, and by providing a modelling language for property specification and discharge to capable provers. Their Domain Specific Language,
          </p>
          <p>DNNP, is built on Python; while CAISAR's specification language, WhyML, is already used in several formal verification platforms and provides additional theoretical guarantees, which is a key component to provide trust. As stated before, WhyML allows specifying multiple verification goals in the same verification problem, which helps modelling more complex use cases.</p>
          <p>The main difference between CAISAR and DNNV is
that the latter does not combine verifiers' answers, that
is to say there is (at the time of writing) no feature that
aims to interoperate verifiers: from the DNNV
documentation5: "DNNV standardizes the network and property
input formats to enable multiple verification tools to run
on a single network and property. This facilitates both
verifier comparison, and artifact re-use." As verifiers are
becoming more and more sophisticated and specialized,
combination of methods will become even more fruitful,
and we expect this to be a key difference with DNNV.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion &amp; future works</title>
      <p>As the field of machine learning verification is blooming,
choosing the right tool for the right verification problem
becomes more and more tedious. We presented CAISAR,
a platform aimed to alleviate this difficulty by presenting
a single, extensible entry point to machine learning
verification problem modelling and solving. Plenty of work
still needs to be done, however.</p>
      <p>
        Although CAISAR already integrates some state-of-the-art tools, other verifiers that ranked high in the VNN-COMP are on the way to integration. Such verifiers include α,β-CROWN [<xref ref-type="bibr" rid="ref24">23, 35</xref>], which scored first in said competition.
      </p>
      <p>
        Another research venue would be the integration of
neural network reparation techniques such as [
        <xref ref-type="bibr" rid="ref25">36</xref>
        ].
Corrective techniques would contribute to provide a feedback
loop composed of problem specification, verification, fault
identification and correction proposal.
      </p>
      <p>
        Various problem splitting heuristics based, for instance,
on [
        <xref ref-type="bibr" rid="ref26 ref27">37, 38</xref>
        ] could be integrated into CAISAR to leverage parallelism for verifiers that do not natively support it.
      </p>
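<p>Such splitting amounts to partitioning the input box into sub-problems that can be dispatched in parallel. A minimal bisection sketch (the cited heuristics choose splits far more cleverly than “widest dimension first”):</p>

```python
# Minimal bisection of an input box into independent sub-problems that
# can be verified in parallel; real splitting heuristics are smarter
# than "widest dimension first".
def split_box(box, depth):
    """Recursively bisect the widest dimension; returns 2**depth
    sub-boxes covering the original box."""
    if depth == 0:
        return [box]
    i = max(range(len(box)), key=lambda k: box[k][1] - box[k][0])
    l, u = box[i]
    mid = (l + u) / 2.0
    left = box[:i] + [(l, mid)] + box[i + 1:]
    right = box[:i] + [(mid, u)] + box[i + 1:]
    return split_box(left, depth - 1) + split_box(right, depth - 1)

parts = split_box([(0.0, 1.0), (0.0, 0.5)], 2)  # 2**2 = 4 sub-boxes
```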
      <p>Data is the cornerstone of modern machine learning systems, and it is necessary to give tools to handle its complexity. Support for more varied data kinds, such as time series, is a first step in this direction. Integration of tools for analyzing data in relation with a program, for instance out-of-distribution detection, is another future work. Finally, to further help the user select the optimal set of tools for their verification problem, a long-term goal of CAISAR is to provide a verification helper to design optimal queries for verification problems based on previous runs.</p>
      <sec id="sec-4-1">
        <title>5https://docs.dnnv.org/en/stable/</title>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Huet</surname>
          </string-name>
          , G. Kahn,
          <string-name>
            <given-names>C.</given-names>
            <surname>Paulin-Mohring</surname>
          </string-name>
          ,
          <source>The Coq Proof Assistant : A Tutorial : Version 7.2, Research Report RT-0256</source>
          , INRIA,
          <year>2002</year>
          . URL: https://hal.inria.fr/inria-00069918. Projet COQ.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>X.</given-names>
            <surname>Leroy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Blazy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kästner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schommer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pister</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ferdinand</surname>
          </string-name>
          ,
          <article-title>CompCert - a formally verified optimizing compiler</article-title>
          ,
          <source>in: ERTS 2016: Embedded Real Time Software and Systems, 8th European Congress</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Baudin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bobot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bühler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Correnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kirchner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kosmatov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Maroneze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Perrelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Prevosto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Signoles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <article-title>The dogged pursuit of bug-free C programs: The Frama-C software analysis platform</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>64</volume>
          (
          <year>2021</year>
          )
          <fpage>56</fpage>
          -
          <lpage>68</lpage>
          . doi:10.1145/3470569.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.-C.</given-names>
            <surname>Filliâtre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Paskevich</surname>
          </string-name>
          ,
          <fpage>Why3</fpage>
          - Where Programs Meet Provers, in: M.
          <string-name>
            <surname>Felleisen</surname>
          </string-name>
          , P. Gardner (Eds.),
          <source>Programming Languages and Systems, Lecture Notes in Computer Science</source>
          , Springer, Berlin, Heidelberg,
          <year>2013</year>
          , pp.
          <fpage>125</fpage>
          -
          <lpage>128</lpage>
          . doi:10.1007/978-3-642-37036-6_8.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gehr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Püschel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vechev</surname>
          </string-name>
          ,
          <article-title>An abstract domain for certifying neural networks</article-title>
          ,
          <source>Proceedings of the ACM on Programming Languages</source>
          <volume>3</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] <string-name>J. Guitton</string-name>, <string-name>J. Kanig</string-name>, <string-name>Y. Moy</string-name>, <article-title>Why Hi-Lite Ada?</article-title>, in: Rustan et al. [32] (<year>2011</year>), pp. <fpage>27</fpage>-<lpage>39</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Abadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>McMahan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Mironov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Talwar</surname>
          </string-name>
          ,
          <string-name>L. Zhang</string-name>,
          <article-title>Deep Learning with Differential Privacy</article-title>
          ,
          <source>in: 23rd ACM Conference on Computer and Communications Security (ACM CCS)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>308</fpage>
          -
          <lpage>318</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Urban</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Christakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Wüstholz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Perfectly Parallel Fairness Certification of Neural Networks</article-title>, arXiv:1912.02499 [cs] (<year>2019</year>). arXiv:1912.02499.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Katz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ibeling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Julian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lazarus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thakoor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>A. Zeljić</string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Dill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Kochenderfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Barrett</surname>
          </string-name>
          ,
          <article-title>The Marabou Framework for Verification and Analysis of Deep Neural Networks</article-title>
          , in: I.
          <string-name>
            <surname>Dillig</surname>
          </string-name>
          , S. Tasiran (Eds.),
          <source>Computer Aided Verification</source>
          , volume
          <volume>11561</volume>
          , Springer International Publishing, Cham,
          <year>2019</year>
          , pp.
          <fpage>443</fpage>
          -
          <lpage>452</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] <string-name>H. Wu</string-name>, <string-name>A. Zeljić</string-name>, <string-name>G. Katz</string-name>, <string-name>C. Barrett</string-name>, <article-title>Efficient Neural Network Analysis with Sum-of-Infeasibilities</article-title>, in: <string-name>D. Fisman</string-name>, <string-name>G. Rosu</string-name> (Eds.), <source>Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science</source>, Springer International Publishing, Cham, <year>2022</year>, pp. <fpage>143</fpage>-<lpage>163</lpage>. doi:10.1007/978-3-030-99524-9_8. [20] <string-name>G. Manfredi</string-name>, <string-name>Y. Jestin</string-name>, <article-title>An introduction to ACAS Xu and the challenges ahead</article-title>, in: <source>IEEE/AIAA Digital Avionics Systems Conference (DASC)</source>, Sacramento, CA, USA, <year>2016</year>. doi:10.1109/DASC.2016.7778055.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] <string-name>M. Ostrovsky</string-name>, <string-name>C. W. Barrett</string-name>, <string-name>G. Katz</string-name>, <article-title>An abstraction-refinement approach to verifying convolutional neural networks</article-title>, <source>CoRR</source> abs/2201.01978 (<year>2022</year>). URL: https://arxiv.org/abs/2201.01978. arXiv:2201.01978. [21] <string-name>T. Y. Chen</string-name>, <string-name>F.-C. Kuo</string-name>, <string-name>H. Liu</string-name>, <string-name>P.-L. Poon</string-name>, <string-name>D. Towey</string-name>, <string-name>T. Tse</string-name>, <string-name>Z. Q. Zhou</string-name>, <article-title>Metamorphic testing: A review of challenges and opportunities</article-title>, <source>ACM Computing Surveys (CSUR)</source> <volume>51</volume> (<year>2018</year>) <fpage>1</fpage>-<lpage>27</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] <string-name>F. Ranzato</string-name>, <string-name>M. Zanella</string-name>, <article-title>Robustness verification of support vector machines</article-title>, in: <source>International Static Analysis Symposium</source>, Springer, <year>2019</year>, pp. <fpage>271</fpage>-<lpage>295</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] <string-name>S. Conchon</string-name>, <string-name>A. Coquereau</string-name>, <string-name>M. Iguernlala</string-name>, <string-name>A. Mebsout</string-name>, <article-title>Alt-Ergo 2.2</article-title>, in: <source>SMT Workshop: International Workshop on Satisfiability Modulo Theories</source>, Oxford, United Kingdom, <year>2018</year>. URL: https://hal.inria.fr/hal-01960203. [22] <string-name>G. Katz</string-name>, <string-name>C. Barrett</string-name>, <string-name>D. Dill</string-name>, <string-name>K. Julian</string-name>, <string-name>M. Kochenderfer</string-name>, <article-title>Reluplex: An efficient SMT solver for verifying deep neural networks</article-title>, <source>arXiv preprint</source> arXiv:1702.01135 (<year>2017</year>). [23] <string-name>S. Wang</string-name>, <string-name>H. Zhang</string-name>, <string-name>K. Xu</string-name>, <string-name>X. Lin</string-name>, <string-name>S. Jana</string-name>, <string-name>C.-J. Hsieh</string-name>, <string-name>J. Z. Kolter</string-name>, <article-title>Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification</article-title> (????). URL: http://arxiv.org/abs/2103.06624. arXiv:2103.06624.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] <string-name>L. de Moura</string-name>, <string-name>N. Bjørner</string-name>, <article-title>Z3: An Efficient SMT Solver</article-title>, in: <string-name>C. R. Ramakrishnan</string-name>, <string-name>J. Rehof</string-name> (Eds.), <source>Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science</source>, Springer, Berlin, Heidelberg, <year>2008</year>, pp. <fpage>337</fpage>-<lpage>340</lpage>. doi:10.1007/978-3-540-78800-3_24. [24] <string-name>Z. Shi</string-name>, <string-name>H. Zhang</string-name>, <string-name>K.-W. Chang</string-name>, <string-name>M. Huang</string-name>, <string-name>C.-J. Hsieh</string-name>, <article-title>Robustness Verification for Transformers</article-title>, in: <source>International Conference on Learning Representations</source>, <year>2020</year>. URL: https://openreview.net/forum?id=BJxwPJHFwS.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] <string-name>C. Barrett</string-name>, <string-name>P. Fontaine</string-name>, <string-name>C. Tinelli</string-name>, <article-title>The Satisfiability Modulo Theories Library (SMT-LIB)</article-title>, <source>www.SMT-LIB.org</source>, <year>2016</year>.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] <string-name>P. Cousot</string-name>, <string-name>R. Cousot</string-name>, <article-title>Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints</article-title>, in: <source>POPL</source>, <year>1977</year>, pp. <fpage>238</fpage>-<lpage>252</lpage>. [25] <string-name>S. Bak</string-name>, <article-title>Nnenum: Verification of ReLU Neural Networks with Optimized Abstraction Refinement</article-title>, in: <string-name>A. Dutle</string-name>, <string-name>M. M. Moscato</string-name>, <string-name>L. Titolo</string-name>, <string-name>C. A. Muñoz</string-name>, <string-name>I. Perez</string-name> (Eds.), <source>NASA Formal Methods, Lecture Notes in Computer Science</source>, Springer International Publishing, Cham, <year>2021</year>, pp. <fpage>19</fpage>-<lpage>36</lpage>. doi:10.1007/978-3-030-76384-8_2.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17] <string-name>J. Li</string-name>, <string-name>J. Liu</string-name>, <string-name>P. Yang</string-name>, <string-name>L. Chen</string-name>, <string-name>X. Huang</string-name>, <string-name>L. Zhang</string-name>, <article-title>Analyzing deep neural networks with symbolic propagation: Towards higher precision and faster verification</article-title>, in: <string-name>B. E. Chang</string-name> (Ed.), <source>Static Analysis - 26th International Symposium, SAS 2019, Porto, Portugal, October 8-11, 2019, Proceedings, volume 11822 of Lecture Notes in Computer Science</source>, Springer, <year>2019</year>, pp. <fpage>296</fpage>-<lpage>319</lpage>. URL: https://doi.org/10.1007/978-3-030-32304-2_15. doi:10.1007/978-3-030-32304-2_15. [26] <string-name>P. Henriksen</string-name>, <string-name>A. Lomuscio</string-name>, <article-title>Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search</article-title>, in: <source>24th European Conference on Artificial Intelligence - ECAI 2020</source>, Santiago de Compostela, Spain, <year>2020</year>, p. <fpage>8</fpage>. [27] <string-name>S. Dutta</string-name>, <string-name>S. Jha</string-name>, <string-name>S. Sankaranarayanan</string-name>, <string-name>A. Tiwari</string-name>, <article-title>Output Range Analysis for Deep Neural Networks</article-title>, arXiv:1709.09130 [cs, stat] (<year>2017</year>). arXiv:1709.09130.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] <string-name>S. Wang</string-name>, <string-name>K. Pei</string-name>, <string-name>J. Whitehouse</string-name>, <string-name>J. Yang</string-name>, <string-name>S. Jana</string-name>, <article-title>Efficient Formal Safety Analysis of Neural Networks</article-title>, in: <string-name>S. Bengio</string-name>, <string-name>H. Wallach</string-name>, <string-name>H. Larochelle</string-name>, <string-name>K. Grauman</string-name>, <string-name>N. Cesa-Bianchi</string-name>, <string-name>R. Garnett</string-name> (Eds.), <source>Advances in Neural Information Processing Systems 31</source>, Curran Associates, Inc., <year>2018</year>, pp. <fpage>6367</fpage>-<lpage>6377</lpage>. URL: http://papers.nips.cc/paper/7873-efficient-formal-safety-analysis-of-neural-networks.pdf. [28] <string-name>A. D. Palma</string-name>, <string-name>R. Bunel</string-name>, <string-name>A. Desmaison</string-name>, <string-name>K. Dvijotham</string-name>, <string-name>P. Kohli</string-name>, <string-name>P. H. S. Torr</string-name>, <string-name>M. P. Kumar</string-name>, <article-title>Improved branch and bound for neural network verification via lagrangian decomposition</article-title> (????). arXiv:2104.06718v1. [29] <string-name>R. Ehlers</string-name>, <article-title>Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks</article-title>, arXiv:1705.01320 [cs] (<year>2017</year>). arXiv:1705.01320.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19] <string-name>G. Singh</string-name>, <string-name>T. Gehr</string-name>, <string-name>M. Mirman</string-name>, <string-name>M. Püschel</string-name>, <string-name>M. Vechev</string-name>, <article-title>Fast and Effective Robustness Certification</article-title>, in: <string-name>S. Bengio</string-name>, <string-name>H. Wallach</string-name>, <string-name>H. Larochelle</string-name>, <string-name>K. Grauman</string-name>, <string-name>N. Cesa-Bianchi</string-name>, <string-name>R. Garnett</string-name> (Eds.), <source>Advances in Neural Information Processing Systems 31</source>, Curran Associates, Inc., <year>2018</year>, pp. <fpage>10802</fpage>-<lpage>10813</lpage>. URL: http://papers.nips.cc/paper/8278-fast-and-effective-robustness-certification.pdf. [30] <string-name>C. Urban</string-name>, <string-name>A. Miné</string-name>, <article-title>A Review of Formal Methods applied to Machine Learning</article-title>, arXiv:2104.02466 [cs] (<year>2021</year>). arXiv:2104.02466.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [31] <string-name>C. Liu</string-name>, <string-name>T. Arnon</string-name>, <string-name>C. Lazarus</string-name>, <string-name>C. Barrett</string-name>, <string-name>M. J. Kochenderfer</string-name>, <article-title>Algorithms for Verifying Deep Neural Networks</article-title>, arXiv:1903.06758 [cs, stat] (<year>2019</year>). arXiv:1903.06758.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [32] <string-name>R. Li</string-name>, <string-name>J. Li</string-name>, <string-name>C.-C. Huang</string-name>, <string-name>P. Yang</string-name>, <string-name>X. Huang</string-name>, <string-name>L. Zhang</string-name>, <string-name>B. Xue</string-name>, <string-name>H. Hermanns</string-name>, <article-title>PRODeep: a platform for robustness verification of deep neural networks</article-title>, in: <source>Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering</source>, ESEC/FSE 2020, Association for Computing Machinery, New York, NY, USA, <year>2020</year>, pp. <fpage>1630</fpage>-<lpage>1634</lpage>. URL: https://doi.org/10.1145/3368089.3417918. doi:10.1145/3368089.3417918.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [33] <string-name>M. Balunovic</string-name>, <string-name>M. Baader</string-name>, <string-name>G. Singh</string-name>, <string-name>T. Gehr</string-name>, <string-name>M. Vechev</string-name>, <article-title>Certifying Geometric Robustness of Neural Networks</article-title>, in: <source>Advances in Neural Information Processing Systems</source>, volume <volume>32</volume>, Curran Associates, Inc., <year>2019</year>. URL: https://papers.nips.cc/paper/2019/hash/f7fa6aca028e7ff4ef62d75ed025fe76-Abstract.html.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [34] <string-name>D. Shriver</string-name>, <string-name>S. Elbaum</string-name>, <string-name>M. B. Dwyer</string-name>, <article-title>DNNV: A Framework for Deep Neural Network Verification</article-title>, in: <string-name>A. Silva</string-name>, <string-name>K. R. M. Leino</string-name> (Eds.), <source>Computer Aided Verification, Lecture Notes in Computer Science</source>, Springer International Publishing, Cham, <year>2021</year>, pp. <fpage>137</fpage>-<lpage>150</lpage>. doi:10.1007/978-3-030-81685-8_6.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>K.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-J.</given-names>
            <surname>Hsieh</surname>
          </string-name>
          ,
          <article-title>Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers</article-title>
          , arXiv:2011.13824 [cs] (
          <year>2021</year>
          ). URL: http://arxiv.org/abs/2011.13824.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>B.</given-names>
            <surname>Goldberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Katz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Adi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Keshet</surname>
          </string-name>
          ,
          <article-title>Minimal modifications of deep neural networks using verification</article-title>
          , in:
          <string-name>
            <given-names>E.</given-names>
            <surname>Albert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kovacs</surname>
          </string-name>
          (Eds.),
          <source>LPAR23. LPAR-23: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning</source>
          , volume
          <volume>73</volume>
          of EPiC Series in Computing, EasyChair,
          <year>2020</year>
          , pp.
          <fpage>260</fpage>
          -
          <lpage>278</lpage>
          . doi:10.29007/699q.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>J.</given-names>
            <surname>Girard-Satabin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Varasse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schoenauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Charpiat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chihani</surname>
          </string-name>
          ,
          <article-title>DISCO: Division of input space into convex polytopes for neural network verification</article-title>
          ,
          <source>JFLA</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bunel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Turkaslan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. H.</given-names>
            <surname>Torr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kohli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>Branch and bound for piecewise linear neural network verification</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>21</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>39</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>