<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.4108/icst.valuetools.2013.254377</article-id>
      <title-group>
        <article-title>On Optimizing Simulation-Based Verification of Cyber-Physical Systems via Statistical Model Checking: a Preliminary Work</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Leonardo Picchiami</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department, Sapienza University of Rome</institution>
          ,
          <addr-line>via Salaria 113, 00198</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>6806</volume>
      <fpage>28</fpage>
      <lpage>29</lpage>
      <abstract>
<p>Exhaustive simulation-based verification of safety/mission-critical systems often requires unviable overall completion times due to the unmanageable number of operational scenarios to be considered. Statistical Model Checking (SMC), instead, entails randomly evaluating scenarios deemed of interest until statistical guarantees are achieved. In this short paper, we focus on estimating expected-value approximations of system-level properties through Adaptive Stopping Algorithms (SAs), a particular class of SMC algorithms that learn the number of scenarios to be considered. Our analysis shows that determining upfront the algorithm that will require the fewest samples for the verification task at hand would be greatly beneficial, as it would allow significant reductions in the overall time. However, although qualitative criteria have been proposed to guide the choice of the right algorithm, its actual performance may depend on unknown information (e.g., the actual value to be estimated).</p>
      </abstract>
      <kwd-group>
        <kwd>Formal verification</kwd>
        <kwd>Cyber-Physical Systems</kwd>
        <kwd>Statistical Model Checking</kwd>
        <kwd>Adaptive Stopping Algorithms</kwd>
        <kwd>Monte Carlo estimation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The ever-increasing deployment of industrial systems in safety/mission critical domains, such as smart
grids [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1, 2, 3, 4</xref>
        ], automotive [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ], healthcare [
        <xref ref-type="bibr" rid="ref7">7, 8, 9, 10, 11, 12, 13, 14</xref>
        ], or avionics [15, 16, 17], among
many others, exacerbates the need for intelligent strategies to certify such systems as Quality Guaranteed
(QG) systems [18]. Industrial systems are typically modelled as Cyber-Physical Systems (CPSs) [19], i.e.,
systems integrating a continuous physical part, e.g., interconnected hardware, with a discrete cyber part,
i.e., the software part. In many CPSs, the software part consists of a controller that continuously senses
the state of the system and sends commands to the hardware part to meet system-level specifications
when the system operates in its operational environment, i.e., in presence of inputs and/or additional
uncontrolled events (such as faults, noise signals, collectively referred to as disturbances).
      </p>
      <p>The complexity of industry-level safety/mission-critical systems typically makes white-box
model-based approaches such as symbolic techniques, logic or automata (e.g., those discussed in [20, 21, 22,
23, 24, 25, 26, 27, 28, 29, 30]) unviable. In such a context, simulation of the system model is the only
viable alternative for evaluating the system’s behaviour in its environment. It is usually supported by
off-the-shelf design tools (e.g., Modelica or Simulink) that enable the mathematical modelling of the
physical part of the system through Ordinary Differential Equations (ODEs).</p>
      <p>Among the limitations of simulation-based verification of CPSs, verifying the system under all
time sequences of disturbances that can possibly materialize (operational scenarios) easily leads to
years-long experimental campaigns (e.g., see [31, 32]) and is a key obstacle to overcome.</p>
      <p>Statistical Model Checking (SMC), instead, is a Monte Carlo method that evaluates randomly-chosen
operational scenarios until the desired statistical guarantees are obtained. In particular, we focus on
Adaptive Stopping Algorithms (SAs), which compute (ε, δ)-approximations of the expected value of
given Key Performance Indicators (KPIs) using as few as possible i.i.d. samples. An (ε, δ)-approximation
guarantees that, with probability at least 1 − δ, the relative error on the expected value of the system
KPI is bounded by ε.</p>
      <p>SAs do not fix the number of required samples in advance but continuously monitor their own
progress and gradually ask for new samples until enough knowledge has been accumulated to compute
an approximation of the quantity of interest with the sought statistical properties. In other words, SAs
are Artificial Intelligence (AI) algorithms that continuously learn when enough samples have been
processed to output the estimate according to complex stopping criteria. The fact that such stopping
criteria depend on the KPI distribution and take into account the progress of the algorithm is necessary
but is not enough in itself to guarantee that the final sample size will not be larger than necessary. This
must be proven by the statistical result from which the stopping criterion has been derived, hence the
difficulty in coming up with new and useful SAs. On such a basis, we consider two state-of-the-art
SAs: the Approximation Algorithm (𝒜𝒜) [33] and the Empirical Bernstein Stopping (EBStop) [34]. These
algorithms are (quasi-)optimal, i.e., they are probabilistically guaranteed to terminate after a number
of samples within a constant factor of the minimum number of samples theoretically needed to
estimate the quantity of interest. In particular, 𝒜𝒜 is well known to be optimal, whereas EBStop is
quasi-optimal in that sense.</p>
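The adaptive control flow shared by such SAs can be sketched as follows. This is a minimal Python sketch of our own; the fixed-budget stopping criterion below is a deliberately naive placeholder, not one of the provably correct criteria of [33, 34]:

```python
import random

def adaptive_stopping_estimate(sample, stop, max_samples=1_000_000):
    """Generic SA control flow: keep drawing i.i.d. samples (one sample =
    one system-model simulation) until the stopping criterion fires,
    then return the sample mean as the estimate."""
    xs = []
    while len(xs) < max_samples:
        xs.append(sample())
        if stop(xs):          # criterion monitors the progress so far
            break
    return sum(xs) / len(xs)

# Toy usage with a Bernoulli(0.3) "KPI" and a naive fixed-budget criterion.
random.seed(0)
est = adaptive_stopping_estimate(
    sample=lambda: 1.0 if random.random() < 0.3 else 0.0,
    stop=lambda xs: len(xs) >= 10_000,
)
```

A real SA replaces the `stop` callback with a statistically justified criterion that depends on the samples observed so far.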
      <p>Given the time required to simulate each scenario (from several seconds to minutes) and the huge
number of scenarios expected to be needed for these algorithms to converge, selecting the SA requiring
the least number of samples for the verification at hand may have a significant impact on the overall
completion time, even when parallelization of system model simulations is in place. However, although
qualitative criteria have been proposed to guide the choice of the right algorithm, its actual performance
may depend on unknown information (e.g., the actual value to be estimated). Hence, this paper explores
the benefits and performance gains we would obtain if the right algorithm could actually be determined upfront.</p>
    </sec>
    <sec id="sec-2">
      <title>2. State of the art</title>
      <p>Statistical Model Checking [35, 36] comprises a set of Monte Carlo–based methods to compute formal
(quality-)guarantees for a variety of purposes, such as formal verification [37], optimization of control
strategies [38] and AI-search algorithms [39, 18]. In the context of CPSs, white-box [40] as well as
black-box [41, 42] methodologies based on system modelling formalisms (e.g., see [43, 44, 45]) have
been developed to check satisfaction of formal specifications [46] in many application domains, e.g.,
biological systems [47, 48], automotive [49, 50] or smart grids [51], among others.</p>
      <p>Model-based formal verification via SMC can be performed either through off-the-shelf SMC tools
(see, e.g., MultiVeSta [52], PRISM [53], UPPAAL-SMC [54], COSMOS [55], Ymer [56]), or by interfacing
simulator- and model-agnostic algorithms (see, e.g., [57]) with existing commercial (e.g., Dymola, Simulink,
OpalRT) and non-commercial simulators (e.g., OpenModelica), which make simulable CPS models
available only through such tools (i.e., black-box) [58]. In general, SMC-driven Monte Carlo methods
hold the promise of reducing time and cost with respect to simulation campaigns that exhaustively
encompass all operational scenarios deemed of interest [59, 60, 61]. Clearly, scaling up verification
can be addressed either by improving or developing new algorithms and approaches [62, 63] or by
designing highly parallel architectures that safely distribute simulations among several cores (possibly)
over different machines (see, e.g., [64, 65, 66]).</p>
    </sec>
    <sec id="sec-3">
      <title>3. Formal Framework</title>
      <p>In this section we introduce the mathematical background and the problem statement. We denote with
R, R0+, and R+, respectively, the sets of all, non-negative, and positive reals. Given sets A and B, B^A
denotes the set of functions from A to B.</p>
      <sec id="sec-3-1">
        <title>3.1. System Under Verification, Environment and Key Performance Indicators</title>
        <p>In our setting, a (black-box) CPS model is described by three connected components: the System Under
Verification (SUV), a stochastic environment and the KPI of interest.</p>
        <p>In the following, we treat the SUV as a continuous- or discrete-time black-box input-output
deterministic causal dynamical system (see, e.g., [67, 68, 69]) that takes as input an operational scenario defined
as a function of controllable and uncontrollable inputs (input functions) and produces a time function of
the system outputs (i.e., output functions or trajectories).</p>
        <p>Input functions for the SUV are operational scenarios, and belong to a finitely parametrizable set of
possible scenarios that the system must withstand (stochastic environment), which can be modeled
via a finite real- or discrete-valued parameter vector. A probability density function is defined over
the system operational environment, representing the likelihood that each scenario materializes.
Note that an environment is finitely parametrizable when a bijection exists between the environment
parameter space and the space of system inputs. In other words, the operational environment translates
a parametrization of an operational scenario into an input function for the SUV.</p>
        <p>A KPI encodes a metric measure of properties of interest for which system-level specifications must
be satisfied. Since the value of the system KPI is the only output we need from the system, we model
its computation within the system model and define it to be the only output of the system. This leads to
Definition 1, which defines a black-box CPS model.</p>
        <p>
          Definition 1. A black-box CPS model is a tuple (ℋ, W, p), where W ⊆ R^d is the environment parameter
space with d &gt; 0, p : W → [0, 1] is a probability density function over W, and ℋ : W → [0, 1] is the
system I/O function that computes a metric KPI for a parametrization of an operational scenario. Namely,
ℋ(w) defines the value of the system KPI (normalized between 0 and 1) when the SUV is given as input the
operational scenario encoded by parameter w.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. QG-verification problem</title>
        <p>In our framework, ensuring that a system is quality guaranteed means establishing whether the expected
value of the KPI of interest is below a given threshold θ.</p>
        <p>Definition 2. A QG-verification problem is a tuple (ℋ, W, p, θ), where:
• (ℋ, W, p) is a black-box CPS model;</p>
        <p>• θ is a specification threshold that the expected value of the KPI must not exceed.</p>
        <p>A solution to the QG-verification problem is a yes/no answer based on establishing whether
μ = E_{w∈W}[ℋ(w)] ≤ θ holds, where E_{w∈W} denotes expectation over the environment parameter space
W. As a result, solving the QG-verification problem means computing the expected value μ over the
entire set of operational scenarios.</p>
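As a plain Monte Carlo baseline (without the formal guarantees discussed in Section 4), the QG-verification question can be approximated as follows. The model ℋ, the scenario sampler, and the threshold below are illustrative stand-ins, not the case study of Section 5:

```python
import random

def qg_verify_mc(H, sample_w, theta, n):
    """Plain Monte Carlo answer to the QG-verification problem: estimate
    mu = E[H(w)] over n sampled scenarios and compare it with theta.
    (No formal guarantees; Section 4 replaces the fixed n with SAs.)"""
    mu_hat = sum(H(sample_w()) for _ in range(n)) / n
    return mu_hat, mu_hat <= theta

# Hypothetical stand-ins for the black-box CPS model (H, W, p):
random.seed(42)
H = lambda w: min(1.0, abs(w))           # KPI normalised into [0, 1]
sample_w = lambda: random.gauss(0, 0.1)  # scenario parameter drawn from p
mu_hat, ok = qg_verify_mc(H, sample_w, theta=0.2, n=5000)
```

Each call to `H(sample_w())` corresponds to one full simulation of the system model under a randomly drawn operational scenario.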
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Estimating KPIs via Statistical Model Checking</title>
      <sec id="sec-4-1">
        <title>4.1. SMC-based QG-verification problem</title>
        <p>An exact computation of μ, however, may be impossible since the scenario parameter space could be
infinite or, anyway, prohibitively large. To address this obstacle, Monte Carlo methods can be used to compute
an estimate μ̂ of μ. Whereas empirical Monte Carlo methods do not provide any formal (quality-)
guarantees on the computed approximation (an unviable option for safety/mission-critical
domains), the SMC algorithms in [33, 34] are (ε, δ)-approximation algorithms designed to output, with
probability at least 1 − δ, an estimate μ̂ for which |μ̂ − μ| &lt; εμ holds, given ε, δ ∈ (0, 1). Namely,
P{(1 − ε)μ ≤ μ̂ ≤ (1 + ε)μ} ≥ 1 − δ. From this, we obtain that, with probability at least 1 − δ,
μ̂/(1 + ε) ≤ μ ≤ μ̂/(1 − ε) holds. Hence, the QG-verification problem is cast to an SMC-based QG-verification
problem, whose solution is a yes/no answer that consists of establishing whether μ̂/(1 − ε) ≤ θ holds.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. 𝒜𝒜 and EB(G)Stop</title>
        <p>According to Definition 1, we restrict our analysis to SAs such that: 1) they output μ̂ ∈ (0, 1]; 2) they use
mean and variance to adaptively determine when enough samples have been produced. In particular,
we focus on the Approximation Algorithm (𝒜𝒜) [33] and the Empirical Bernstein Stopping (EBStop) [34].</p>
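The SMC-based QG decision reduces to comparing the conservative upper end of the (ε, δ) interval with the threshold. A minimal sketch; the numeric inputs in the usage line are illustrative only (0.0128 echoes the average estimate reported in Section 5):

```python
def qg_decision(mu_hat, eps, theta):
    """Answer the SMC-based QG-verification problem from an
    (eps, delta)-approximation mu_hat: with probability at least
    1 - delta, mu <= mu_hat / (1 - eps), so it suffices to compare
    that conservative upper bound with the threshold theta."""
    return mu_hat / (1.0 - eps) <= theta

# Illustrative numbers; eps and theta are made up for the example:
answer = qg_decision(mu_hat=0.0128, eps=0.01, theta=0.05)
```

A "yes" answer therefore holds with the same 1 − δ confidence as the estimate itself.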
        <p>
          𝒜𝒜 is a three-phase algorithm that computes an approximation μ̂ of the mean μ &gt; 0 of a
non-negative random variable in [0, 1]. With ε, δ ∈ (0, 1) given as input, the first phase outputs an initial
(min{1/2, √ε}, δ/3)-approximation μ̃ with N1 samples, which is in turn exploited to provide an
approximation of the variance with N2 samples in the second phase. Finally, the third phase uses μ̃
and the variance estimate to compute an (ε, δ)-approximation μ̂ of μ. Hence, the overall number of samples required to
produce the estimate is N = N1 + N2 + N3.
        </p>
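The first phase can be sketched with the stopping-rule threshold of Dagum et al. [33]: sample until the running sum crosses a threshold that depends only on ε and δ. This covers only phase one of the three, and the Bernoulli sampler is a stand-in for a real KPI:

```python
import math
import random

def stopping_rule_estimate(sample, eps, delta):
    """Stopping-rule sketch in the spirit of the first phase of AA [33]:
    draw samples of a [0, 1]-valued r.v. until their running sum crosses
    the threshold Upsilon1, then return Upsilon1 / N as the estimate."""
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps**2
    upsilon1 = 1 + (1 + eps) * upsilon
    n, s = 0, 0.0
    while s < upsilon1:
        n += 1
        s += sample()
    return upsilon1 / n

# Toy usage: estimate the mean of a Bernoulli(0.3) "KPI".
random.seed(1)
mu_tilde = stopping_rule_estimate(
    lambda: 1.0 if random.random() < 0.3 else 0.0, eps=0.1, delta=0.05)
```

Note that the number of samples N is not fixed in advance: it is inversely proportional to the (unknown) mean being estimated.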
        <p>EBStop is a sequential algorithm that outputs an (ε, δ)-approximation μ̂ of the expected value μ ≠ 0
of a bounded random variable, with ε, δ ∈ (0, 1) given as input. It uses the Bernstein inequality, together
with the sample mean and variance, to establish when enough samples have been produced and the
stopping criterion has been reached. In [34], EBGStop, an improved version of EBStop, is provided. It
incorporates a geometric sampling schedule, resulting in a tighter confidence interval. This feature
ensures EBGStop reaches the stopping condition after fewer samples than EBStop. Hence, we select
EBGStop for our purposes.</p>
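A sketch of the (non-geometric) EBStop loop follows. The empirical Bernstein radius and the per-step confidence schedule δ_t = δ/(t(t+1)) follow common presentations of [34]; this is an illustrative sketch, not a faithful reimplementation of the paper's pseudocode:

```python
import math
import random

def eb_stop(sample, eps, delta, R=1.0, max_n=200_000):
    """Sketch of the EBStop loop [34]: after each sample of a [0, R]-valued
    r.v., compute an empirical Bernstein confidence radius c_t and stop as
    soon as the relative-error condition (1+eps)*LB >= (1-eps)*UB holds.
    delta_t = delta/(t*(t+1)) is a standard union-bound schedule
    (the delta_t sum to delta over all steps t)."""
    s = s2 = 0.0
    t = 0
    while t < max_n:
        x = sample()
        t += 1
        s += x
        s2 += x * x
        if t < 2:
            continue
        mean = s / t
        var = max(0.0, s2 / t - mean * mean)
        dt = delta / (t * (t + 1))
        ct = math.sqrt(2 * var * math.log(3 / dt) / t) + 3 * R * math.log(3 / dt) / t
        lb, ub = max(0.0, abs(mean) - ct), abs(mean) + ct
        if lb > 0 and (1 + eps) * lb >= (1 - eps) * ub:
            return 0.5 * math.copysign((1 + eps) * lb + (1 - eps) * ub, mean)
    return s / t  # budget exhausted: fall back to the plain sample mean

random.seed(2)
mu_hat = eb_stop(lambda: 1.0 if random.random() < 0.3 else 0.0, eps=0.1, delta=0.05)
```

The geometric variant (EBGStop) checks the condition only at geometrically spaced sample sizes, which tightens the union bound and lowers the stopping time.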
        <p>In [34], the authors claim that if ln(R/(εμ)) is significantly larger than 1/δ, then 𝒜𝒜 would outperform
EBGStop, where R is the range of the random variable (R is always 1 in our framework). Being,
however, a qualitative analysis, it is not possible to select a priori the most suitable algorithm for
a given verification problem; the KPI distribution is typically unknown, and which algorithm takes
fewer samples between 𝒜𝒜 and EBGStop for given ε, δ is hard to predict. This is a crucial drawback,
especially in the context of simulation-based verification of CPSs, where producing each sample requires
numerically simulating the system model under a randomly generated scenario, and each simulation
may take seconds to minutes, depending on the complexity of the system.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experimental results</title>
      <p>This section outlines the impact of using 𝒜𝒜 and EBGStop for SMC-driven simulation-based verification
on a Simulink/Stateflow case study. Our experiments show the benefit (i.e., the reduction in the number
of samples and completion time) we would obtain if we were able to select in advance the algorithm that
takes fewer samples for the task at hand, for several configurations of ε and δ.</p>
      <sec id="sec-5-1">
        <title>5.1. Case Study</title>
        <p>We selected the Apollo Lunar Module Autopilot (ALMA) model as an industry-scale system that has
been widely adopted in the literature on simulation-based verification (e.g., see [59, 60]). It defines the
logic of the phase-plane control algorithm for the autopilot program of the lunar module used in the
Apollo 11 mission. The ALMA model is equipped with 3 sensors (yaw, pitch, roll) and 16 reaction jets
that actuate a rotation over one or more axes. In every space mission (system simulation), the controller
takes as input a request to change the module’s attitude and computes which reaction jets need to be
activated to achieve the desired rotation. The goal of the system is to successfully perform the rotation
and stabilize itself at the new attitude.</p>
        <p>We supplied the system with a stochastic environment that injects temporary faults (white noise
signals) on the yaw, pitch and roll sensors using finite state machines (Stateflow charts). Fault
occurrences are chosen according to an exponential distribution whose mean corresponds to a Mean
Time Between Failures (MTBF) of 4 s, whereas fault recovery happens after a duration drawn from
an exponential distribution with a mean of 1 s.</p>
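The fault injection just described can be mimicked outside Stateflow as follows. This is a hypothetical sketch; the function name and the (start, end) interval representation are ours:

```python
import random

def fault_schedule(horizon, mtbf=4.0, mean_rec=1.0, seed=0):
    """Sample (start, end) fault intervals for one sensor: gaps between
    faults are exponential with mean MTBF = 4 s, and each fault lasts an
    exponential time with mean 1 s, mirroring the injectors above."""
    rng = random.Random(seed)
    t, faults = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mtbf)       # time to next fault onset
        if t >= horizon:
            return faults
        dur = rng.expovariate(1.0 / mean_rec)  # fault duration
        faults.append((t, min(t + dur, horizon)))
        t += dur

faults = fault_schedule(horizon=60.0)
```

Each scenario parameter vector of Section 3 can be thought of as the random seed (or the sampled gap/duration values) driving one such schedule per sensor.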
        <p>The system KPI quantifies the outcome of a given space mission in terms of the average module’s
attitude error over the entire simulation. Let x_i(t) be the system attitude at time t, and let r_i be the
target attitude for axis i ∈ {y, p, r} (i.e., yaw, pitch and roll). We define the attitude errors as
e_i(t) = |x_i(t) − r_i| for each axis i. We define our KPI on top of such indicators, that is, the Normalised
Mean Attitude Module Error (NMAME) at time t, defined as (1/(2πt)) ∫_0^t max{e_y(τ), e_p(τ), e_r(τ)} dτ. It
represents the average over time of the maximum error on the three axes. Note that 2π is a normalization
factor, since the axis values within a rotation range in [0, 2π].</p>
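On a sampled trajectory, the NMAME can be evaluated numerically, e.g., with a rectangle rule for the time integral. The helper below and its toy trajectory are illustrative, not part of the actual Simulink model:

```python
import math

def nmame(traj, targets, dt):
    """Evaluate the NMAME KPI on a sampled trajectory via a rectangle
    rule: time-average of the worst per-axis attitude error, normalised
    by 2*pi. `traj` is a list of (yaw, pitch, roll) attitude samples."""
    t_end = dt * len(traj)
    acc = sum(max(abs(x - r) for x, r in zip(state, targets)) for state in traj)
    return acc * dt / (2 * math.pi * t_end)

# Toy trajectory: constant 0.1 rad error on the worst axis for 60 s.
kpi = nmame([(0.1, 0.0, 0.0)] * 600, targets=(0.0, 0.0, 0.0), dt=0.1)
```

Since the error on each axis is at most 2π, the KPI is normalised into [0, 1], as required by Definition 1.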
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Results</title>
        <p>We conducted simulation campaigns with a horizon h = 60 seconds to compute (ε, δ)-approximations
for the ALMA model for 9 different values of ε (ranging from 1 × 10^−3 to 1 × 10^−1) and 3 values for δ
(1 × 10^−2, 5 × 10^−2, 1 × 10^−1). Each experiment has been repeated 10 times, with 10 different i.i.d.
sequences of scenarios. Simulating the system model under each scenario took 7.1654 s on average on each of
our identical machines (2 AMD EPYC 7301 CPUs with 32 cores, and 256 GB RAM). To avoid simulating
the same sequence of scenarios multiple times, we instrumented our tool to store both simulation time
and KPI values for each scenario in a centralized database. This allowed us to accurately compute the
completion time of our algorithms under different values for ε and δ, when run starting from each of the
10 random seeds.</p>
        <p>Figure 1 and Figure 2 show, respectively, the absolute saving in the number of samples and in completion
time for several values of ε and δ on the ALMA system. If an experiment lands on the upper side of
the (log-scale) y-axis, EBGStop is the winner, i.e., it takes the lower number of samples (Figure 1) or
less time (Figure 2) to output an (ε, δ)-approximation of the KPI; otherwise, 𝒜𝒜 wins. For each δ, Figure 1
describes the behaviour of all i.i.d. sequences of scenarios (random seeds) obtained for all values of
ε. It shows a sample saving ranging from 10^3 to 10^7 samples (i.e., simulations), up to a maximum of
31 300 400, with a relative reduction as large as 69.13%. Figure 2, instead, quantifies the actual benefit
we would obtain on a highly parallel architecture with 512 simulators. For each value of δ, it illustrates
the reduction of the verification time (across the 10 i.i.d. random sequences of scenarios) obtained when
using the algorithm requiring fewer samples. The figure shows that, although both algorithms are known
to be (quasi-)optimal (in the sense of Section 1), if we were able to select upfront the algorithm requiring
the fewest samples for the actual verification task at hand, we would cut down completion times by
up to 68.41% and by more than 10^5 seconds (i.e., more than 1 day of computation).</p>
        <p>Figure 1 and Figure 2 delineate that several factors may determine the winning algorithm
between 𝒜𝒜 and EBGStop. Based on the qualitative analysis in [34], ln(R/(εμ)) must be significantly
larger (i.e., ideally several orders of magnitude greater) than 1/δ for 𝒜𝒜 to outperform
EBGStop. However, if we consider μ = 0.0128, where 0.0128 is the average μ̂ estimated with the
lowest ε and δ, we can draw different observations. For example, with ε = 10^−3 and δ = 10^−1, 𝒜𝒜
always outperforms EBGStop, even though ln(R/(εμ)) = 11.26 is not significantly larger than 1/δ = 10. With
ε = 5 × 10^−3 and δ = 10^−1, EBGStop outperforms 𝒜𝒜 and vice versa according to the i.i.d. sequence
of operational scenarios taken into account for the estimate, with ln(R/(εμ)) = 9.65 smaller than 1/δ = 10.
This introduces the sequence of scenarios as an additional source of uncertainty for a priori selecting
the fastest (ε, δ)-approximation algorithm, which makes such a prediction unviable in practice.</p>
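The qualitative rule can be checked numerically. The comparison ln(R/(εμ)) versus 1/δ used below is our reconstruction of the criterion, chosen to be consistent with the values 11.26, 9.65 and 10 reported above:

```python
import math

def aa_vs_ebgstop_criterion(eps, delta, mu, R=1.0):
    """Return the two quantities compared by the qualitative rule of [34]
    as reconstructed here: AA is expected to need fewer samples when the
    first term is significantly larger than the second."""
    return math.log(R / (eps * mu)), 1.0 / delta

# Reproducing the two cases discussed above (mu = 0.0128):
lhs1, rhs1 = aa_vs_ebgstop_criterion(1e-3, 1e-1, 0.0128)  # ~ (11.27, 10.0)
lhs2, rhs2 = aa_vs_ebgstop_criterion(5e-3, 1e-1, 0.0128)  # ~ (9.66, 10.0)
```

Note that the unknown μ appears in the criterion itself, which is precisely why the rule cannot be applied a priori.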
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this short paper, we focused on optimizing the simulation-based verification of CPSs through SAs. We
observed that a substantial reduction in verification cost would be obtained if the most suitable algorithm were used.
However, our preliminary results show that predicting such an algorithm for SMC-driven verification
of CPSs remains an obstacle to overcome.</p>
      <p>We are currently working on developing solutions to assist the user in selecting the most suitable
algorithm to formally verify CPSs in safety/mission-critical domains. Indeed, in very recent work
[70], we propose an approach that runs a finite set of SAs in parallel, in a similar fashion to AI-based
techniques such as ensemble learning. This approach is particularly effective if the numerical simulation
of the system model is the slowest step in the verification pipeline, as is the case for complex CPSs.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was partially supported by: Italian Ministry of University and Research under grant
“Dipartimenti di eccellenza 2018–2022” of the Department of Computer Science of Sapienza
University of Rome; INdAM “GNCS Project 2023”; Sapienza University projects RG12117A8B393BDC,
RG11916B892E54DB, RG120172B9329D33, RM120172B9F35634, RG123188B482D2D9; Lazio POR FESR
projects E84G20000150006, F83G17000830007; Project funded under the National Recovery and
Resilience Plan (NRRP), Mission 4 Component 2 Investment 1.5 - Call for tender No. 3277 of 30 December
2021 of the Italian Ministry of University and Research funded by the European Union –
NextGenerationEU, award number: project ECS 0000024 Rome Technopole, Concession Decree No. 1051 of 23
June 2022 adopted by the Italian Ministry of University and Research, CUP B83C22002820006, Rome
Technopole.</p>
      <p>[8] B. Leeners, T. Krüger, K. Geraedts, E. Tronci, T. Mancini, M. Egli, S. Röblitz, L. Saleh, K. Spanaus,
C. Schippert, Y. Zhang, F. Ille, Associations between natural physiological and supraphysiological
estradiol levels and stress perception, Front. Psychol. 10 (2019). doi:10.3389/fpsyg.2019.01296.
[9] F. Maggioli, T. Mancini, E. Tronci, SBML2Modelica: Integrating biochemical models within
open-standard simulation ecosystems, Bioinformatics 36 (2020). doi:10.1093/bioinformatics/
btz860.
[10] T. Mancini, F. Mari, A. Massini, I. Melatti, I. Salvo, S. Sinisi, E. Tronci, R. Ehrig, S. Röblitz, B. Leeners,
Computing personalised treatments through in silico clinical trials. A case study on downregulation
in assisted reproduction, in: RCRA 2018, volume 2271 of CEUR W.P., CEUR, 2018.
[11] S. Sinisi, V. Alimguzhin, T. Mancini, E. Tronci, F. Mari, B. Leeners, Optimal personalised treatment
computation through in silico clinical trials on patient digital twins, Fundam. Inform. 174 (2020).
doi:10.3233/FI-2020-1943.
[12] M. Esposito, L. Picchiami, Simulation-based synthesis of personalised therapies for colorectal
cancer, in: OVERLAY 2021, volume 2987 of CEUR W.P., CEUR, 2021.
[13] M. Esposito, L. Picchiami, Intelligent search for personalized cancer therapy synthesis: an
experimental comparison, in: IPS/RCRA 2021, volume 3065 of CEUR W.P., CEUR, 2021.
[14] M. Esposito, L. Picchiami, A comparative study of AI search methods for personalised cancer
therapy synthesis in COPASI, in: AI*IA 2021, Rev. Sel. Papers., volume 13196 of LNCS, Springer,
2022. doi:10.1007/978-3-031-08421-8_44.
[15] L. Planke, Y. Lim, A. Gardi, R. Sabatini, T. K., N. Ezer, A cyber-physical-human system for
one-to-many UAS operations: Cognitive load analysis, Sensors 20 (2020). doi:10.3390/s20195467.
[16] Z. Wu, N. Huang, X. Zheng, X. Li, Cyber-physical avionics systems and its reliability evaluation,
in: IEEE CYBER 2014, IEEE, 2014. doi:10.1109/CYBER.2014.6917502.
[17] K. Sampigethaya, R. Poovendran, Aviation cyber-physical systems: Foundations for future aircraft
and air transport, Proc. IEEE 101 (2013). doi:10.1109/JPROC.2012.2235131.
[18] M. Esposito, T. Mancini, E. Tronci, Optimizing fault-tolerant quality-guaranteed sensor
deployments for UAV localization in critical areas via computational geometry, IEEE Trans. Sys. Man
Cyb.: Sys.’s 54 (2024). doi:10.1109/TSMC.2023.3327432.
[19] R. Alur, Principles of Cyber-Physical Systems, MIT, 2015.
[20] G. Della Penna, B. Intrigila, I. Melatti, M. Minichino, E. Ciancamerla, A. Parisse, E. Tronci, M.
Venturini Zilli, Automatic verification of a turbogas control system with the Murphi verifier, in:
HSCC 2003, volume 2623 of LNCS, Springer, 2003. doi:10.1007/3-540-36580-X_13.
[21] G. Della Penna, B. Intrigila, I. Melatti, E. Tronci, M. Venturini Zilli, Finite horizon analysis of</p>
      <p>Markov chains with the Murphi verifier, STTT 8 (2006). doi: 10.1007/s10009-005-0216-7.
[22] M. Cadoli, T. Mancini, F. Patrizi, SAT as an effective solving technology for constraint problems,
in: ISMIS 2006, volume 4203 of LNCS, Springer, 2006. doi:10.1007/11875604_61.
[23] M. Cadoli, T. Mancini, Combining relational algebra, SQL, constraint modelling, and local search,</p>
      <p>TPLP 7 (2007). doi:10.1017/S1471068406002857.
[24] T. Mancini, P. Flener, J. Pearson, Combinatorial problem solving over relational databases: View
synthesis through constraint-based local search, in: SAC 2012, ACM, 2012. doi:10.1145/2245276.
2245295.
[25] G. Gottlob, G. Greco, T. Mancini, Conditional constraint satisfaction: Logical foundations and
complexity, in: IJCAI 2007, 2007.
[26] T. Mancini, M. Cadoli, D. Micaletto, F. Patrizi, Evaluating ASP and commercial solvers on the</p>
      <p>CSPLib, Constraints 13 (2008). doi:10.1007/s10601-007-9028-6.
[27] L. Bordeaux, M. Cadoli, T. Mancini, CSP properties for quantified constraints: Definitions and
complexity, in: AAAI 2005, AAAI, 2005.
[28] T. Mancini, E. Tronci, A. Scialanca, F. Lanciotti, A. Finzi, R. Guarneri, S. Di Pompeo, Optimal
fault-tolerant placement of relay nodes in a mission critical wireless network, in: RCRA 2018,
volume 2271 of CEUR W.P., CEUR, 2018.
[29] Q. Chen, A. Finzi, T. Mancini, I. Melatti, E. Tronci, MILP, pseudo-boolean, and OMT solvers for
optimal fault-tolerant placements of relay nodes in mission critical wireless networks, Fundam.</p>
      <p>Inform. 174 (2020). doi:10.3233/FI-2020-1941.
[30] T. Mancini, F. Mari, I. Melatti, I. Salvo, E. Tronci, An efficient algorithm for network
vulnerability analysis under malicious attacks, in: ISMIS 2018, Springer, 2018. doi:10.1007/
978-3-030-01851-1_29.
[31] T. Mancini, F. Mari, A. Massini, I. Melatti, F. Merli, E. Tronci, System level formal verification
via model checking driven simulation, in: CAV 2013, volume 8044 of LNCS, Springer, 2013.
doi:10.1007/978-3-642-39799-8_21.
[32] T. Mancini, F. Mari, A. Massini, I. Melatti, E. Tronci, System level formal verification via distributed
multi-core hardware in the loop simulation, in: PDP 2014, IEEE, 2014. doi:10.1109/PDP.2014.
32.
[33] P. Dagum, R. Karp, M. Luby, S. M. Ross, An optimal algorithm for Monte Carlo estimation, SICOMP
29 (2000). doi:10.1137/S0097539797315306.
[34] V. Mnih, C. Szepesvári, J. Audibert, Empirical Bernstein stopping, in: ICML 2008, ACM, 2008.</p>
      <p>doi:10.1145/1390156.1390241.
[35] G. Agha, K. Palmskog, A survey of statistical model checking, ACM Trans. Model. Comput. Simul.</p>
      <p>28 (2018). doi:10.1145/3158668.
[36] A. Pappagallo, A. Massini, E. Tronci, Monte Carlo based statistical model checking of cyber-physical
systems: a review, Inf. 11 (2020). doi:10.3390/info11120588.
[37] A. Legay, S. Sedwards, L. Traonouez, Scalable verification of Markov decision processes, in: Soft.</p>
      <p>Engi. and Form. Meth., Springer, 2015. doi:10.1007/978-3-319-15201-1_23.
[38] A. David, D. Du, K. Guldstrand Larsen, A. Legay, M. Mikučionis, Optimizing control strategy using
statistical model checking, in: NFM 2013, volume 7871 of LNCS, Springer, 2013.
[39] S. Sinisi, V. Alimguzhin, T. Mancini, E. Tronci, B. Leeners, Complete populations of virtual patients
for in silico clinical trials, Bioinformatics 36 (2020). doi:10.1093/bioinformatics/btaa1026.
[40] K. Sen, M. Viswanathan, G. Agha, On statistical model checking of stochastic systems, in: CAV 2005,
volume 3576 of LNCS, Springer, 2005.
[41] K. Sen, M. Viswanathan, G. Agha, Statistical model checking of black-box probabilistic systems,
in: CAV 2004, volume 3114 of LNCS, Springer, 2004.
[42] H. Younes, Probabilistic verification for “black-box” systems, in: CAV 2005, volume 3576 of LNCS,</p>
      <p>Springer, 2005. doi:10.1007/11513988_25.
[43] H. Younes, R. Simmons, Probabilistic verification of discrete event systems using acceptance
sampling, in: CAV 2002, volume 2404 of LNCS, Springer, 2002. doi:10.1007/3-540-45657-0_17.
[44] C. Baier, B. Haverkort, H. Hermanns, J.-P. Katoen, Model-checking algorithms for continuous-time
Markov chains, IEEE Trans. Softw. Eng. 29 (2003). doi:10.1109/TSE.2003.1205180.
[45] G. Norman, D. Parker, J. Sproston, Model checking for probabilistic timed automata, Form. Meth.</p>
      <p>Sys. Des. 43 (2013). doi:10.1007/s10703-012-0177-x.
[46] H. Hansson, B. Jonsson, A logic for reasoning about time and reliability, Form. Asp. Comp. 6
(1994). doi:10.1007/BF01211866.
[47] P. Zuliani, Statistical model checking for biological applications, STTT 17 (2015). doi:10.1007/
s10009-014-0343-0.
[48] E. Tronci, T. Mancini, I. Salvo, S. Sinisi, F. Mari, I. Melatti, A. Massini, F. Davi’, T. Dierkes, R. Ehrig,
S. Röblitz, B. Leeners, T. Krüger, M. Egli, F. Ille, Patient-specific models from inter-patient biological
models and clinical records, in: FMCAD 2014, IEEE, 2014. doi:10.1109/FMCAD.2014.6987615.
[49] E. Clarke, P. Zuliani, Statistical model checking for cyber-physical systems, in: ATVA 2011,
volume 11, Springer, 2011.
[50] A. Atallah, G. Hamad, O. Mohamed, Automotive safety verification under temporal failure
of adaptive cruise control system using statistical model checking, in: EDiS 2017, IEEE, 2017.
doi:10.1109/EDIS.2017.8284024.
[51] J. Martins, A. Platzer, J. Leite, Statistical model checking for distributed probabilistic-control hybrid
automata with smart grid applications, in: ICFEM 2011, volume 6991 of LNCS, Springer, 2011.
doi:10.1007/978-3-642-24559-6_11.
[52] S. Sebastio, A. Vandin, MultiVeStA: Statistical model checking for discrete event simulators, in:</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hayes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Melatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mancini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Prodanovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tronci</surname>
          </string-name>
          ,
          <article-title>Residential demand management using individualised demand aware price policies</article-title>
          ,
          <source>IEEE Trans. Smart Grid</source>
          <volume>8</volume>
          (
          <year>2017</year>
          ). doi:10.1109/TSG.2016.2596790
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mancini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Melatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Salvo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tronci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gruber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hayes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Prodanovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Elmegaard</surname>
          </string-name>
          ,
          <article-title>Demand-aware price policy synthesis and verification services for smart grids</article-title>
          ,
          <source>in: SmartGridComm</source>
          <year>2014</year>
          , IEEE,
          <year>2014</year>
          . doi:10.1109/SmartGridComm.2014.7007745.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>I.</given-names>
            <surname>Melatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mancini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Prodanovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tronci</surname>
          </string-name>
          ,
          <article-title>A two-layer near-optimal strategy for substation constraint management via home batteries</article-title>
          ,
          <source>IEEE Trans. Ind. Elect</source>
          .
          <volume>69</volume>
          (
          <year>2022</year>
          ). doi:10.1109/TIE.2021.3102431.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mancini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Melatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Salvo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tronci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gruber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hayes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Prodanovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Elmegaard</surname>
          </string-name>
          ,
          <article-title>User flexibility aware price policy synthesis for smart grids</article-title>
          ,
          <source>in: DSD</source>
          <year>2015</year>
          , IEEE,
          <year>2015</year>
          . doi:10.1109/DSD.2015.35.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Goswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Masrur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lukasiewycz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Voit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Annaswamy</surname>
          </string-name>
          ,
          <article-title>Challenges in automotive cyber-physical systems design</article-title>
          ,
          <source>in: SAMOS</source>
          <year>2012</year>
          , IEEE,
          <year>2012</year>
          . doi:10.1109/SAMOS.2012.6404199.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Al Faruque</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Goswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wolf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>Automotive cyber-physical systems: A tutorial introduction</article-title>
          ,
          <source>IEEE Des.&amp;Test</source>
          <volume>33</volume>
          (
          <year>2016</year>
          ). doi:10.1109/MDAT.2016.2573598.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hengartner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kruger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Geraedts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tronci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mancini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ille</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Egli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Roeblitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ehrig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Saleh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Spanaus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schippert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Leeners</surname>
          </string-name>
          ,
          <article-title>Negative affect is unrelated to fluctuations in hormone levels across the menstrual cycle: Evidence from a multisite observational study across two successive cycles</article-title>
          ,
          <source>J. Psycho. Res</source>
          .
          <volume>99</volume>
          (
          <year>2017</year>
          ). doi:10.1016/j.jpsychores.2017.05.018
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>