<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Efficient Adversarial Sequence Generation for RNN with Symbolic Weighted Finite Automata</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mingjun Ma</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dehui Du</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yuanhao Liu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yanyun Wang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yiyang Li</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Software Engineering Institute, East China Normal University</institution>
          ,
          <addr-line>Shanghai</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>1996</year>
      </pub-date>
      <volume>153</volume>
      <fpage>3</fpage>
      <lpage>6</lpage>
      <abstract>
        <p>Adversarial sequence generation plays an important role in improving the robustness of Recurrent Neural Networks (RNNs). However, there is still a lack of effective methods for RNN adversarial sequence generation. Due to the particular cyclic structure of RNNs, the efficiency of adversarial attacks still needs to be improved, and their perturbation is uncontrolled. To deal with these problems, we propose an efficient adversarial sequence generation approach for RNNs with Symbolic Weighted Finite Automata (SWFA). The novelty is that the RNN is extracted to an SWFA with a symbolic extracting algorithm based on Fast k-DCP. The symbolic adversarial sequence can be generated in the symbolic space, which reduces the complexity of perturbation and improves the efficiency of adversarial sequence generation. More importantly, our approach keeps the perturbation as much as possible within the human-invisible range. The feasibility of the approach is demonstrated on an autonomous driving dataset and several UCR time-series datasets. Experimental results show that our approach outperforms state-of-the-art attack methods with an almost 112.92% improvement and a 1.44 times speedup under human-invisible perturbation.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Recurrent Neural Networks (RNNs) have made great
breakthroughs in safety-critical domains in recent years,
such as autonomous driving
        <xref ref-type="bibr" rid="ref20">(Li et al. 2020)</xref>
        and surgical
robots
        <xref ref-type="bibr" rid="ref28">(Zhou et al. 2020)</xref>
        . Such AI-powered applications are
facing the challenge of trustworthiness because of their
vulnerability to adversarial attacks (Szegedy et al. 2013a).
      </p>
      <p>
        However, most of the adversarial generation algorithms
        <xref ref-type="bibr" rid="ref10 ref25">(Szegedy et al. 2013a; Goodfellow, Shlens, and Szegedy
2014; Wei, Guo, and Li 2021)</xref>
        designed for Feed-forward
Neural Networks (FNNs) are not very good at handling
RNNs, which have a particular cyclic structure. When
dealing with input data in the form of a long sequence,
traditional adversarial example generation algorithms face two
challenges. The first is that sequence data
usually reflects more trend information than
image data, so perturbations on sequences are easier
to perceive, which increases the difficulty of generating
adversarial sequences. The second is that, as the sequence
length increases, the complexity of the perturbation also
increases exponentially. This ultimately makes it more
difficult to craft adversarial examples on sequential data.
      </p>
      <p>
        Another attempt to improve the robustness of RNNs is
to learn a simpler interpretable model, such as finite
automata, from the trained RNN (Omlin, Thornber, and Giles
1996; Zhang et al. 2021). Among these automata models,
Symbolic Weighted Finite Automata (SWFA) (Suzuki et al.
2021) are the most promising because they can mimic an
RNN to perform real-value operations and accept infinite
input. However, existing work on extracting SWFA
from RNNs is relatively scarce, and the ability of SWFA to
perform real-value operations has not been fully explored.
Many valuable properties have yet to be explored, such as
model checking
        <xref ref-type="bibr" rid="ref4">(Clarke et al. 2018)</xref>
        and adversarial attacks
on it.
      </p>
      <p>In summary, adversarial attack methods still suffer
from low efficiency on RNNs with complex cyclic
structures. To deal with this problem, we propose an
efficient adversarial sequence generation approach with the
SWFA extracting algorithm. The aim is to utilize the
symbolic abstraction technique to improve the efficiency of
adversarial sequence generation and control the size of
perturbation within the human-invisible range.</p>
      <p>Our main contributions can be summarized as follows:
• An efficient adversarial sequence generation approach
for RNNs by SWFA is proposed. The key is to extract
an SWFA from the RNN with the symbolic extraction
algorithm.
• The proposed adversarial sequence generation algorithm
has been implemented; it outperforms the
state-of-the-art attack methods with an almost 112.92% improvement
and a 1.44 times speedup.
• Experiments on the autonomous driving dataset show
that the adversarial sequences generated by our approach
are more covert than those of the state-of-the-art
methods.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        There have been many efforts made to improve the
robustness of RNN, and adversarial example generation is one of
the most popular methods. Depending on how much
information an adversary can access, adversarial attacks can be
classified into two categories: white-box attack
        <xref ref-type="bibr" rid="ref11">(Szegedy
et al. 2013b; Goodfellow, Shlens, and Szegedy 2015)</xref>
        and
black-box attack
        <xref ref-type="bibr" rid="ref19">(Kurakin, Goodfellow, and Bengio 2016;
Chen et al. 2017)</xref>
        . Whether the adversarial attack
algorithms applied to FNNs are effective on RNNs has been
discussed in
        <xref ref-type="bibr" rid="ref19">(Kurakin, Goodfellow, and Bengio 2016)</xref>
        .
However, research on adversarial example generation
optimized for RNNs is rare and remains a challenging problem.
      </p>
      <p>
        Besides, extracting abstract models from RNNs has
attracted more attention recently. Most work focuses on
extracting Labelled Transition Systems (LTS)
        <xref ref-type="bibr" rid="ref13">(Gorrieri 2017)</xref>
        ,
such as DFA (Omlin, Thornber, and Giles 1996), WFA
        <xref ref-type="bibr" rid="ref1 ref26">(Ayache, Eyraud, and Goudian 2018; Weiss, Goldberg, and
Yahav 2018; Zhang et al. 2021)</xref>
        , DTMC
        <xref ref-type="bibr" rid="ref7">(Du et al. 2019)</xref>
        , and
PFA
        <xref ref-type="bibr" rid="ref5">(Dong et al. 2019)</xref>
        . But the existing work aims at
outputting boolean values, which is not suitable for
real-world RNN applications. This challenge motivates the
extraction of a quantitative finite state machine as an
abstraction of RNN. WFA is a powerful abstract model for RNN
because it outputs real values. For WFA extraction,
there are spectral techniques
        <xref ref-type="bibr" rid="ref1">(Ayache, Eyraud, and Goudian
2018)</xref>
        , L* query learning algorithm
        <xref ref-type="bibr" rid="ref26">(Weiss, Goldberg, and
Yahav 2018)</xref>
        , and Decision-guide approach
        <xref ref-type="bibr" rid="ref7">(Du et al. 2019)</xref>
        .
      </p>
      <p>Suzuki et al. (2021) propose a query
learning algorithm for Symbolic Weighted Finite Automata
(SWFA). As an extension of WFA, SWFA can handle strings
over infinite alphabets more efficiently; it generalizes
WFA by allowing transitions to be functions from a
possibly infinite alphabet to weights. Inspired by their work, we
propose Fast k-DCP to extract SWFA from RNN, which can
be utilized to improve the efficiency of adversarial sequence
generation.</p>
    </sec>
    <sec id="sec-3">
      <title>Preliminaries</title>
      <p>This section briefly introduces the necessary
preliminaries and notations for understanding our approach. Specifically, we
summarize the essence of the adversarial sequence
generation approach and the symbolic abstraction technique.</p>
      <sec id="sec-3-1">
        <title>Recurrent Neural Network</title>
        <p>
          Recurrent Neural Network is denoted as a recursive
function h(t)(⃗x) = f(h(t−1)(⃗x), ⃗x(t−1)), representing the state
transitions during the learning process. h(t)(⃗x) ∈ R^n is the
hidden state at time t, which is calculated from the input at
time t and the hidden state at time t − 1 together. RNN can
also be formulated as a tuple
          <xref ref-type="bibr" rid="ref23">(Michalenko et al. 2019)</xref>
          .
Definition 1 (Recurrent Neural Network) RNN is
denoted as a 6-tuple R = (H, X, Y, h0, f, g), where
• H denotes the set of hidden states.
• X is the possible input space.
• Y is the final output of RNN.
• h0 is the initial hidden state.
• f : H × X → H denotes the transition function.
• g : H → Y denotes the output function.
        </p>
        <p>Note: During the entire RNN operation, f is called
repeatedly, and g is called only once for the last output. For
classification tasks, g is specified as a softmax function.
f and g are shared between different time steps t in an RNN.</p>
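The tuple view of Definition 1 can be made concrete with a short sketch; the tanh cell, softmax readout g, and toy dimensions below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def run_rnn(xs, Wh, Wx, Wy, h0):
    """Unroll h(t) = f(h(t-1), x(t-1)); f is shared across time steps,
    and g (here a softmax readout) is applied once to the final state."""
    h = h0
    for x in xs:                      # one call of f per time step
        h = np.tanh(Wh @ h + Wx @ x)  # transition function f: H x X -> H
    logits = Wy @ h                   # output function g: H -> Y
    e = np.exp(logits - logits.max())
    return e / e.sum()                # softmax for a classification task

rng = np.random.default_rng(0)
n, d, labels = 4, 2, 3                # hidden size, input size, |L| (toy values)
y = run_rnn([rng.standard_normal(d) for _ in range(5)],
            rng.standard_normal((n, n)), rng.standard_normal((n, d)),
            rng.standard_normal((labels, n)), np.zeros(n))
```

The returned vector plays the role of one row of the output matrix Y, i.e. a probability over the |L| labels.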
        <p>As for the input data, we define the dataset with n samples
as D = {(xi, li) |i = 1, 2, . . . , n}, where x ∈ X is an input
sample and l is the true label of the sample. For classification
tasks, l is in a finite set L containing all possible labels. A
single sample x ∈ X with m-sequence is in the form like a
vector x = [x(1), x(2), . . . , x(i), . . . , x(m)], which includes
the input at different time steps. x(i) is the input value at the
i-th time step on sample x. The input space matrix is X ∈ R^(n×m),
where n denotes the number of samples and m indicates the
sequence length. The matrix Y ∈ R^(n×|L|) is used to store the
final output, which is the probability of each label for each
sample in an RNN classification task.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Symbolic Weighted Finite Automata</title>
        <p>A Symbolic Weighted Finite Automata (SWFA) Υ over G
can be formulated as a tuple (Suzuki et al. 2021).</p>
      </sec>
      <sec id="sec-3-3">
        <title>Definition 2 (Symbolic Weighted Finite Automata)</title>
        <p>SWFA is denoted as a 5-tuple Υ = ( G, Q, α, β, A ), the set
of all functions from X to Y is denoted by Y X .where
• G is a class of guard functions from an alphabet Σ to a
semiring S.
• Q denotes the finite set of states.
• α is a vector of the initial weights ∈ SQ.
• β is a vector of the final weights ∈ SQ.
• A denotes a transition relation ∈ GQ× Q, Aσ
represents the transition relation for σ ∈ Σ , Aσ [q, q′] =
A [q, q′] (σ ) for all (q, q′) ∈ Q2. Aσ can be extended
for strings w ∈ Σ ∗ as Aw = Aσ 1 Aσ 2 . . . Aσ l , where
w = σ 1σ 2 . . . σ l, with σ i ∈ Σ . The formal power series
fΥ represented by Υ is defined by fΥ (w) = α T Awβ for
all w ∈ Σ ∗ .</p>
        <p>Example 1 Let Σ = {−1, 0, 1}, Q = {q1, q2}, and α^T =
[1 0], with a 2 × 2 transition matrix A of guard functions and a
final vector β. If the input is x = [1, −1], the result is obtained by
multiplying the corresponding matrices:
fΥ(x) = α^T · A−1 · A1 · β.</p>
        <p>Such an SWFA can also be presented in a graphical
view, as shown in Fig. 1. Each circle represents a state q; the
label on each edge from q to q′ is the assigned guard function
from Σ to S, which represents its symbolic nature.</p>
        <p>As we can see, the symbolic representation of the weight
on transitions enhances the abstraction ability of WFA. It is
very suitable to model the infinite alphabet of RNN’s input.</p>
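The formal power series fΥ(w) = α^T Aw β of Definition 2 can be sketched directly; the linear guard functions below are our own illustration, not the guards of the paper's example.

```python
import numpy as np

# Guard functions G: entry A[q, q'] maps a real-valued symbol sigma to a
# weight. Simple linear guards are used here purely as an illustration.
guards = [[lambda s: 1.0 + s, lambda s: 0.5 * s],
          [lambda s: -s,      lambda s: 2.0]]

alpha = np.array([1.0, 0.0])   # initial weight vector
beta = np.array([1.0, 1.0])    # final weight vector

def A_sigma(sigma):
    """Instantiate the transition matrix A_sigma for one symbol."""
    return np.array([[g(sigma) for g in row] for row in guards])

def f_swfa(word):
    """f_Y(w) = alpha^T · A_{s1} · A_{s2} · ... · A_{sl} · beta."""
    v = alpha.copy()
    for sigma in word:         # A_w is the product of per-symbol matrices
        v = v @ A_sigma(sigma)
    return float(v @ beta)
```

Because each guard accepts any real symbol, the same two-state machine is defined over an infinite alphabet, which is exactly the property that makes SWFA a good fit for RNN inputs.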
      </sec>
      <sec id="sec-3-4">
        <title>Adversarial Example Generation</title>
        <p>
          Adversarial example generation is a technology that uses
adversarial attack methods to generate adversarial examples.
An adversarial example refers to a sample formed by
artificially adding subtle perturbations that are invisible to the
naked eye. Such samples cause the trained model to
misclassify the output
          <xref ref-type="bibr" rid="ref15">(Huang et al. 2020)</xref>
          . An adversarial
example can be denoted as:
arg min_i H_i(x′) = arg min_i H_i(x)
∧ ||x − x′||_p ≤ d
∧ arg max_j f_j(x′) ≠ arg max_j f_j(x),
s.t. p ≥ 1, p ∈ N, d &gt; 0.
        </p>
        <p>H stands for human perception, and f denotes the deep
neural network’s judgment. Within the subtle perturbation
d, the model makes the wrong classification. An adversarial
example has two characteristics: one is that the perturbation
is so small that humans cannot notice it easily; the other is
that the classification result is wrong.</p>
        <p>Researchers (Szegedy et al. 2013a) have formulated the
search for adversarial examples as an optimization problem
to find the minimum ϵ, and used box-constrained L-BFGS to
conduct attacks.</p>
        <p>arg min_ϵ f(x + ϵ) = l, s.t. (x + ϵ) ∈ D</p>
        <p>
          The input x + ϵ is in the same domain D as x but gets
a different label; such an input is an adversarial example. Though
the above optimization solution has good performance,
it is computationally costly. Then an efficient adversarial
attack algorithm was introduced
          <xref ref-type="bibr" rid="ref10">(Goodfellow, Shlens, and
Szegedy 2014)</xref>
          . It is named the Fast Gradient Sign
Method (FGSM), which utilizes the gradient of the cost function
to generate adversarial examples.
        </p>
        <p>Besides L-BFGS and FGSM, most adversarial
attack methods build on iterative schemes to
generate adversarial examples efficiently.</p>
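For reference, the FGSM baseline described above can be sketched in a few lines; the tiny logistic model standing in for the attacked network is our own, while the perturbation unit δ = 0.006 matches the FGSM setting used later in Experiment II.

```python
import numpy as np

def fgsm(x, grad, eps):
    """FGSM: take one step of size eps in the direction of the sign of
    the gradient of the loss with respect to the input."""
    return x + eps * np.sign(grad)

# Toy stand-in for the attacked model: a logistic classifier w·x.
w = np.array([0.5, -1.0, 2.0])
x = np.array([0.1, 0.2, 0.3])
label = 1.0
p = 1.0 / (1.0 + np.exp(-w @ x))   # predicted probability
grad = (p - label) * w             # dLoss/dx for the cross-entropy loss
x_adv = fgsm(x, grad, eps=0.006)   # delta = 0.006, as in Experiment II
```

Each coordinate moves by exactly ±eps, which is what makes the perturbation easy to bound but, on sequence data, also easy to perceive when the sequence is long.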
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Main Approach</title>
      <p>In this section, we present the details of our adversarial
sequence generation approach. Fig. 2 shows the overall
framework. There are two phases in our approach.</p>
      <p>Symbolic Weighted Finite Automata Extraction. The
extracting algorithm extracts an SWFA from the trained RNN. It
is the key part of the approach, which constructs the
symbolic state space. On the other hand, the symbolic inputs are
abstracted from the datasets.</p>
      <p>
        Adversarial Sequences Generation by SWFA. We perturb
the symbolic input to get symbolic sequences. To accelerate
the generation process, we formally analyze the
reachability of SWFA to rapidly check whether the newly generated
symbolic sequence is an adversarial one. If the symbolic
sequence is judged as an adversarial symbolic sequence, we
can collect concrete adversarial sequences from symbolic
adversarial sequences by sampling techniques.
Definition 3 (Fast k-DCP) Fast k-DCP is adapted from
k-DCP
        <xref ref-type="bibr" rid="ref7">(Du et al. 2019)</xref>
        , which is used for SWFA extraction
from RNNs. k-DCP of a probability vector captures the top
k ranked class labels as well as their prediction confidence
levels. Compared with k-DCP, Fast k-DCP has the following
new features:
• High Efficiency: Different from k-DCP, Fast k-DCP
discards the time-consuming k-means clustering process
and divides hidden states of RNN into different symbolic
blocks directly.
• Symbolic Abstraction: We extend Fast k-DCP to the
infinite alphabet, which deals with input symbolically.
      </p>
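The direct division that Fast k-DCP performs instead of k-means clustering can be sketched as fixed-grid binning; the function name, signature, and block encoding below are our own assumptions, keeping only the top-K ranked entries of each vector and snapping them onto a grid of granularity T.

```python
import numpy as np

def fast_kdcp(vectors, K, T):
    """Fast k-DCP sketch: for each vector, keep the indices of its top-K
    entries (the top-k ranked labels/coordinates) and snap their values
    onto a grid of granularity T, with no k-means clustering involved.
    Each resulting pair of tuples identifies one symbolic block."""
    blocks = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        top = np.argsort(v)[::-1][:K]                        # top-K indices
        cells = np.minimum((v[top] * T).astype(int), T - 1)  # grid cell per entry
        blocks.append((tuple(top), tuple(cells)))
    return blocks
```

Because the mapping is a direct computation per vector, its cost is linear in the number of hidden states, which is the source of the O(mns) time bound discussed below.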
      <p>The cost of symbolic abstraction will increase as the
complexity of the learning task increases. It is not cost-effective
to generate a small batch of adversarial examples when the
task is complex; it has advantages in a scenario where the
task complexity is lower and a large number of adversarial
examples need to be generated. Assuming that the number
of samples is n, the sequence length is m, and the feature
dimension is s, the time complexity of symbolic
abstraction can be denoted as O(mns). As the complexity of the
task increases, which means that n, m, or s becomes larger,
the time cost of symbolic abstraction will also increase. The
space complexity of symbolic abstraction is O(Ts). As the
feature dimension s increases, the space consumption will
also grow. But when the complexity of RNN increases, its
abstraction cost will not increase accordingly. The
abstraction cost is only related to the scale of the task. Therefore,
building SWFA has a comparative advantage in such a
scenario.</p>
      <p>Algorithm 1: RNN-SWFA by Fast k-DCP
input : RNN R = (H, X, Y, h0, f, g);
Input sequences W;
K, T
output: SWFA Υ = (G, Q, α, β, A)
5 s = [h(i)(w)] for i = 0, . . . , |w|;
6 for i = 1 to |w| do
7 Q′.add(si);
8 Σ′.add(wi−1);
10 for q′ ∈ Q′ do
11 Q.add(fkdcpK,T(q′));
12 for σ′ ∈ Σ′ do
13 Σ.add(fkdcp|σ′|,Tinput(σ′));
14 for σ ∈ Σ do
15 Aσ = BuildTransitionMatrix(σ);
16 A.add(Aσ);
17 for q ∈ Q do
18 βq = 0 with length |L|;
19 for q′ ∈ Q′ do
20 if fkdcpK,T(q′) == q then
21 βq[argmax(g(q′))] += 1;
22 βq = βq / Σ(βq);
23 β.add(βq);</p>
      <p>The proposed algorithm for extracting SWFA from RNN
using Fast k-DCP is elaborated in Alg. 1. It takes the target
RNN R, the input sequences W (a set of words w), and the
hyper-parameters K and T as inputs, and outputs the
extracted SWFA.</p>
      <p>To get the symbolic input, we calculate Tinput (Line 3 to
Line 4), and then use Fast k-DCP to abstract the input (Line
12 to Line 13) into symbolic blocks. Another purpose of
calculating Tinput is to control the perturbation within a range that
is invisible to the human eye, which makes the adversarial
example covert. We strongly recommend that computeDistance
use the Euclidean distance L2 as the distance metric, since
the ideal tasks for our approach to show its efficiency
advantage are those whose input dimension is relatively
low, for which Euclidean distance has been found to
capture proximity more accurately (Gopinath et al. 2017).</p>
      <p>We execute R on the input sequences W and record the
hidden state traces as s (Line 5 to Line 9). In this way, we
collect the step-wise configurations in a set of hidden states
Q′, as well as the alphabet Σ′.</p>
      <p>By applying Fast k-DCP fkdcp twice (Line 10 to Line
13), we construct the set of abstract states Q and the
symbolic alphabet Σ of SWFA. The symbolic alphabet is
abstracted from the complete input; that is, the parameter K of
fkdcp here is set to the dimension of the input |σ′|.</p>
      <p>For each symbolic input σ in Σ (Line 14 to Line 16), we
use BuildTransitionMatrix to count transition frequencies
among abstract states and then build the transition matrix Aσ
for σ. All the matrices are collected into A.</p>
      <p>At last (Line 17 to Line 23), the initial distribution
over the abstract states is set to the one-point
format, that is, the initial state α of SWFA. We build
the final vector β by calculating βq for each state, and
GuardFunctionLearning learns G from Q, α, β, A
(Suzuki et al. 2021). In short, extracting SWFA from RNN
is the main novelty of our approach. It utilizes the symbolic
state model to imitate the learning process of RNN, which helps
to generate adversarial sequences effectively.</p>
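The BuildTransitionMatrix step (Lines 14 to 16) amounts to frequency counting over the recorded abstract traces; the trace representation below is our own assumption for illustration.

```python
import numpy as np

def build_transition_matrix(traces, sigma, n_states):
    """Count, over all abstract traces, how often abstract state q moves
    to q' on symbolic input sigma, then row-normalize the counts into
    transition frequencies. Each trace is a list of
    (state, symbol, next_state) triples."""
    A = np.zeros((n_states, n_states))
    for trace in traces:
        for q, s, q_next in trace:
            if s == sigma:
                A[q, q_next] += 1
    row_sums = A.sum(axis=1, keepdims=True)
    # Rows with no observed transitions stay all-zero.
    return np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)

traces = [[(0, "a", 1), (1, "a", 1)], [(0, "a", 0), (0, "b", 1)]]
A_a = build_transition_matrix(traces, "a", n_states=2)
```

Rows left all-zero (no observed transition for that state and symbol) are exactly what produces the 000-status exploited later during adversarial generation.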
      <sec id="sec-4-1">
        <title>Adversarial Sequence Generation</title>
        <p>In order to generate adversarial sequences efficiently, we
take full advantage of SWFA’s symbolic feature to perturb
its symbolic input. It reduces the complexity of
perturbation. Another novelty of our approach is to abstract the input
sequence with the symbolic technique. It is another
organizational scheme that abstracts original input into several
symbolic blocks.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Definition 4 (Symbolic Block)</title>
        <p>Symbolic Block is denoted
as a 3-tuple Bsymbolic = (X, K, T), where
• The parameter X means the input sequences.
• K denotes the dimension of the symbolic block. (Usually,
K is less than 5.)
• The parameter T denotes the granularity of the symbolic
block.</p>
        <p>The input is abstracted into several regions by Fast
k-DCP, and the regions can be composed of rectangles, cubes,
or hypercubes according to K. These equally divided
regions containing multiple real values are called
Symbolic Blocks. They are used to abstract the input space.
Example 2 Assuming the input is normalized into an
interval [0,1], we equally divide the input into T parts. As
shown in Fig. 3, there are 3 × 3 × 3 symbolic blocks where
K = 3 and T = 3. Each symbolic block is like a small cube,
and several symbolic blocks construct a vector space with a
total length of 1. Since the length of the vector is fixed, the
process of abstracting input into symbolic input is very
efficient and not time-consuming. We can quickly find another
similar input given a particular input based on the symbolic
block.</p>
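Example 2 can be mirrored by a small sketch that maps a normalized input to the symbolic block it falls into; the helper name is our own.

```python
import numpy as np

def to_symbolic_block(x, T):
    """Map a normalized input (each entry in [0, 1]) to the index of its
    symbolic block: each axis is split into T equal parts, and the block
    is the tuple of per-axis cell indices."""
    x = np.asarray(x, dtype=float)
    return tuple(np.minimum((x * T).astype(int), T - 1))

# K = 3 and T = 3 gives the 3 x 3 x 3 grid of blocks from Example 2.
block = to_symbolic_block([0.2, 0.5, 0.9], T=3)
```

Finding a similar input then reduces to picking any point inside the same (or a neighboring) block, which is why the abstraction step is so cheap.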
        <p>We take Manhattan distance (L1) as the abstract distance
metric to measure the distance of the symbolic input, which
is shown as the following equation. x∗ denotes the symbolic
input.</p>
        <p>n
||x∗ ′ − x∗ ||1 = X |x∗i ′ − x∗i |
i=1
(1)</p>
        <p>The metric value is assumed to be an integer when
expressing the distance of the symbolic input, for the convenience
of the later perturbation and for guiding the value of T. Though
both L1 and L∞ meet this requirement, L∞ is not suitable due to
its poor experimental results, so we choose L1 for the
symbolic distance measurement. With the help of the distance
metric of the symbolic input, we can reduce the complexity
of perturbation.</p>
        <p>The difference between the abstract perturbation and the
concrete perturbation is shown below.</p>
        <p>||x′ − x||2 = ϵ    (2)</p>
        <p>||x∗′ − x∗||1 = ϵ∗    (3)</p>
        <p>Equation (2) means that under the L2 (Euclidean
distance) metric, the distance between the perturbed sample and
the original sample is denoted as ϵ. Equation (3) describes
the distance between the symbolic input x∗ and its perturbed
value x∗′, measured by L1, where ϵ∗ denotes the abstract
perturbation. Since δ = (1/T) · (Datamax − Datamin), and
the input is normalized in the early stage, we get Datamax = 1
and Datamin = 0, so the relationship between the abstract
perturbation and the concrete perturbation can be denoted as
δ = 1/T, which is also the unit of abstract perturbation.</p>
        <p>Configuration of Tinput. It is very important to set an
appropriate Tinput to get the symbolic input, which determines
the granularity of symbolic blocks and controls the abstract
perturbation within the human-invisible range as expected.
The value of Tinput should satisfy the following property:</p>
        <p>δ = 1/Tinput ≤ (Σ_{i=2}^{n} |xi − xi−1|) / (n − 1)    (4)</p>
        <p>Tinput ≥ (n − 1) / (Σ_{i=2}^{n} |xi − xi−1|)    (5)</p>
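Under the assumption of a single normalized sequence, this bound on Tinput can be computed directly from the data; the helper name is our own.

```python
import math

def config_T_input(xs):
    """Choose Tinput >= (n - 1) / sum_{i=2..n} |x_i - x_{i-1}|, so that
    the abstract perturbation unit delta = 1 / Tinput stays below the
    mean step-to-step change of the (normalized) sequence."""
    diffs = sum(abs(b - a) for a, b in zip(xs, xs[1:]))
    return math.ceil((len(xs) - 1) / diffs)

T_input = config_T_input([0.0, 0.5, 1.0, 0.5])  # mean |step| is 0.5
```

With this choice, one unit of abstract perturbation never exceeds the sequence's own average variation, which is what keeps the perturbation in the human-invisible range.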
      </sec>
      <sec id="sec-4-3">
        <title>Perturbation and Instantiation</title>
        <p>In this part, we
iteratively increase ϵ∗ to search for symbolic adversarial
sequences and instantiate them into concrete adversarial
sequences by sampling techniques, such as random sampling.
Based on the concepts of Symbolic Block and Abstract
Distance Metric, we can perturb the sequences (as shown in
Fig. 4) in an efficient way. The main idea is shown in Alg. 2.</p>
        <p>The first step, as shown in Alg. 2 (line 3), is to find the nodes
that have the greatest impact on the sequence classification.</p>
      </sec>
      <sec id="sec-4-4">
        <title>Algorithm 2: Adversarial Sequence Generation by SWFA</title>
        <p>input : Input x0;
Continuous c ∈ {true, false};
Number of nodes to be disturbed n;
Perturbation intensity ϵ∗ ∈ {1, 2, 3}.
output: Adversarial Sequence x′.
1 Initialize x∗0′ = fkdcp(x0); x∗′ = []; δ = 0;
2 while δ &lt; ϵ∗ do
3 Nodes = NodesSearch(c, n);
4 for node ∈ Nodes do
5 dir = FindDirectionbyImportance(x∗0′);
6 xpert = Pert(x∗0′, node, dir, δ);
7 if verifyBySWFA(xpert) then
8 x∗′.add(xpert);</p>
        <p>Definition (000-status):
• ζ is a vector of SWFA’s intermediate weights ∈ S^Q.
• Each component of the vector ζ is 0.</p>
        <p>It represents that the intermediate status of SWFA is like
[0, 0, ..., 0] and the input exceeds the generalization limit of
our model. Many experimental results show that samples
with 000-status are more fragile and more likely to be
adversarial sequences. These interesting results inspire us to use
SWFA to calculate the intermediate status of the entire
sequence at each point, and we record the 000-status point as
the perturbation point.</p>
        <p>The second step is to find the appropriate perturbation
direction by the least importance, denoted by P^i_importance =
S^i/(m × n), where the superscript i represents the index of the
block and S^i means the number of sequences that fall into
the block. Using the insight above, if 000-status appears
in a certain perturbation direction in the SWFA computation
result, then these directions should be traversed.</p>
        <p>In the third step, the value of ϵ ∗ is iteratively increased to
conduct the abstract perturbation. The size of ϵ ∗ represents
the intensity of the perturbation. We also provide the
parameter c to control whether the perturbation is continuous.</p>
      </sec>
      <sec id="sec-4-5">
        <title>Symbolic Adversarial Sequence Generation by Reachability Analysis of SWFA</title>
        <p>
          We adopt Linear Temporal
Logic (LTL)
          <xref ref-type="bibr" rid="ref18">(Kesten, Pnueli, and Raviv 1998)</xref>
          to define
the symbolic adversarial sequence on SWFA, which is shown below:
γ ⊨ ♢Z    (6)
        </p>
        <p>where γ denotes the intermediate state of SWFA, ⊨ denotes
“satisfies”, ♢ denotes “eventually”, and Z denotes
000-status. If γ eventually reaches 000-status, the sequence is
a symbolic adversarial sequence.</p>
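The check γ ⊨ ♢Z can be sketched by propagating the intermediate weight vector through the per-symbol transition matrices and testing for an all-zero state; this matrix-based interface is our own assumption.

```python
import numpy as np

def verify_by_swfa(alpha, matrices):
    """Return True iff the intermediate state gamma eventually reaches
    000-status (every component zero) while consuming the sequence,
    i.e. gamma |= <>Z in the LTL formulation."""
    gamma = np.asarray(alpha, dtype=float)
    for A in matrices:        # one transition matrix per symbolic input
        gamma = gamma @ np.asarray(A, dtype=float)
        if np.all(gamma == 0):
            return True       # 000-status reached: symbolic adversarial
    return False

hit = verify_by_swfa([1.0, 0.0], [np.eye(2), np.zeros((2, 2))])
miss = verify_by_swfa([1.0, 0.0], [np.eye(2)])
```

The check is a handful of small matrix-vector products per candidate, which is what allows whole batches of symbolic sequences to be screened without querying the RNN.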
        <p>Finally, we use SWFA to rapidly check whether the
abstract perturbed sample xpert is an adversarial sequence.
If verifyBySWFA returns true, we add it to x∗′, the list of
symbolic adversarial sequences. We then use sampling
techniques to instantiate the symbolic adversarial sequences x∗′ into
the set of concrete adversarial sequences x′. We assume that
the symbolic adversarial sequences generated by our approach
are largely successful, so it is feasible to use arbitrary
sampling methods, and we use random sampling here.</p>
        <p>The novelty of our approach is that we conduct abstract
perturbations in the symbolic space. This has two advantages:
the abstract perturbation has a smaller possible
iteration space, and we can generate concrete
adversarial sequences from symbolic adversarial sequences
in batches, which brings more efficiency.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Experimental Evaluation</title>
      <p>
        To demonstrate the effectiveness and efficiency of our
approach, we conduct two experiments on well-trained LSTMs
on time-series data. The first is the implementation of
the RNN-SWFA extraction algorithm using Fast k-DCP.
The second is the implementation of SWFA-based
adversarial sequence generation named AbASG. We compare it
with FGSM
        <xref ref-type="bibr" rid="ref10">(Goodfellow, Shlens, and Szegedy 2014)</xref>
        , PGD
        <xref ref-type="bibr" rid="ref22">(Madry et al. 2018)</xref>
        , and NewtonFool
        <xref ref-type="bibr" rid="ref17">(Jang, Wu, and Jha
2017)</xref>
        . All our experiments are conducted on Ubuntu 20.04
with one Intel Xeon Silver 4210R CPU and
one GeForce RTX 3090 GPU. All experiments are
implemented in PyTorch 1.7.1 with Python 3.8.2.
The datasets and the code used in the experiments are available in
the GitHub repository: https://github.com/AbASG/AbASG.
      </p>
      <sec id="sec-5-1">
        <title>Datasets and Experimental Settings</title>
        <p>
          UCR time-series datasets
          <xref ref-type="bibr" rid="ref3">(Chen et al. 2015)</xref>
          and NGSIM
          <xref ref-type="bibr" rid="ref24">(Montanino and Punzo 2013)</xref>
          are selected for our
experiment. UCR datasets used in our experiment include
ProximalPhalanxOutlineAgeGroup (PPOAG), Chinatown (CT),
Earthquakes (EQ). To further demonstrate the effectiveness
of our approach, we use a scenario programming language
of autonomous driving, Scenic
          <xref ref-type="bibr" rid="ref8">(Fremont et al. 2020)</xref>
          ,
combined with the car simulator, Carla
          <xref ref-type="bibr" rid="ref6">(Dosovitskiy et al. 2017)</xref>
          ,
to generate the Autonomous Driving Datasets (ADD) with
a sample size of 30,000. There are three classes of vehicle
behavior (turning left, going straight, and turning right), denoted
by the label y. It contains several features. Detailed descriptions
of these datasets can be found in the Appendix.
        </p>
        <p>We train a 2-hidden-layer LSTM for our experiment, and
the number of hidden layer nodes is set to 1/200 of the
datasets’ size. The loss function is chosen as
CrossEntropyLoss, and Adam is our choice for the optimizer. More
detailed parameter settings of LSTM can also be found in the
Appendix.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Benchmarks and Metrics</title>
        <p>Five indicators are proposed for evaluating the SWFA
extraction: Accuracy of RNN (AoR), Accuracy of SWFA
(AoS), Extraction Time (ET), RNN’s Running Time (RT),
and SWFA’s Running Time (ST). In AoR and AoS, the left value
denotes the accuracy on training data, while the right value
is that on test data. We use RT and ST to compare the
computation speed between RNN and SWFA on the total
dataset.</p>
        <p>We use Attack Success Rate (ASR) and time
consumption to evaluate the effect of our adversarial sequence
generation algorithm AbASG. The perturbation units of the
different attack algorithms are unified as δ. In FGSM and PGD,
δ = 0.006; in NewtonFool, δ = 0.01.</p>
      </sec>
      <sec id="sec-5-3">
        <title>Experiment I</title>
        <p>We implement the extraction algorithm based on Fast
k-DCP to extract SWFA from RNN on five different
time-series datasets, and use the five indicators proposed above
to evaluate them.</p>
        <p>We design this experiment because there
may be some questions about our abstraction algorithm, such as:
RQ1: Can the extracted SWFA really be applied
to infinite input? RQ2: What is the accuracy of the
extracted SWFA, and what about its efficiency (the time of
extraction and prediction)? RQ3: What kind of tasks is
Fast k-DCP good at? We try to answer them through
comparison and analysis of the experimental results.</p>
        <p>Answer 1: As Table 1 illustrates, thanks to the
symbolic representation of the input, SWFA can work for
the infinite alphabet, as well as on test data, even if our
algorithm has never seen it before. Answer 2: As can be
seen from the AoS column in Table 1, SWFA achieves high
accuracy on both the training and test data of the ADD, NGSIM,
and EQ datasets, while its performance on the PPOAG and CT
datasets is relatively not good enough, since the accuracy of
the original RNNs in these tasks is only 75% and 53%,
respectively. This phenomenon confirms that the accuracy
of the original RNN will definitely impact the performance of
SWFA. However, we do not make any effort to improve the
RNNs, since our algorithm is a general approach and does
not rely on a specific RNN. When it comes to efficiency, the
running time of SWFA is almost the same as that of the
original RNN on each dataset. Extraction time is validated to be
the most time-consuming part, but its results can be easily
reused. Answer 3: For one thing, as mentioned above, the
original RNN is expected to be well trained. For another,
our algorithm is especially suitable for tasks whose
classification category and input dimension are relatively low.</p>
      </sec>
      <sec id="sec-5-4">
        <title>Experiment II</title>
        <p>
          We compare our adversarial sequence generation approach
AbASG with several baseline methods. The white-box side
includes FGSM
          <xref ref-type="bibr" rid="ref10">(Goodfellow, Shlens, and Szegedy 2014)</xref>
          and PGD
          <xref ref-type="bibr" rid="ref22">(Madry et al. 2018)</xref>
          . The black-box side includes
NewtonFool
          <xref ref-type="bibr" rid="ref17">(Jang, Wu, and Jha 2017)</xref>
          .
        </p>
        <p>We attempt to answer the following
questions about our adversarial sequence generation algorithm.
RQ4: How efficient is our adversarial sequence generation
algorithm, and how can we demonstrate it? RQ5: Is there any relation
between the accuracy of the SWFA and the efficiency of our
adversarial sequence generation algorithm?</p>
        <p>To answer the questions above, we conduct
adversarial perturbation and instantiation with SWFA on the
autonomous driving dataset, comparing it with traditional
adversarial attack algorithms. As shown in Table 2, we
record the Attack Success Rate (ASR) and the time cost of the
different algorithms at different perturbation intensities. The
perturbation unit of the different algorithms is unified as δ.</p>
        <p>Average Generation Time. In particular, when we
compare the generation time of adversarial examples in
Table 2, we do not take the SWFA
extraction time into account. The reason is as follows:</p>
        <p>T<sub>average</sub> = (λ<sub>ext</sub> + n · x) / n = λ<sub>ext</sub> / n + x (7)
where λ<sub>ext</sub> denotes the SWFA extraction time, T<sub>average</sub> denotes
the average time to find a single adversarial example, x
denotes the generation time of a single adversarial example, and n
denotes the number of adversarial examples generated. As
n → ∞, T<sub>average</sub> → x, which means
that as n becomes larger, the cost of building the SWFA is
diluted until it approaches zero.</p>
        <p>Finally, we can draw the following answers from the
experiment. Answer4: We use the ASR and the generation time
to verify the efficiency of our algorithm on the autonomous
driving dataset. The former illustrates that our approach has
an outstanding success rate, with 81.11%,
94.56% and 112.92% improvements respectively compared with
NewtonFool, and is much better than the white-box attack
methods, even though the small perturbation range makes this
harder to achieve. The latter shows that our approach
is up to 1.44 times faster than NewtonFool with the same
perturbation and an even higher ASR, not to
mention FGSM and PGD, which are no longer fairly
comparable to our algorithm in generation time because of their lower
performance in this setting. Answer5: Since the SWFA is an
abstract expression of the RNN, if the accuracy of the RNN is low, the
ability of the extracted SWFA to guide perturbations will be
limited, which eventually reduces the efficiency of
adversarial sequence generation. The reverse is also true.</p>
        <p>As shown in Fig. 5, we use diverse algorithms to
generate adversarial sequences on ADD. These colored point
sets constitute adversarial trajectories. Compared with the other
algorithms, the adversarial trajectory generated by our
approach has a lower perturbation, which makes it harder for
people to recognize. The adversarial trajectories generated by the other
algorithms do not match the characteristics of vehicle
trajectories. Therefore, our approach can achieve a better
attack success rate under small perturbations.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Concluding Remarks</title>
      <p>In this paper, we have proposed an efficient adversarial
sequence generation approach for RNNs via SWFA abstraction.
The approach can generate covert adversarial
sequences with high efficiency. We implemented the approach
with the Fast k-DCP symbolic extraction algorithm, which
reduces the complexity of perturbation. The main advantage of
our approach is that the symbolic extraction technique
constructs an abstract model of the RNN. Besides, we set
parameters to keep the perturbation as much as possible within the
human-invisible range. Compared with the baseline
adversarial example generation algorithms, our approach is
more efficient on spatio-temporal sequential datasets.
For future work, we would further optimize our approach
to adapt to large-class sequential data: multi-label
classification tasks with too many categories may lead to lower
accuracy of the SWFA, which impacts the efficiency of
adversarial sequence generation. Moreover, we will explore more
detailed comparisons with adversarial example generation
algorithms on various datasets and investigate
reachability analysis of SWFA.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>Financial support for this work, provided by the National
Natural Science Foundation of China under Grant No.
61972153 and the Key Projects of the Ministry of Science and
Technology under No. 2020AAA0107800, is gratefully
acknowledged.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Ayache</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Eyraud</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Goudian</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Explaining Black Boxes on Sequential Data using Weighted Automata</article-title>
          . In
          <string-name>
            <surname>Unold</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Dyrka</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Wieczorek</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , eds.,
          <source>Proceedings of the 14th International Conference on Grammatical Inference, ICGI</source>
          <year>2018</year>
          , Wrocław, Poland, September 5-7, 2018
          , volume
          <volume>93</volume>
          <source>of Proceedings of Machine Learning Research</source>
          ,
          <fpage>81</fpage>
          -
          <lpage>103</lpage>
          . PMLR.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2017.
          <article-title>ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models</article-title>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Keogh</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Begum</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bagnall</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Mueen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Batista</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>The UCR Time Series Classification Archive</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Clarke</surname>
            ,
            <given-names>E. M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Henzinger</surname>
            ,
            <given-names>T. A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Veith</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Bloem</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , eds.
          <year>2018</year>
          .
          <source>Handbook of Model Checking</source>
          . Springer. ISBN 978-3-319-10574-1.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Dong</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Dai</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Dong</surname>
            ,
            <given-names>J. S.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Analyzing Recurrent Neural Network by Probabilistic Abstraction</article-title>
          . CoRR, abs/1909.10023.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Dosovitskiy</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ros</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Codevilla</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>López</surname>
            ,
            <given-names>A. M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Koltun</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>CARLA: An Open Urban Driving Simulator</article-title>
          .
          <source>In 1st Annual Conference on Robot Learning</source>
          ,
          <source>CoRL</source>
          <year>2017</year>
          , Mountain View, California, USA, November
          <volume>13</volume>
          -
          <issue>15</issue>
          ,
          <year>2017</year>
          , Proceedings, volume
          <volume>78</volume>
          <source>of Proceedings of Machine Learning Research</source>
          ,
          <volume>1</volume>
          -
          <fpage>16</fpage>
          . PMLR.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Du</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ma</surname>
          </string-name>
          , L.;
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>A Quantitative Analysis Framework for Recurrent Neural Network</article-title>
          .
          <source>In 34th IEEE/ACM International Conference on Automated Software Engineering, ASE</source>
          <year>2019</year>
          , San Diego, CA, USA, November
          <volume>11</volume>
          -
          <issue>15</issue>
          ,
          <year>2019</year>
          ,
          <fpage>1062</fpage>
          -
          <lpage>1065</lpage>
          . IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Fremont</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Dreossi</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ghosh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Yue</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sangiovanni-Vincentelli</surname>
            ,
            <given-names>A. L.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Seshia</surname>
            ,
            <given-names>S. A.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Scenic: A Language for Scenario Specification and Data Generation</article-title>
          . CoRR, abs/2010.06580.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Goodfellow</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ; Shlens, J.; and
          <string-name>
            <surname>Szegedy</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Explaining and Harnessing Adversarial Examples</article-title>
          .
          <source>arXiv 1412</source>
          .6572.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Goodfellow</surname>
            ,
            <given-names>I. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Shlens</surname>
          </string-name>
          , J.; and
          <string-name>
            <surname>Szegedy</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Explaining and harnessing adversarial examples</article-title>
          .
          <source>In ICML.</source>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          2017.
          <article-title>DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks</article-title>
          . CoRR, abs/1710.00486.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Gorrieri</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2017</year>
          .
          <source>Labeled Transition Systems</source>
          ,
          <fpage>15</fpage>
          -
          <lpage>34</lpage>
          . Cham: Springer International Publishing. ISBN 978-3-319-55559-1.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Kroening</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ruan</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sharp</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Thamo</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Yi</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability</article-title>
          .
          <source>Comput. Sci. Rev.</source>
          ,
          <volume>37</volume>
          :
          <fpage>100270</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Jang</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Jha</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning</article-title>
          .
          <source>In Proceedings of the 33rd Annual Computer Security Applications Conference</source>
          , Orlando, FL, USA, December 4-
          <issue>8</issue>
          ,
          <year>2017</year>
          ,
          <fpage>262</fpage>
          -
          <lpage>277</lpage>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Kesten</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Pnueli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Raviv</surname>
            ,
            <given-names>L. O.</given-names>
          </string-name>
          <year>1998</year>
          .
          <article-title>Algorithmic verification of linear temporal logic specifications</article-title>
          .
          <source>In Lecture Notes in Computer Science.</source>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Kurakin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Goodfellow</surname>
            ,
            <given-names>I.;</given-names>
          </string-name>
          and
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Adversarial examples in the physical world</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Gu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Di</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Shi</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>An LSTM-Based Autonomous Driving Model Using Waymo Open Dataset</article-title>
          . CoRR, abs/2002.05878.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Madry</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Makelov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Tsipras</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Vladu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Towards Deep Learning Models Resistant to Adversarial Attacks</article-title>
          .
          <source>In 6th International Conference on Learning Representations, ICLR</source>
          <year>2018</year>
          , Vancouver, BC, Canada, April 30 - May 3,
          <year>2018</year>
          , Conference Track Proceedings. OpenReview.net.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>Michalenko</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Verma</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Baraniuk</surname>
            ,
            <given-names>R. G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Chaudhuri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Patel</surname>
            ,
            <given-names>A. B.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks</article-title>
          .
          <source>In 7th International Conference on Learning Representations, ICLR</source>
          <year>2019</year>
          , New Orleans, LA, USA, May 6-9, 2019
          . OpenReview.net.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>Montanino</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Punzo</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Making NGSIM Data Usable for Studies on Traffic Flow Theory: Multistep Method for Vehicle Trajectory Reconstruction</article-title>
          .
          <source>Transportation Research Record</source>
          ,
          <volume>2390</volume>
          (
          <issue>1</issue>
          ):
          <fpage>99</fpage>
          -
          <lpage>111</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>Wei</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Guo</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Black-box adversarial attacks by manipulating image attributes</article-title>
          .
          <source>Inf. Sci.</source>
          ,
          <volume>550</volume>
          :
          <fpage>285</fpage>
          -
          <lpage>296</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <surname>Weiss</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Goldberg</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Yahav</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples</article-title>
          . In Dy, J. G.; and
          <string-name>
            <surname>Krause</surname>
          </string-name>
          , A., eds.,
          <source>Proceedings of the 35th International Conference on Machine Learning</source>
          , ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018
          , volume
          <volume>80</volume>
          <source>of Proceedings of Machine Learning Research</source>
          ,
          <fpage>5244</fpage>
          -
          <lpage>5253</lpage>
          . PMLR.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          2021.
          <article-title>Decision-Guided Weighted Automata Extraction from Recurrent Neural Networks</article-title>
          .
          <source>Proceedings of the AAAI Conference on Artificial Intelligence</source>
          ,
          <volume>35</volume>
          (
          <issue>13</issue>
          ):
          <fpage>11699</fpage>
          -
          <lpage>11707</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Guo</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Shen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Artificial Intelligence in Surgery</article-title>
          . CoRR, abs/2001.00627.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>