<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Self-learning assessment of communication in distributed embedded systems - a feasibility study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Falk Langer</string-name>
          <email>falk.langer@esk.fraunhofer.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Erik Oswald</string-name>
          <email>erik.oswald@esk.fraunhofer.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Fraunhofer ESK</institution>
          ,
          <addr-line>Hansastrasse 32, Munich</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>95</fpage>
      <lpage>105</lpage>
      <abstract>
        <p>This paper addresses the problem of evaluating the communication behavior of cyber-physical systems. An important obstacle to validating the interaction in such distributed systems is missing, wrong or incomplete specification. In this paper, the application of a new approach for assessing the communication behavior based on reference traces is presented and evaluated. The benefit of the approach is that it works automatically, with low additional effort and without using any specification. This paper provides a use case in conjunction with a feasibility study to investigate the applicability of a self-learning anomaly detection methodology. The data for the feasibility study were created by applying the described anomaly detection within a real vehicle network.</p>
      </abstract>
      <kwd-group>
        <kwd>embedded system validation</kwd>
        <kwd>testing procedures</kwd>
        <kwd>network trace analysis</kwd>
        <kwd>self-learning test methods</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>This paper focuses on testing and validating the communication behavior of cyber-physical
systems (CPS). In systems with highly distributed functionality, as found in the
electronics of modern cars, the communication behavior is an important aspect of
system validation. In field operational tests it is important to analyze the network
traffic of a fully assembled car: even if every single electronic control unit is
tested exhaustively, a significant portion of the remaining bugs that result in errors or
malfunctions is only found late, during real driving or field operational tests.</p>
      <p>
        The most important problem in ensuring the correct interaction of functions in CPS
at system level is missing, wrong or incomplete specification (compare [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]).
Much research is in progress that tries to improve the process of creating system
specifications, with the goal of building better test cases for validating the
communication at system level. Nevertheless, obtaining sufficient test models is still
an extensive process.
      </p>
      <p>
        Because network traffic reflects the internal behavior of a distributed system,
analyzing it can help to detect possible bugs earlier and faster. But especially in
system-level tests it is not easy to judge the correctness of the communication on the network.
In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], a new approach was introduced for building observer models that evaluate the
communication behavior automatically, with low additional effort and without using any
specification. There it was shown that it is possible to infer meaningful automata
from a network reference trace. To check the applicability of the proposed approach,
this paper provides a feasibility study that shows the integration of the methodology
proposed in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] into an existing test scenario and examines the quality of the automata
for detecting bugs in the communication behavior.
      </p>
      <p>The paper is structured as follows. Chapter 2 explains the use case for the proposed
methodology and its integration into the test process. Chapter 3 reviews the
previous work and outlines the new self-learning methodology. In chapter
4 the quality measures for proving feasibility are defined and calculated. The
paper closes with conclusion and future work in chapter 5.</p>
    </sec>
    <sec id="sec-2">
      <title>INTEGRATION TO FIELD TEST</title>
      <p>The new approach presented in this paper shall help to find bugs in field
operational tests faster. For this reason, an important aspect of the proposed solution is
its integration into the established testing and validation process for the car’s electronic
infrastructure. To give an overview of the testing process of this distributed but
closed system, the basic testing steps for such a distributed
system are described in the following.</p>
      <p>As in every software development cycle, the first test stage consists of unit tests and
the second stage of integration tests. In integration tests, the different
applications belonging to an electronic control unit (ECU) are integrated, and the basic
functionality required from this ECU is tested. The next level can be
characterized as system validation, often called system test: a test at
system level, where the interaction of different ECUs is examined.</p>
      <p>
        Because of the strong interaction of a car's embedded applications with their
environment, field operational tests commonly finalize the validation. For
functions realized in software, this kind of test became important at the latest with the
introduction of advanced driver assistance systems, which need to be evaluated in real
driving tests (compare [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]). Against this background it is well-established practice
that even for the car's electronic infrastructure an endurance test is executed as
final acceptance test. This endurance test is performed in real driving field operational
tests.
      </p>
      <p>Since the distributed network of ECUs inside the car is a closed system, in most
cases it is not possible to control the endurance test at network level. In field
operational tests, the test drivers mostly execute normal driving tasks. At this
testing level a test driver can only detect software bugs that lead to a malfunction
of the car or its components that is noticeable to the driver. Because this is a
very limited perspective on the executed software system, in most cases the network
traffic inside the car is recorded during the test drives. The recorded network traffic
provides information about the internal behavior of the car's electronic infrastructure
during these drives. Besides the verdict of the test driver, these traces are the only
source of information for validating the behavior of the car's electronics.</p>
      <p>Only when these test drives are executed without a detected malfunction over a
dedicated distance in kilometers does the electronic system of the tested car pass the
final acceptance test. There are still many remaining bugs that are found late in
these tests with the fully assembled car. It is not surprising that a few million
kilometers can easily accumulate before all remaining bugs are found and fixed and
a test period can be completed successfully.</p>
      <p>There are two ways to shorten this expensive and time-consuming procedure of achieving
a fault-free driving period within the endurance test. The first is to reduce the
number of remaining bugs that can cause malfunctions during test drives; this requires
better testing methodologies in earlier development phases. The second is to identify
possible bugs faster and more efficiently within the test drives.</p>
      <p>
        The first way is without question the methodically clean one. But one of the basic
problems in practice is missing, wrong or incomplete specification of system
requirements. This problem commonly becomes relevant with a high number of
interacting functions, as found in highly distributed functionality. To solve this
problem, much research is in progress (e.g. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]).
      </p>
      <p>This paper provides a new solution for the second way, the faster identification of
potential bugs within test drives. The solution tries to identify changes in
the system behavior by comparing it with a reference trace. The probability that new
behavior which is not represented within the reference trace is caused by a bug
appears to be high. For this reason, a self-learning methodology for an automatic
evaluation of the network traffic recorded in field operational tests is presented and
evaluated in this paper.</p>
    </sec>
    <sec id="sec-3">
      <title>PREVIOUS WORK</title>
      <p>This chapter provides an overview of the authors' previous work, which motivates
the new approach and provides the foundation and experiments for the proposed
self-learning method.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], the idea of extracting dependency models from qualified communication
behavior in order to rate the communication of further test cases was motivated. With this
idea, regions with incomplete or missing specification should become better testable
through a new methodology for assessing the communication behavior.
      </p>
      <p>The goal is therefore the construction of a method that allows a qualitative
comparison between the communication presented in a reference trace and
newly recorded traces. The essential outcome of the proposed procedure is the
awareness that a newly recorded network trace represents a new system behavior,
which was not represented in the reference trace. If such a behavior is recognized,
the method outputs a trigger or equivalent information to the tester. At this point
two possible interpretations of the tested network behavior exist: 1) a newly
implemented or just not yet observed behavior was found, or 2) a bug in the
communication behavior was detected. A system expert has to decide whether
case 1) or 2) applies. Naturally it is not possible to detect bugs
that are already included in the reference trace; but if no other tests detect such
bugs and they do not lead to a malfunction, it is not clear whether they are bugs at all or just
unspecified behavior.</p>
      <p>
        In conjunction with this idea, starting with [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], the learning problem of extracting
behavior models from traces was considered. In [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] it was pointed out that within a recorded
network trace basically no sequence boundaries are visible. Instead, a trace is
one single but very long sequence, which is a challenge for most learning algorithms.
With a first simple system hypothesis, in which a trace is a stream of events that can be
generated by a finite state automaton, the applicability of an artificial neural
network was examined in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        Because of the unsatisfactory false alarm rate and the high computational load
of the neural networks, other learning algorithms were sought. The Angluin L*
algorithm, which is able to generate finite state automata (compare [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]), seemed
to be a good candidate for inferring reliable dependency models. In [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] the adaptation of
the Angluin L* learning algorithm to learning automata from network streams was
shown. The results of the L* learning process are acceptance automata. It can be
shown that these automata describe the behavior of a given reference sequence of
events very accurately, without false alarms and with a maximum of inferable
dependencies. But in the evaluation with a real car network trace even the L*
algorithm failed to infer reasonable automata: the learning problem is simply too
complex for the algorithm.
      </p>
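      <p>As an illustration of what such an acceptance automaton does: the result of L* behaves like an ordinary deterministic finite automaton over trace events. The following is a minimal sketch, not the authors' implementation; the event names and transitions are hypothetical toy data:</p>

```python
# Minimal sketch of an acceptance automaton (DFA) evaluating an event
# sequence. Event names and transitions here are hypothetical; the real
# automata are inferred by the Angluin L* algorithm from a reference trace.

class AcceptanceAutomaton:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # dict: (state, event) -> state
        self.start = start
        self.accepting = accepting      # set of accepting states

    def accepts(self, events):
        """Return True if the event sequence drives the DFA to an
        accepting state without hitting an undefined transition."""
        state = self.start
        for event in events:
            key = (state, event)
            if key not in self.transitions:
                return False  # unknown behavior: reject
            state = self.transitions[key]
        return state in self.accepting

# Toy automaton: each request must be answered before the next request.
dfa = AcceptanceAutomaton(
    transitions={(0, "req"): 1, (1, "resp"): 0},
    start=0,
    accepting={0},
)

print(dfa.accepts(["req", "resp", "req", "resp"]))  # True
print(dfa.accepts(["req", "req"]))                  # False: deviating behavior
```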
      <p>
        A solution for this problem was provided in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. There a methodology is provided
that reduces the complexity of the learning task by separating sub-traces which
describe independent execution graphs. With this solution it is possible to infer an
acceptance automaton for each sub-trace that describes the behavior of the events in
this sub-trace satisfactorily.
      </p>
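      <p>The separation step can be pictured as projecting the one long trace onto one sub-trace per execution graph. The following simplified sketch assumes the cluster assignment is already given; the event names and cluster ids are hypothetical:</p>

```python
# Sketch of projecting one long trace onto per-cluster sub-traces, as a
# simplified illustration of the separation step described in [10].
# Event names and cluster assignments are hypothetical.

def split_into_subtraces(trace, clusters):
    """trace: list of event ids; clusters: dict event id -> cluster id.
    Returns one sub-trace per cluster, preserving event order."""
    subtraces = {}
    for event in trace:
        cluster = clusters.get(event)
        if cluster is None:
            continue  # events without an inferred automaton stay uncovered
        subtraces.setdefault(cluster, []).append(event)
    return subtraces

trace = ["a", "x", "b", "a", "y", "b", "x"]
clusters = {"a": 0, "b": 0, "x": 1, "y": 1}
print(split_into_subtraces(trace, clusters))
# {0: ['a', 'b', 'a', 'b'], 1: ['x', 'y', 'x']}
```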
      <p>
        The identification of independent execution graphs within the network trace was
undertaken in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] with a new clustering approach based on a spectral analysis. It
could be shown that there is a high probability that events with similar behavior in
time belong to the same execution graph. In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] the clustering
methodology was evaluated by applying it to a real car network trace taken from a
controller area network (CAN) that connects the powertrain ECUs. In the result,
approximately 70 % of the behavior of the CAN trace is covered by the inferred
acceptance automata. These research results show that it is possible to set up an
unsupervised self-learning methodology that infers behavior models from a network
trace without using a specification.
      </p>
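      <p>The intuition behind grouping events with "similar behavior in time" can be sketched crudely: events with similar cycle times are candidates for the same execution graph. The following stand-in groups events by mean inter-arrival time and is far simpler than the spectral method of [10]; the signal names, timestamps and tolerance are hypothetical:</p>

```python
# Crude stand-in for the spectral clustering of [10]: group events whose
# mean inter-arrival times are similar. Real CAN traces and the actual
# spectral analysis are far richer; data and threshold are hypothetical.

def mean_interarrival(timestamps):
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)

def group_by_period(event_times, tolerance=0.2):
    """event_times: dict event id -> sorted timestamps. Greedily group
    events whose mean periods differ by less than the relative tolerance."""
    groups = []
    for event, times in sorted(event_times.items()):
        period = mean_interarrival(times)
        for group in groups:
            if abs(period - group["period"]) / group["period"] < tolerance:
                group["events"].append(event)
                break
        else:
            groups.append({"period": period, "events": [event]})
    return [g["events"] for g in groups]

times = {
    "engine_rpm":  [0.00, 0.01, 0.02, 0.03],    # ~10 ms cycle
    "throttle":    [0.00, 0.011, 0.021, 0.031],  # ~10 ms cycle
    "door_status": [0.0, 1.0, 2.0, 3.0],         # ~1 s cycle
}
print(group_by_period(times))
# [['door_status'], ['engine_rpm', 'throttle']]
```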
      <p>
        If the reference trace represents the normal behavior of a system, the inferred
automata should accept this normal behavior. If a newly recorded trace likewise contains
normal behavior of the system, the automata should accept this trace; if it contains
other behavior, it should be rejected by the automata. A newly recorded trace that
the inferred automata do not accept therefore potentially contains behavior that is out
of the norm. This can be called out-of-norm (OoN) behavior (comp. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]) of the
trace. If OoN behavior is detected, the tester is informed by an OoN-trigger.
In the following, the methodology explained above, inferring acceptance automata and
using them to evaluate newly recorded traces, is called OoN-detection.
      </p>
    </sec>
    <sec id="sec-4">
      <title>QUALITY MEASUREMENTS</title>
      <p>Even though it is now possible to extract acceptance automata that describe the
behavior of the reference trace satisfactorily, the usability of these automata for evaluating
other traces is still not proven. For applying this methodology to the introduced use
case of finding bugs in network traces from field operational tests, it is necessary
to prove the quality of the inferred automata with respect to that use case. This
means that a more detailed analysis of the expected false alarm rate and of the
percentage of detectable bugs is needed.</p>
      <p>In the following, the basic characteristics for determining the quality and usability
of the OoN-detection are explained, defined and estimated. Using real
CAN traces, the feasibility of the proposed OoN-detection in an automotive test
scenario is examined.</p>
      <sec id="sec-4-1">
        <title>The coverage criteria</title>
        <p>The first criterion for a test mechanism is the test coverage. In the case of the
proposed self-learning approach, the definition of coverage needs to be adapted to the
visible system behavior, which in this case is completely contained in the reference
trace; the coverage therefore needs to be calculated in relation to the reference trace.
A trace basically consists of a set of events E = {e<sub>1</sub>, …, e<sub>n</sub>}. A sequence of
events is defined by S = (s<sub>1</sub>, s<sub>2</sub>, …) with s<sub>i</sub> ∈ E. In the case of a network trace, a
sequence comprises the complete recorded amount of events in the given period, which
can easily be more than 10<sup>6</sup> events.</p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] it was shown that it is not possible to infer acceptance automata for all events.
This leads to the effect that only a subset E<sub>c</sub> ⊆ E of the events is covered by the
OoN-detection. This subset of E also leads to a subset S<sub>c</sub> of the sequence S,
because the single events of E that are not in E<sub>c</sub> are missing. This results in two
different coverage measurements, the event coverage and the trace coverage:
        </p>
        <p>event coverage = |E<sub>c</sub>| ⁄ |E| (1)</p>
        <p>trace coverage = |S<sub>c</sub>| ⁄ |S| (2)</p>
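        <p>Both coverage measures can be computed directly from a trace and the set of covered events. A small sketch with hypothetical toy data (real traces easily exceed one million events):</p>

```python
# Computing event coverage and trace coverage from a trace and the set
# of events for which acceptance automata could be inferred. The trace
# and the covered set are hypothetical toy data.

def event_coverage(all_events, covered_events):
    """Fraction of distinct event types covered by the OoN-detection."""
    return len(covered_events & all_events) / len(all_events)

def trace_coverage(trace, covered_events):
    """Fraction of trace positions whose event is covered."""
    return sum(1 for e in trace if e in covered_events) / len(trace)

trace = ["a", "a", "a", "b", "c", "a", "a", "b"]
all_events = set(trace)   # {'a', 'b', 'c'}
covered = {"a", "b"}      # no automaton could be inferred for 'c'

print(event_coverage(all_events, covered))  # 0.6666666666666666
print(trace_coverage(trace, covered))       # 0.875
```

Note that a few frequent covered events can yield a high trace coverage even when the event coverage is low.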
      </sec>
      <sec id="sec-4-2">
        <title>The classification criteria</title>
        <p>The essential criterion for classification is the rating whether the classified object
belongs to a specific class or not. In the case of the OoN-detection, a trace needs to be
classified into two decision classes: (1) the trace has the same behavior as the
reference trace; (2) the trace has a different behavior than the reference trace. To rate
whether the decision of the OoN-detector is correct, four different results are possible:
1. True positive (tp): the trace was classified as having the same behavior as the
reference trace, and this was correct.
2. False positive (fp): the trace was classified as having the same behavior as the
reference trace, but this was not correct.
3. True negative (tn): the trace was classified as having a different behavior than the
reference trace, and this was correct.
4. False negative (fn): the trace was classified as having a different behavior than the
reference trace, but this was not correct.</p>
        <p>With these attributes the two important relational criteria, the false alarm rate
r<sub>fa</sub> and the rate of detectable anomalies r<sub>da</sub>, can be calculated:</p>
        <p>r<sub>fa</sub> = fn ⁄ (tp + fn) (3)</p>
        <p>r<sub>da</sub> = tn ⁄ (tn + fp) (4)</p>
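        <p>A small sketch of these two rates, following the class convention above in which "positive" means same behavior as the reference, so a false alarm is a normal trace that is wrongly rejected. The counts are hypothetical:</p>

```python
# Computing the false alarm rate and the rate of detectable anomalies
# from the four classification outcomes. Note the convention above:
# "positive" = same behavior as the reference trace, so a false alarm
# is a normal trace wrongly rejected. Counts are hypothetical.

def false_alarm_rate(tp, fn):
    """Fraction of actually normal traces that wrongly raise an OoN-trigger."""
    return fn / (tp + fn)

def detectable_anomaly_rate(tn, fp):
    """Fraction of actually deviating traces that are correctly rejected."""
    return tn / (tn + fp)

print(false_alarm_rate(tp=60, fn=40))         # 0.4
print(detectable_anomaly_rate(tn=90, fp=10))  # 0.9
```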
      </sec>
      <sec id="sec-4-3">
        <title>Estimating the rate of detectable anomalies.</title>
        <p>The rate of detectable anomalies determines the percentage of real OoN deviations
in a trace that the method finds. The optimal way to calculate this rate would be to
present traces with known bugs to the OoN-detection. But for this kind of bug
detection no public data sets are available.</p>
        <p>A practicable method for estimating the rate of detectable anomalies is the
instrumentation of a trace that contains no bugs and is completely accepted by the
inferred automata. In the proposed use case this has to be the reference trace that
was used to learn the acceptance automata. For estimating the rate of detectable
anomalies, the reference trace was instrumented in three different ways: (1) a
randomly selected event is deleted from the trace, (2) a randomly selected event is
duplicated, and (3) a randomly selected event is replaced by another randomly selected
event.</p>
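        <p>The three instrumentations can be sketched as simple trace mutations. This is an illustration with hypothetical toy events, not the authors' tooling; a fixed seed keeps the example reproducible:</p>

```python
# Sketch of the three trace instrumentations used to estimate the rate
# of detectable anomalies: delete, duplicate, or replace one randomly
# selected event. The toy trace is hypothetical.

import random

def delete_event(trace, rng):
    i = rng.randrange(len(trace))
    return trace[:i] + trace[i + 1:]

def duplicate_event(trace, rng):
    i = rng.randrange(len(trace))
    return trace[:i] + [trace[i]] + trace[i:]

def replace_event(trace, rng):
    i = rng.randrange(len(trace))
    j = rng.randrange(len(trace))
    return trace[:i] + [trace[j]] + trace[i + 1:]

rng = random.Random(0)  # fixed seed for reproducibility
trace = ["a", "b", "c", "d"]
print(delete_event(trace, rng))     # one event removed
print(duplicate_event(trace, rng))  # one event doubled
print(replace_event(trace, rng))    # one event substituted
```

Each mutated trace is then presented to the inferred automata; a raised OoN-trigger counts as a detected anomaly.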
      </sec>
      <sec id="sec-4-4">
        <title>Estimating the false alarm rate.</title>
        <p>The estimation of the false alarm rate turns out to be more complicated. Since the
reference trace does not describe all possible behavior, the OoN-detection will
necessarily find new behavior within the test traces. But without deeper knowledge of
the system it is not possible to decide whether the OoN-detection did in fact identify
new behavior or not.</p>
        <p>Because a false alarm is not directly identifiable, it is helpful to look at the causes
of false alarms. If it is clear which circumstances can cause a false alarm, it should
be possible to estimate the count of false alarms in another way.</p>
        <p>The inferred acceptance automata of the OoN-detection mechanism are not based
on probabilistic decisions like neural networks or Markov chains. Therefore it can be
shown that the inferred acceptance automata always evaluate the reference trace
correctly and in the same way. But these acceptance automata can be overfitted such
that they only accept the behavior of the reference trace. Overfitting in this case means
that an acceptance automaton evaluates events that are not correlated with each other.
Only if the events are correlated can they have a dedicated, reproducible behavior
that can therefore be declared as normal. If an inferred acceptance automaton is
overfitted, it accepts only the reference trace but no other traces. It can be
pointed out that the most significant part of false alarms will be caused by overfitted
automata. Therefore the determination of the number of overfitted automata gives
a first impression of the false alarm rate.</p>
        <p>It is a strong indication of overfitting if an inferred acceptance automaton rejects
all tested traces except the reference trace. To estimate the false alarm rate, one
test trace is evaluated by the inferred acceptance automata. All automata that reject
this trace are then used to evaluate the other test traces. If an automaton rejects
all test traces, it is very likely overfitted.</p>
      </sec>
      <sec id="sec-4-4-1">
        <title>Results of the feasibility study.</title>
        <p>The best way to examine the feasibility of the described OoN-detection mechanism
would be an evaluation with real network data that contain known anomalies. But for
the proposed use case, the evaluation of network data from a car's network, no such
data sets are available.</p>
        <p>The evaluation results presented in this chapter were generated using real network
data taken from the powertrain CAN of a car in series production. Because a
proven-in-use product is used, it is expected that these network data do not contain
any faults or anomalies.</p>
      </sec>
      <sec id="sec-4-5">
        <title>The evaluation data set.</title>
        <p>
          The starting point for the evaluation is the reference trace that was used in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] to
infer acceptance automata. This reference trace has a length of 12.80 minutes with a
sequence length of 2.5 × 10<sup>6</sup> events, described by a set of 7,172 different events.
Since this reference trace was recorded in a car in series production, it is assumed to
contain no bugs. From this reference trace, 2,848 different acceptance automata were
inferred with the methodology explained in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>For the quality measurement, four test traces were recorded on the same car in
different driving scenarios. The recording times of these traces range from 2.85 min
to 14.60 min and total approximately 26 min (see Table 1).</p>
      </sec>
      <sec id="sec-4-6">
        <title>False alarm rate.</title>
        <p>For estimating the false alarm rate, trace 1 is evaluated by the 2,848 acceptance
automata. This trace is rejected by 1,943 automata, as shown in the first column of
Fig. 1. When these automata are checked for overfitting with trace 2, approximately
90 % of them seem to be overfitted because they reject both trace 1 and trace 2.
If these overfitted automata are excluded, 181 remaining automata generate an
OoN-trigger. If these remaining automata are additionally checked with traces 3 and 4,
114 automata are left that appear not to be overfitted. This means that with the
additional plausibility check with only two traces, a remaining false alarm rate of
approximately 40 % can be achieved (compare Fig. 1).</p>
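        <p>The overfitting check described above can be sketched as a simple filter: an automaton that rejects every available test trace (while accepting the reference) is very likely overfitted and is excluded from OoN-detection. Automata are modeled as plain predicates here for brevity; all data are hypothetical:</p>

```python
# Sketch of the overfitting heuristic: exclude automata that reject
# every available test trace. Automata are modeled as plain predicates
# over a trace for brevity; the traces are hypothetical.

def filter_overfitted(automata, test_traces):
    """Keep only automata that accept at least one test trace."""
    usable, overfitted = [], []
    for automaton in automata:
        if any(automaton(trace) for trace in test_traces):
            usable.append(automaton)
        else:
            overfitted.append(automaton)  # rejects everything: likely overfitted
    return usable, overfitted

accepts_all = lambda trace: True    # generalizes beyond the reference
accepts_none = lambda trace: False  # fits only the reference trace
traces = [["a", "b"], ["b", "a"], ["a"], ["b"]]  # test traces 1-4

usable, overfitted = filter_overfitted([accepts_all, accepts_none], traces)
print(len(usable), len(overfitted))  # 1 1
```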
        <p>[Fig. 1. Number of automata that reject traces: 1,943 automata reject trace 1 (90 %); 181 automata reject trace 1 but not trace 2 (40 %); 114 automata reject trace 1 but not traces 2, 3 and 4.]</p>
      </sec>
      <sec id="sec-4-7">
        <title>Rate of detectable anomalies.</title>
        <p>The experiments for estimating the rate of detectable anomalies are executed by
instrumenting the reference trace. The results are shown in Table 2. In the experiment
approximately 0.1 % of the trace was modified, which leads to about 2,500 injected
anomalies.</p>
      </sec>
      <sec id="sec-4-8">
        <title>Coverage.</title>
        <p>The 2,848 initially inferred automata cover approximately 80 % of the trace (see
Fig. 2). When the overfitted automata are excluded, 770 acceptance automata
remain usable for OoN-detection. These still have a coverage of approximately 45 %
(compare Fig. 2, right column).</p>
      </sec>
      <sec id="sec-4-9">
        <title>Coverage OoN-detection – normal and not normal behavior.</title>
        <p>After the overfitting analysis, 770 automata are usable for OoN-detection. If the
OoN-detection mechanism is applied with these automata to the given test traces,
621 of these automata accept all four test traces, while 149 automata reject at least
one test trace. Fig. 3 shows the resulting coverage of the detected normal behavior
and of the detected non-normal behavior.</p>
        <p>[Fig. 2. Coverage of inferred automata and of usable (non-overfitted) automata: trace coverage, event coverage and share of usable automata.]</p>
        <p>[Fig. 3. Coverage of normal behavior and of positive OoN-triggers: trace coverage, event coverage and share of usable automata.]</p>
        <p>
          The results provide a first impression of the applicability of the proposed
self-learning OoN-detection. The evaluation based on real CAN communication is a first
step towards approving the proposed OoN-detection. The most critical point is of course
the false alarm rate. Although the estimation of this rate tends to be a difficult task, the
results provide evidence that the false alarm rate is in the range of 40 %.
In comparison to automatic test methods like code checkers this seems to be acceptable
(comp. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]). A very good result was reached for the detection of anomalies, with a rate
of about 90 %. A comparison of the coverage is difficult, but for an
unsupervised self-learning approach a system coverage of about 45 % seems to be an
excellent result.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>CONCLUSION AND FUTURE WORK</title>
      <p>This paper explains and evaluates an application use case for a self-learning
OoN-detection that was established in prior work. Since the proposed OoN-detection is a
new approach in the area of system validation, this paper provides a basic quality
measurement for this automatic self-learning approach.</p>
      <p>It could be shown that the proposed self-learning OoN-detection is potentially
usable to detect anomalies in the communication behavior relative to a
reference trace. Even for the critical false alarm rate, approximately 40 % seems
to be acceptable for usage in a testing environment.</p>
      <p>The provided evaluation results were recorded from a car in series production that
potentially has no bugs. For this reason, a further investigation within a real
testing environment needs to be done to learn more about the expected outcome and
usability of the proposed self-learning OoN-detection for CPS.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Angluin</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1987</year>
          .
          <article-title>Learning regular sets from queries and counterexamples</article-title>
          .
          <source>Information and computation 75</source>
          ,
          <fpage>87</fpage>
          -
          <lpage>106</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Bollig</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Katoen</surname>
            ,
            <given-names>J.-P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kern</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Leucker</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>Replaying Play in and Play out: Synthesis of Design Models from Scenarios by Learning</article-title>
          .
          <source>In Proceedings of the 13th International Conference on Tools and Algorithms for Construction and Analysis of Systems. Lecture Notes in Computer Science</source>
          . Springer, Braga, Portugal,
          <fpage>435</fpage>
          -
          <lpage>450</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Drabek</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pramsohler</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zeller</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Weiss</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Interface Verification Using Executable Reference Models: An Application in the Automotive Infotainment</article-title>
          .
          <source>In Proceedings of the 6th International Workshop on Model Based Architecting and Construction of Embedded Systems</source>
          , Miami, Florida, USA,
          <volume>7</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Ebert</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Embedded Software: Facts, Figures, and Future</article-title>
          .
          <source>IEEE Computer 42</source>
          ,
          <issue>4</issue>
          ,
          <fpage>42</fpage>
          -
          <lpage>52</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Goralczyk</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaeufele</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Radusch</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Logging Design for Vehicle Communication Field Operational Tests</article-title>
          .
          <source>In FAST-Zero'11 Proceedings</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Kremenek</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ashcraft</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Engler</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2004</year>
          .
          <article-title>Correlation exploitation in error ranking</article-title>
          .
          <source>SIGSOFT Softw. Eng. Notes</source>
          <volume>29</volume>
          ,
          <issue>6</issue>
          ,
          <fpage>83</fpage>
          -
          <lpage>93</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Langer</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bertulies</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hoffmann</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Self Learning Anomaly Detection for Embedded Safety Critical Systems</article-title>
          .
          <source>In Schriftenreihe des Instituts für Angewandte Informatik, Automatisierungstechnik am Karlsruher Institut für Technologie, KIT Scientific Publishing</source>
          ,
          <fpage>31</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Langer</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eilers</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Knorr</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Fault detection in discrete event based distributed systems by forecasting message sequences with neural networks</article-title>
          .
          <source>In KI 2009: Advances in Artificial Intelligence</source>
          . Springer,
          <fpage>411</fpage>
          -
          <lpage>418</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Langer</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Prehofer</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Anomaly detection in embedded safety critical software</article-title>
          .
          <source>In International Workshop on Principles of Diagnosis (DX)</source>
          ,
          <fpage>163</fpage>
          -
          <lpage>166</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Langer</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Oswald</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Using Reference Traces for Validation of Communication in Embedded Systems</article-title>
          .
          <source>In ICONS 2014, The Ninth International Conference on Systems</source>
          ,
          <fpage>203</fpage>
          -
          <lpage>208</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Lutz</surname>
            ,
            <given-names>R. R.</given-names>
          </string-name>
          <year>1993</year>
          .
          <article-title>Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems</article-title>
          .
          <source>In Proceedings of the IEEE International Symposium on Requirements Engineering</source>
          ,
          <fpage>126</fpage>
          -
          <lpage>133</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Peti</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Obermaisser</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kopetz</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Out-of-norm assertions [diagnostic mechanism]</article-title>
          .
          <source>In 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005)</source>
          . IEEE,
          <fpage>280</fpage>
          -
          <lpage>291</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Berg</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jonsson</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leucker</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Saksena</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Insights to Angluin's Learning</article-title>
          .
          <source>Electronic Notes in Theoretical Computer Science</source>
          <volume>118</volume>
          ,
          <fpage>3</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>