<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a Systematic Identification of Security Tests Based on Security Risk Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jan Stijohann</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jorge Cuellar</string-name>
          <role>Supervisor</role>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Siemens AG</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Today's security testing is neither systematic nor standardized. In particular, there are no clearly defined criteria for selecting relevant tests. Thus different analysts come to different results, and sound quality assurance is hardly possible. The literature suggests basing the choice and prioritization of tests on risk considerations, but lacks a systematic approach for a traceable transition from the abstract, business-oriented world of risk analysis into the concrete, technical world of security testing. We aim at bridging this gap in two steps: the first bridges between high-level, non-technical "business worst case scenarios" and less abstract "technical threat scenarios", using a technical description of the system and a systematic STRIDE-based elicitation approach. The second is a rule-based step that maps technical threat scenarios to "test types", that is, to classes of tests that need to be adapted to the particular system under validation. Our method provides traceability for the choice of security tests and a standardized minimum quality assurance level.</p>
      </abstract>
      <kwd-group>
        <kwd>Security</kwd>
        <kwd>risk driven testing</kwd>
        <kwd>risk analysis</kwd>
        <kwd>threat modeling</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Today's security testing is not a systematic or standardized process: a tester
has neither clearly defined criteria for choosing or prioritizing possible tests,
nor for deciding when to stop. The results of such security testing sessions depend on the
tester's know-how, experience, intuition, and luck. Thus different people come
to different results, and sound quality assurance is hardly possible.</p>
      <p>
        Several standards and scientific papers suggest basing the choice and
prioritization of tests on risk considerations. Yet only a few explain how this can be done
on a technical level, e.g. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], and there is a lack of concrete guidelines on how to
systematically select security tests based on business-oriented risk analysis [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>However, if the risk analysis remains at a high level, it misses essential risk
factors and is of little help for subsequent security testing. If, on the other
hand, it includes technical threats, such as "Buffer Overflow" or "SQL
Injection", those choices seem arbitrary, that is, made independently of the unique,
specific characteristics of the system under verification (SUV). It is not clear
why they concern this particular SUV, nor why other similar threats with the
same possible impact are ignored. For instance, in the case of the buffer overflow, an
analyst may simply not consider "format string vulnerabilities", "double frees",
or "null pointers", just because he is not aware of their existence, while "buffer
overflow" is a common vulnerability that he knows.</p>
      <p>Objectives. This paper aims at bridging this gap between risk analysis and
security testing, offering a systematic approach for a traceable transition from
abstract, business-oriented risk analysis to the concrete and technical security
testing world. The process must start from concrete technical information about
the SUV and result in a set of tests that may still require some adaptation, but
are manageable by experienced security testing experts. Moreover, it should be
applicable in real-world industrial environments and provide a clear benefit for
the security analyst in terms of effort, assurance, and transparency.</p>
      <p>Outline of our Solution. We propose a two-step process for a systematic
identification of security tests based on risk analysis:
1. The first step goes from high-level, non-technical "business worst case
scenarios" to less abstract "technical threat scenarios" via a systematic
STRIDE-based elicitation approach. This step requires a sufficiently
technical security overview, in the form of a Data Flow Diagram (DFD) of the
SUV, annotated with security-relevant information.
2. The second step derives security tests from the technical threat scenarios. It
guides the analyst with rules that map patterns in the DFD to "test types",
which are test templates that need to be instantiated, that is, adapted to
the implementation, configuration, and state of the SUV. In addition to those
re-usable mapping rules, the selection of appropriate tests is supported by
organizing the test types in a "test library" and tagging the entries according
to the tested system element, the tested security property, the technology,
and the sophistication of the test.</p>
    </sec>
    <sec id="sec-2">
      <title>Current Work</title>
      <sec id="sec-2-1">
        <title>Technical System Description</title>
        <p>The technical system description captures and structures the security-relevant
technical aspects of the SUV in a comprehensive and systematic way. The
resulting security overview is crucial for the transition from risk analysis results to
security tests (see Sections 2.2 and 2.3), and it provides the technical system
information needed to identify and instantiate appropriate test types. The security
overview should have the following properties:</p>
        <p>
          Created by the Security Analysis Team. Design documents are often
not suitable as a security overview, as they tend to be overwhelming, outdated,
incorrect, or incomplete, or to miss relevant security information. It is therefore
judicious to let the risk analysis team create its own suitable, sufficiently
technical security overview. Besides studying existing documents,
the security analysts should consider interviews and whiteboard sessions with
developers, as well as tool-supported approaches (as explained later on).
(The STRIDE acronym is introduced as part of Microsoft's threat modelling [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]; it
stands for spoofing, tampering, repudiation, information disclosure, denial of service,
and elevation of privilege.)
        </p>
        <p>Sufficiently Technical. Examples of security-relevant technical
information that a security overview should cover in detail are: data-flow-related
aspects (including interfaces and trust boundaries), security controls (such as
authentication, encryption, or input validation), sensitive information that must
be protected, and relevant design properties (protocols, sanitization/encodings, file
permissions, process privileges, etc.).</p>
        <p>Generated with Tool Support. Manually creating a correct and sufficiently
technical security overview can become time-consuming and tedious.
This is especially the case for complex and dynamically evolving software where
no one has the complete overview. Reconnaissance tools, such as port scanners,
network sniffers, or static and dynamic analysis tools, can help to obtain and
keep the overview. As a side effect, the tool-generated results can be used
to discover discrepancies with existing design documents and interview results.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Presented in a Syntactically Standardized Language</title>
        <p>
          One possible standardized graphical language is given by Data Flow Diagrams (DFDs), as used
in [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], annotated with additional security-relevant information. DFDs are well
suited for security analysis, as they contain the interfaces that an attacker may
use and describe how data, often the target of an attack, moves through the system.
        </p>
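Such an annotated DFD can be captured in a simple machine-readable form. The following sketch is illustrative only: the element names, annotation strings, and data structures are hypothetical, not part of the paper's tooling.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    """A DFD node: a process, data store, or external entity."""
    name: str
    kind: str                              # "process", "data_store", "external_entity"
    annotations: frozenset = frozenset()   # security-relevant tags, e.g. {"tech:http"}

@dataclass(frozen=True)
class DataFlow:
    """A directed data flow between two DFD elements."""
    source: str
    target: str
    data: str                              # what moves, e.g. "credentials"
    crosses_trust_boundary: bool = False

# A minimal security overview of a hypothetical web front end
elements = [
    Element("browser", "external_entity"),
    Element("webapp", "process", frozenset({"tech:http", "input_validation:none"})),
    Element("userdb", "data_store", frozenset({"data:credentials"})),
]
flows = [
    DataFlow("browser", "webapp", "login form", crosses_trust_boundary=True),
    DataFlow("webapp", "userdb", "credentials"),
]

# Interfaces reachable by an attacker: flows that cross a trust boundary
attack_surface = [f for f in flows if f.crosses_trust_boundary]
print([f"{f.source}->{f.target}" for f in attack_surface])
```

Even a toy encoding like this makes the two properties named in the text explicit: the attacker-reachable interfaces and the movement of sensitive data through the system.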
      </sec>
      <sec id="sec-2-3">
        <title>STRIDE-based Elicitation of Technical Threat Scenarios</title>
        <p>A STRIDE-based elicitation of technical threat scenarios is the first step of the
transition from high-level risk analysis into the practical security testing world.
The starting point is a set of short, informal, non-technical descriptions of business
worst case scenarios (BWCSs). Most risk analysis methods include the definition
of BWCSs or similar equivalents.</p>
        <p>The security analyst examines which security violations of system elements
could lead to the BWCSs. We call the tuple (system element, violated security
property) a technical threat scenario (TTS). The mappings of BWCSs to TTSs
are created top-down (given a BWCS, examine which combination of TTSs could
lead to it) and bottom-up (for each DFD element, check whether a violation of any
security property, in combination with other TTSs, could lead to a BWCS).</p>
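The bottom-up direction of this elicitation is mechanical enough to sketch in code. The BWCS, element names, and the set of relevant scenarios below are invented for illustration; in practice the filtering step is the analyst's judgment, not a lookup.

```python
# STRIDE letters: S(poofing), T(ampering), R(epudiation),
# I(nformation disclosure), D(enial of service), E(levation of privilege)
STRIDE = "STRIDE"

# Hypothetical BWCS and DFD elements, for illustration only
bwcs = "customer credentials are leaked"
dfd_elements = ["webapp", "userdb", "backup_share"]

def bottom_up_candidates(elements, letters=STRIDE):
    """Enumerate every candidate TTS: each (element, STRIDE letter) pair."""
    return [(el, s) for el in elements for s in letters]

# Top-down: the analyst keeps only the candidates that can lead to the BWCS
# (here modeled as a fixed set; in reality this is an expert judgment).
relevant = {("webapp", "E"), ("userdb", "I"), ("backup_share", "I")}
ttss = [c for c in bottom_up_candidates(dfd_elements) if c in relevant]
print(ttss)
```

Enumerating all (element, property) pairs first is what gives the method its coverage argument: no scenario is skipped merely because the analyst did not think of it.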
      </sec>
      <sec id="sec-2-4">
        <title>Mapping Technical Threat Scenarios to Test Types</title>
        <p>The mapping of BWCSs to TTSs is only the first step towards security tests.
The TTSs are still too abstract and need to be further concretized. For this
purpose, we suggest the concept of test types. A test type is a template for a
class of tests which a security analyst can instantiate into an executable test by
adapting and completing it according to the implementation, configuration, and
state of the SUV. (The violated security property in a TTS is represented by one
letter of the STRIDE acronym.)</p>
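The template character of a test type can be illustrated with a minimal sketch. The class, the placeholder names, and the SQL injection entry are hypothetical examples, not entries from the paper's library.

```python
from dataclasses import dataclass

@dataclass
class TestType:
    """A template for a class of security tests (names are illustrative)."""
    title: str
    description: str   # contains placeholders to be filled per SUV

    def instantiate(self, **suv_details):
        """Adapt the template to the concrete SUV implementation/configuration."""
        return self.description.format(**suv_details)

sqli = TestType(
    title="SQL injection via {interface}",
    description="Send crafted SQL metacharacters to the {interface} of {component} "
                "and observe {dbms} error messages.",
)
print(sqli.instantiate(interface="login form", component="webapp", dbms="MySQL"))
```

The instantiation step is exactly where the SUV-specific knowledge (implementation, configuration, state) enters; the template itself stays re-usable across systems.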
        <p>DFD-Pattern-based Rules. One way to capture and leverage the security
testing expertise required to derive appropriate test types from TTSs is via
mapping rules. We suggest that such rules consist of the following elements:
- A pattern in an annotated DFD. Besides a mandatory TTS, which includes
the security property violation, the pattern can include additional system
elements and further annotations.
- The level of sophistication for the security tests. It is determined by risk
considerations such as the expected attacker and the desired assurance.
- A reference to the suggested test type that fits the above characteristics.
Rules of this structure allow the test type derivation to become systematic and
traceable. The mapping rules suggest an initial set of test types which helps to
achieve a minimum quality standard.</p>
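The three rule elements listed above can be sketched as plain data plus a matching predicate. The rule content, annotation tags, and sophistication levels below are invented placeholders, not the paper's actual rule set.

```python
# A mapping rule: DFD pattern (TTS + required annotations) -> suggested test type.
rules = [
    {
        "tts": ("process", "T"),                # tampering on a process element...
        "required_annotations": {"tech:http", "input_validation:none"},
        "sophistication": "basic",              # chosen from risk considerations
        "test_type": "parameter tampering on HTTP interface",
    },
]

def match(rule, element_kind, stride_letter, annotations, level):
    """True if the DFD element with its TTS and annotations fits the rule."""
    return (rule["tts"] == (element_kind, stride_letter)
            and rule["required_annotations"] <= annotations   # subset check
            and rule["sophistication"] == level)

suggested = [r["test_type"] for r in rules
             if match(r, "process", "T",
                      {"tech:http", "input_validation:none", "tech:tls"}, "basic")]
print(suggested)
```

Because every suggested test type is produced by an explicit rule firing on an explicit pattern, the derivation is reproducible and can be audited, which is the traceability property the text claims.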
        <p>
          Our DFD-patterns-based rules are inspired by EMC's \Threat library" [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ],
but in contrast our method is a) to be used during testing not development,
b) uses explicit rules, c) considers SALs and security violations, and d) yields
concrete test types instead of abstract threats.
        </p>
        <p>Test Library. The presented concept of mapping rules anticipates the idea
of a re-usable collection of test types. We suggest that the entries of such a test
library consist of a title, a textual description, and the information needed to be
matched by mapping rules. The latter allows filtering the library entries according
to the targeted system element, the security property to violate, the technology,
and the sophistication of the test. This supports security analysts who want to
go beyond the mere rule-generated minimum set of test types. Figure 1 shows
an exemplary mapping rule (a) and an excerpt of the current test library (b).</p>
        <p>The steps of our method can be integrated in an ISO 31000-conformant risk analysis
process, provided that it estimates risk based on clearly defined threat
scenarios. The suggested security overview is obtained in the context establishment
phase, together with the identification of non-technical assets and their security
requirements. Based on this, the BWCSs are derived in the risk analysis phase
and are then mapped to technical threat scenarios. The determination of
expected threat agents, including their estimated skill level, is also part of the risk
analysis phase. During the risk estimation phase, the likelihood of the
technical threat scenarios is estimated and risk values for the BWCSs are assigned.
The subsequent risk evaluation allows prioritizing the TTSs, which are then,
as part of the risk treatment, covered by appropriate tests.</p>
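The tag-based filtering of library entries described above is straightforward to sketch. The entries and tag values below are illustrative placeholders, not an excerpt of the actual test library.

```python
# Each test library entry carries the tags needed for rule matching and filtering.
library = [
    {"title": "SQL injection probes", "element": "data_store",
     "property": "T", "technology": "sql", "sophistication": "basic"},
    {"title": "Blind timing-based SQL injection", "element": "data_store",
     "property": "T", "technology": "sql", "sophistication": "advanced"},
    {"title": "ARP spoofing on local segment", "element": "data_flow",
     "property": "S", "technology": "ethernet", "sophistication": "advanced"},
]

def filter_library(entries, **criteria):
    """Keep entries whose tags match all the given criteria."""
    return [e for e in entries
            if all(e.get(k) == v for k, v in criteria.items())]

hits = filter_library(library, element="data_store", technology="sql")
print([e["title"] for e in hits])
```

An analyst who wants to go beyond the rule-generated minimum can browse such a filtered slice, e.g. raising the sophistication criterion when the expected attacker is more capable.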
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Future Work</title>
      <sec id="sec-3-1">
        <title>Extending the Collection of Mapping Rules</title>
        <p>
          We want to extend our current rule set in order to cover different SUVs from
different application domains. For this purpose, we intend to proceed according
to the following three-step procedure:
1. Identify classes of vulnerabilities to be covered. Appropriate sources are lists
of threats and vulnerabilities, such as CAPEC and CWE, and secure software
development and security testing literature, such as [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], and [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
2. For each vulnerability class, analyse the technical context and identify
suitable environment properties, in order to determine a reliable pattern that
indicates the possible presence of the vulnerability.
3. Determine how the identified context information can be obtained, how it
can be represented in the form of a DFD pattern, and which (possibly new) test
types match these patterns.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>Tool Support</title>
        <p>
          Our goal is to further automate the technical system description. We therefore
plan to evaluate more advanced tools such as scriptable debuggers (e.g. pydbg,
IDA Pro, Immunity Debugger), advanced dynamic analysis tools (e.g.
WinApiOverride32, Microsoft Application Verifier), or operating system utilities (e.g.
strace, eventlog). First steps are described in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
        <p>
          Once we have extended our rule base, a manual application of rules for the
derivation of test types may become tiresome and inefficient. Our idea is to
develop a tool that, similar to the one described in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], processes annotated DFDs, detects
the patterns, and finally outputs adequate test types.
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>Further Evaluation</title>
        <p>We plan to apply the presented method, together with a light-weight ISO 31000-conformant
risk analysis, in future real-world security assessments. The idea is to
develop a questionnaire to capture observations after each assessment and thus
support a systematic evaluation regarding benefits, drawbacks, practicability,
areas for improvement, and inherent limitations.</p>
        <p>We intend to analyse to what extent the tests identified using our method
satisfy the following requirements: 1) the security test
addresses at least one BWCS; 2) it targets the proper system element and aims at
violating the right security property; 3) it has the proper level of sophistication
with respect to the expected threat agents; 4) it reflects the technology,
implementation, and configuration of the SUV; and 5) the test priority is high enough
with respect to the given time and budget.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>This work proposes a tool-supported method to bridge the gap between a risk
analysis and the corresponding security testing.</p>
      <p>The presented method requires security analysts to critically accompany
the involved steps and to adapt, possibly manually complement, and interpret the
intermediate results. However, our method can guarantee a certain quality
standard and, first and foremost, will make security testing more traceable for all
involved parties, including security testers, developers, and managers.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was partially supported by the EU Project SPaCIoS (FP7 257876,
Secure Provision and Consumption in the Internet of Services) and NESSoS (FP7
256890, Network of Excellence on Engineering Secure Future Internet Software).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. M. Abi-Antoun, D. Wang, and P. Torr. Checking threat modeling data flow diagrams for implementation conformance and security. In Proceedings of the 22nd IEEE/ACM International Conference on Automated Software Engineering, pages 393-396. ACM, 2007.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2. D. Dhillon. Developer-driven threat modeling: Lessons learned in the trenches. IEEE Security and Privacy, 9(4):41-47, July 2011.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3. DIAMONDS Consortium; F. Seehusen (editor). Initial methodologies for model-based security testing and risk-based security testing. Project Deliverable D3, WP4.T2 T3, http://www.itea2-diamonds.org/Publications/2012/index.html, Aug. 2012.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4. M. Dowd, J. McDonald, and J. Schuh. The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities. Addison-Wesley Professional, November 2006.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5. T. Gallagher, L. Landauer, and B. Jeffries. Hunting Security Bugs. Microsoft Press, June 2006.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. M. Howard, D. LeBlanc, and J. Viega. 24 Deadly Sins of Software Security: Programming Flaws and How to Fix Them. McGraw-Hill Osborne Media, September 2009.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. M. Howard and S. Lipner. The Security Development Lifecycle. Microsoft Press, 2006.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8. C. Wysopal, L. Nelson, D. Dai Zovi, and E. Dustin. The Art of Software Security Testing: Identifying Software Security Flaws. Addison-Wesley Professional, November 2006.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>