<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Manual vs. Automated Vulnerability Assessment: A Case Study</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">James</forename><forename type="middle">A</forename><surname>Kupsch</surname></persName>
							<email>kupsch@cs.wisc.edu</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Sciences Department</orgName>
								<orgName type="institution">University of Wisconsin</orgName>
								<address>
									<settlement>Madison</settlement>
									<region>WI</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Barton</forename><forename type="middle">P</forename><surname>Miller</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Computer Sciences Department</orgName>
								<orgName type="institution">University of Wisconsin</orgName>
								<address>
									<settlement>Madison</settlement>
									<region>WI</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Manual vs. Automated Vulnerability Assessment: A Case Study</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">CABA0A315B83E8DCE5023C49F21E9A4C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T23:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The dream of every software development team is to assess the security of their software using only a tool. In this paper, we attempt to evaluate and quantify the effectiveness of automated source code analysis tools by comparing such tools to the results of an in-depth manual evaluation of the same system. We present our manual vulnerability assessment methodology, and the results of applying this to a major piece of software. We then analyze the same software using two commercial products, Coverity Prevent and Fortify SCA, that perform static source code analysis. These tools found only a few of the fifteen serious vulnerabilities discovered in the manual assessment, with none of the problems found by these tools requiring a deep understanding of the code. Each tool reported thousands of defects that required human inspection, with only a small number being security related. And, of this small number of security-related defects, there did not appear to be any that indicated significant vulnerabilities beyond those found by the manual assessment.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>While careful design practices are necessary to the construction of secure systems, they are only part of the process of designing, building, and deploying such a system. To have high confidence in a system's security, a systematic assessment of its security is needed before deploying it. Such an assessment, performed by an entity independent of the development team, is a crucial part of development of any secure system. Just as no serious software project would consider skipping the step of having their software evaluated for correctness by an independent testing group, a serious approach to security requires independent assessment for vulnerabilities. At the present time, such an assessment is necessarily an expensive task as it involves a significant commitment of time from a security analyst. While using automated tools is an attractive approach to making this task less labor intensive, even the best of these tools appear limited in the kinds of vulnerabilities that they can identify. In this paper, we attempt to evaluate and quantify the effectiveness of automated source code vulnerability assessment tools <ref type="bibr" target="#b0">[1]</ref> by comparing such tools to the results of an in-depth manual evaluation of the same system.</p><p>We started with a detailed vulnerability assessment of a large, complex, and widely deployed distributed system called Condor <ref type="bibr" target="#b7">[11,</ref><ref type="bibr" target="#b10">15,</ref><ref type="bibr">2]</ref>. Condor is a system that allows the scheduling of complex tasks over local and widely distributed networks of computers that span multiple organizations. It handles scheduling, authentication, data staging, failure detection and recovery, and performance monitoring. 
The assessment methodology that we developed, called First Principles Vulnerability Assessment (FPVA), uses a top-down, resource-centric approach to assessment that attempts to identify the components of a system that are most at risk, and then to identify vulnerabilities that might be associated with them. The result of such an approach is to focus on the places in the code where high-value assets might be attacked (such as critical configuration files, parts of the code that run at high privilege, or security resources such as digital certificates). This approach shares many characteristics with techniques such as Microsoft's threat modeling <ref type="bibr" target="#b9">[14]</ref>, but with a key difference: we start from high-value assets and work outward to derive vulnerabilities, rather than starting with vulnerabilities and then seeing if they lead to a serious exploit.</p><p>In 2005 and 2006, we performed an analysis of Condor using FPVA, resulting in the discovery of fifteen major vulnerabilities. These vulnerabilities were all confirmed by developing sample exploit code that could trigger each one.</p><p>More recently, we made an informal survey of security practitioners in industry, government, and academia to identify the best automated tools for vulnerability assessment. Uniformly, the respondents identified two highly regarded commercial tools: Coverity Prevent <ref type="bibr" target="#b2">[5]</ref> and Fortify Source Code Analyzer (SCA) <ref type="bibr" target="#b4">[8]</ref> (while these companies have multiple products, in the remainder of this paper we will refer to Coverity Prevent and Fortify Source Code Analyzer as "Coverity" and "Fortify" respectively). 
We applied these tools to the same version of Condor as was used in the FPVA study to compare the ability of these tools to find serious vulnerabilities (having a low false negative rate), while not reporting a significant number of false vulnerabilities or vulnerabilities with limited exploit value (having a low false positive rate).</p><p>The most significant findings from our comparative study were:</p><p>1. Of the 15 serious vulnerabilities found in our FPVA study of Condor, Fortify found six and Coverity only one. 2. Both Fortify and Coverity had significant false positive rates, with Coverity having a lower false positive rate. The volume of these false positives was significant enough to have a serious impact on the effectiveness of the analyst. 3. In the Fortify and Coverity results, we found no significant vulnerabilities beyond those identified by our FPVA study. (This was not an exhaustive study, but did thoroughly cover the problems that the tools identified as most serious.)</p><p>To be fair, we did not expect the automated tools to find all the problems that could be found by an experienced analyst using a systematic methodology.</p><p>The goals of this study were (1) to try to identify the places where an automated analysis can simplify the assessment task, and (2) to start to characterize the kinds of problems not found by these tools so that we can develop more effective automated analysis techniques.</p><p>One could claim that the results of this study are not surprising, but there have been no studies that provide strong evidence of the strengths and weaknesses of software assessment tools. The contributions of this paper include:</p><p>1. showing clearly the limitations of current tools, 2. presenting manual vulnerability assessment as a required part of a comprehensive security audit, and 3. 
creating a reference set of vulnerabilities to perform apples-to-apples comparisons.</p><p>In the next section, we briefly describe our FPVA manual vulnerability assessment methodology, and then in Section 3, we describe the vulnerabilities that were found when we applied FPVA to the Condor system. Next, in Section 4, we describe the test environment in which the automated tools were run and how we applied Coverity and Fortify to Condor. Section 5 describes the results from this study along with a comparison of these results to our FPVA analysis. The paper concludes with comments on how the tools performed in this analysis.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">First Principles Vulnerability Assessment (FPVA)</head><p>This section briefly describes the methodology used to find most of the vulnerabilities used in this study. Most of the vulnerabilities in Condor were discovered using a manual vulnerability assessment methodology that we developed at the University of Wisconsin, called First Principles Vulnerability Assessment (FPVA). The assessment was done independently, but in cooperation with the Condor development team.</p><p>FPVA consists of four analyses, where each relies upon the prior steps to focus the work of the current step. The first three steps (architectural, resource, and trust and privilege analyses) are designed to assist the assessor in understanding the operation of the system under study. The final step, the component evaluation, is where the search for vulnerabilities occurs, using the prior analyses and code inspection. This search focuses on likely high-value resources and pathways through the system.</p><p>The architectural analysis is the first step of the methodology and is used to identify the major structural components of the system, including hosts, processes, external dependencies, threads, and major subsystems. For each of these components, we then identify their high-level function and the way in which they interact, both with each other and with users. Interactions are particularly important, as they provide a basis for understanding how information flows through the system and how trust is delegated through it. The artifact produced at this stage is a document that diagrams the structure of the system and the interactions.</p><p>The next step is the resource analysis. This step identifies the key resources accessed by each component, and the operations supported on these resources. Resources include things such as hosts, files, databases, logs, CPU cycles, storage, and devices. Resources are the targets of exploits. 
For each resource, we describe its value as an end target (such as a database with personnel or proprietary information) or as an intermediate target (such as a file that stores access-permissions). The artifact produced at this stage is an annotation of the architectural diagrams with resource descriptions.</p><p>The third step is the trust and privilege analysis. This step identifies the trust assumptions about each component, answering such questions as: how are they protected, and who can access them? For example, a code component running on a client's computer is completely open to modification, while a component running in a locked computer room has a higher degree of trust. Trust evaluation is also based on the hardware and software security surrounding the component. Associated with trust is a description of the privilege level at which each executable component runs. The privilege levels control the extent of access for each component and, in the case of exploitation, the extent of damage that it can directly accomplish. A complex but crucial part of trust and privilege analysis is evaluating trust delegation. By combining the information from steps 1 and 2, we determine what operations a component will execute on behalf of another component. The artifact produced at this stage is a further labeling of the basic diagrams with trust levels and a labeling of interactions with delegation information.</p><p>The fourth step is the component evaluation, where components are examined in depth. For large systems, a line-by-line manual examination of the code is infeasible, even for a well-funded effort. This step is guided by information obtained in steps 1-3, helping to prioritize the work so that high-value targets are evaluated first. The components on the communication chain from the point where user input enters the system to the components that can directly control a strategic resource are prioritized for assessment. 
There are two main classifications of vulnerabilities: design (or architectural) flaws, and implementation bugs <ref type="bibr" target="#b8">[12]</ref>. Design flaws are problems with the architecture of the system and often involve issues of trust, privilege, and data validation. The artifacts from steps 1-3 can reveal these types of problems or greatly narrow the search. Implementation bugs are localized coding errors that can be exploitable. Searching the critical components for these types of errors yields bugs that have a higher probability of exploit, as they are more likely to be in the chain of processing from user input to a critical resource. The artifacts also aid in determining whether user input can flow through the implementation bug to a critical resource and allow the resource to be exploited.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Results of the Manual Assessment</head><p>Fifteen vulnerabilities in the Condor project were discovered and documented in 2005 and 2006. Most of these were discovered through a systematic, manual vulnerability assessment using the FPVA methodology, with a couple of these vulnerabilities being reported by third parties. Table <ref type="table" target="#tab_0">1</ref> lists each vulnerability along with a brief description. A complete vulnerability report that includes full details is available from the Condor project <ref type="bibr" target="#b1">[4]</ref> for most of the vulnerabilities.</p><p>The types of problems discovered included a mix of implementation bugs and design flaws. The following vulnerabilities are caused by implementation bugs: CONDOR-2005-0003 and CONDOR-2006-000{1,2,3,4,8,9}. The remaining vulnerabilities are caused by design flaws. The vulnerability CONDOR-2006-0008 is unusual in that it only exists on certain older platforms that only provide an unsafe API to create a temporary file. This vulnerability is a command injection <ref type="bibr" target="#b3">[7]</ref> resulting from user-supplied data being used to form a string. This string is then interpreted by /bin/sh using a fork and execl("/bin/sh", "-c", command).</p><p>(Tool discoverable? Easy. A tool should consider network and file data as tainted and all of the parameters to execl as sensitive.)</p></div>
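A minimal C sketch of this flawed pattern follows; the function name, buffer sizes, and the /bin/cp command line are illustrative assumptions, not Condor's actual code.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Hypothetical sketch of the CONDOR-2006-0008 pattern: user-supplied
 * data is spliced into a command string later handed to /bin/sh.
 * All names and sizes here are illustrative. */

/* Unsafe: "filename" flows straight into a shell command line, so a
 * value such as "x; rm -rf /" injects an arbitrary command.  Returns
 * the length the formatted command needed. */
int make_command(char *cmd, size_t len, const char *filename) {
    return snprintf(cmd, len, "/bin/cp %s /tmp/stage", filename);
    /* ...later executed via fork() and
     * execl("/bin/sh", "sh", "-c", cmd, (char *)NULL); */
}

/* Safer: pass the untrusted value as a single argv element so no shell
 * ever parses it:
 *   execl("/bin/cp", "cp", filename, "/tmp/stage", (char *)NULL); */
```

Removing the shell from the execution chain entirely, rather than trying to sanitize the string, is the standard remedy for this class of injection.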
<div xmlns="http://www.tei-c.org/ns/1.0"><head>CONDOR-2005-0004</head><p>Fortify: no. Coverity: no. This vulnerability is caused by the insecure ownership of a file used to store persistent overridden configuration entries. These configuration entries can cause arbitrary executable files to be started as root.</p><p>(Tool discoverable? Difficult. A tool would have to track how these configuration settings flow into a complex data structure before use, both from files that have the correct ownership and permissions and potentially from some that do not.)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>CONDOR-2005-0005</head><p>Fortify: no. Coverity: no. This vulnerability is caused by the lack of an integrity <ref type="bibr" target="#b3">[7]</ref> check on checkpoints (a representation of a running process that can be restarted) that are stored on a checkpoint server. Without a way of ensuring the integrity of the checkpoint, the checkpoint file could be tampered with to run malicious code.</p><p>(Tool discoverable? Difficult. This is a high-level design flaw, namely that a particular server should not be trusted.)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Setup and Running of Study</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Experiment Setup</head><p>To perform the evaluation of the Fortify and Coverity tools, we used the same version of Condor, run in the same environment, as was used in our FPVA analysis. The versions of the source code, platform, and tools used in this test were as follows:</p><p>1. Condor 6.7.12 (a) with 13 small patches to allow compilation with a newer GNU compiler collection (gcc) <ref type="bibr" target="#b5">[9]</ref>;</p><p>(b) built as a clipped [3] version, i.e., no standard universe, Kerberos, or Quill, as these would not build without extensive work on the new platform and tool chain. 2. gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10) 3. Scientific Linux SL release 4.7 (Beryllium) [13] 4. Fortify SCA 5.1.0016 rule pack 2008.3.0.0007 5. Coverity Prevent 4.1.0</p><p>To get both tools to work required using a version of gcc that was newer than had been tested with Condor 6.7.12. This necessitated 13 minor patches to prevent gcc from stopping with an error. This new environment also prevented building Condor with standard universe support, Kerberos, and Quill. None of these changes affected the presence of the discovered vulnerabilities.</p><p>The tools were run using their default settings, except that Coverity was passed the flag --all to enable all the analysis checkers (Fortify enables all by default).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Tool Operation</head><p>Both tools operate in a similar three step fashion: gather build information, analyze, and present results. The build information consists of the files to compile, and how they are used to create libraries and executable files. Both tools make this easy to perform by providing a program that takes as arguments the normal command used to build the project. The information gathering tool monitors the build's commands to create object files, libraries and executables.</p><p>The second step performs the analysis. This step is also easily completed by running a program that takes the result of the prior step as an input. The types of checkers to run can also be specified. The general term defect will be used to describe the types of problems found by the tools as not all problems result in a vulnerability.</p><p>Finally, each tool provides a way to view the results. Coverity provides a web interface, while Fortify provides a stand-alone application. Both viewers allow the triage and management of the discovered defects. The user can change attributes of the defect (status, severity, assigned developer, etc.) and attach additional information. The status of previously discovered defects in earlier analysis runs is remembered, so the information does not need to be repeatedly entered.</p><p>Each tool has a collection of checkers that categorize the type of defects. The collection of checkers depends on the source language and the options used during the analysis run. Fortify additionally assigns each defect a severity level of Critical, Hot, Warning and Info. Coverity does not assign a severity, but allows one to be assigned by hand.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Tool Output Analysis</head><p>After both tools were run on the Condor source, the results from each tool were reviewed against the known vulnerabilities and were also sampled to look for vulnerabilities that were not found using the FPVA methodology.</p><p>The discovered vulnerabilities were all caused by code confined to one, or at most a few, lines or functions. Both tools provided interfaces that allowed browsing the found defects by file and line. If the tool reported a defect at the same location in the code and of the correct type, the tool was determined to have found the vulnerability.</p><p>The defects discovered by the tools were also sampled to determine if the tools discovered other vulnerabilities and to understand the qualities of the defects. The sampling was weighted to look more at defects found in higher-impact locations in the code and in the categories of defects that are more likely to impact security. We were unable to conduct an exhaustive review of the results due to time constraints and the large number of defects presented by the tools.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Results of the Automated Assessment</head><p>This section describes the analysis of the defects found by Coverity and Fortify. We first compare the results of the tools to the vulnerabilities found by FPVA. Next we empirically look at the false positive and false negative rates of the tools and the reasons behind these. Finally we offer some commentary on how the tools could be improved.</p><p>Fortify discovered all the vulnerabilities we expected it to find, those caused by implementation bugs, while Coverity only found a small subset. Each tool reported a large number of defects. Many of these are indications of potential correctness problems, but out of those inspected none appeared to be a significant vulnerability.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Tools Compared to FPVA Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1 presents each vulnerability along with an indication if Coverity or Fortify also discovered the vulnerability.</head><p>Out of the fifteen known vulnerabilities in the code, Fortify found six of them, while Coverity only discovered one of them. Vulnerability CONDOR-2006-0001 results from three nearly identical vulnerability instances in the code, and vulnerability CONDOR-2006-0002 results from six nearly identical instances. Fortify discovered all instances of these two vulnerabilities, while Coverity found none of them.</p><p>All the vulnerabilities discovered by both tools were due to Condor's use of functions that commonly result in security problems such as execl, popen, system and strcpy. Some of the defects were traced to untrusted inputs being used in these functions. The others were flagged solely due to the dangerous nature of these functions. These vulnerabilities were simple implementation bugs that could have been found by using simple scripts based on tools such as grep to search for the use of these functions.</p></div>
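To illustrate, here is a hypothetical sketch (not Condor code) of the kind of call site both tools flagged, which a plain textual search would also locate.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Illustrative sketch, not Condor code: network data flowing into
 * strcpy and then toward a shell command.  Both tools flag these call
 * sites; so would a trivial scan such as
 *   grep -nE 'strcpy|strcat|execl|popen|system' *.c
 * Returns the length of the formatted command. */
int build_cmd(char *cmd, size_t len, const char *net_input) {
    char path[64];
    strcpy(path, net_input);   /* flagged: unbounded copy of tainted data */
    return snprintf(cmd, len, "ls %s", path);
    /* system(cmd);  -- would be flagged: tainted data reaching the shell */
}
```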
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Tool Discovered Defects</head><p>Table <ref type="table" target="#tab_3">2</ref> reports the defects that we found when using Fortify, dividing the defects into categories with a count of how often each defect category occurred. Table <ref type="table" target="#tab_4">3</ref> reports the defects found when using Coverity. The types of checkers that each tool reports are not directly comparable, so no attempt was made to compare them. Fortify found a total of 15,466 defects, while Coverity found a total of 2,686. The difference in these numbers can be attributed to several factors: 1. differences in the analysis engine in each product; 2. Coverity creates one defect for each sink (the place in the code where bad data is used in a way that causes the defect) and displays one example source-to-sink path, while Fortify creates one defect for each source/sink pair; and 3. Coverity seems to focus on reducing the number of false positives at the risk of missing true positives, while Fortify is more aggressive in reporting potential problems, resulting in more false positives.</p><p>From a security point of view, the sampled defects can be categorized in order of decreasing importance as follows:</p><p>1. Security Issues. These problems are exploitable. Other than the vulnerabilities also discovered in the FPVA (the use of tainted data in risky functions), the only security problems discovered were of a less severe nature. They included denial of service issues due to the dereference of null pointers, and resource leaks. 2. Correctness Issues. These defects are those where the code will malfunction, but the security of the application is not affected. These are caused by problems such as (1) a buffer overflow of a small number of bytes that may cause incorrect behavior, but does not allow execution of arbitrary code or other security problems, (2) the use of uninitialized variables, or (3) the failure to check the status of certain functions. 3. 
Code Quality Issues. Not all of the defects found, such as Coverity's parse warnings (those starting with PW), dead code, and unused variables, are directly security related, but they are a sign of code quality and can result in security problems under the right circumstances.</p><p>Due to the general fragility of code, small changes can easily move a defect from one category to another, so correcting the non-security defects could prevent future vulnerabilities.</p></div>
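As a concrete illustration of a category-2 correctness issue, consider this hypothetical fragment (not from Condor): an uninitialized accumulator that breaks correctness without offering an obvious exploit.

```c
#include <stdio.h>
#include <assert.h>

/* Hypothetical category-2 defect: "total" is used before being
 * initialized, so the sum starts from garbage.  A tool reports this as
 * a defect; it breaks correctness but is not directly exploitable. */
int sum_ints_risky(FILE *f) {
    int total;                     /* defect: never initialized */
    int v;
    while (fscanf(f, "%d", &v) == 1)
        total += v;
    return total;
}

/* The one-line fix that clears the report. */
int sum_ints_fixed(FILE *f) {
    int total = 0;
    int v;
    while (fscanf(f, "%d", &v) == 1)
        total += v;
    return total;
}
```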
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">False Positives</head><p>False positives are defects that the tool reports but that are not actually defects. Many of these reported defects are items that should be repaired, as they are often caused by poor programming practices that can easily develop into a true defect during modifications to the code. Given the finite resources in any assessment activity, these types of defects are rarely fixed. Ideally, a tool such as Fortify or Coverity is run regularly during the development cycle, allowing the programmers to fix such defects as they appear (resulting in a lower false positive rate). In reality, these tools are usually applied late in the lifetime of a software system. Some of the main causes of false positives found in this study are the following:</p><p>1. Non-existent code paths due to functions that never return, such as an exit or exec type function. Once such a branch is taken, the program is guaranteed never to execute any subsequent code, but the tool incorrectly infers that execution can continue past this location. 2. Correlated variables, where the value of one variable restricts the set of values the other can take. This occurs when a function returns two values, or two fields of a structure. For instance, a function could return two values, one a pointer and the other a boolean indicating that the pointer is valid; if the boolean is checked before the pointer is dereferenced, the code is correct, but if the tool does not track the correlation, it appears that a null pointer dereference could occur. 3. The value of a variable is restricted to a subset of the possible values, but this restriction is not deduced by the tool. For instance, if a function can return only two possible errors, and a switch statement handles exactly these two errors, the code is correct, but a defect is reported because not all possible errors are handled. 4. 
Conditions outside of the function prevent a vulnerability. This occurs when the tool does not deduce that: (a) Data read from certain files or network connections should be trusted due to file permissions or prior authentication. (b) The environment is secure due to a trusted parent process securely setting the environment. (c) A variable is constrained to safe values, but this constraint is hard to deduce.</p><p>The false positives tend to cluster in certain checkers (and severity levels in Fortify). Some checkers naturally have lower reliability than others. The other cause of clustering is developers repeating the same idiom throughout the code. For instance, almost all of the 330 UNINIT defects that Coverity reports are false positives due to a recurring idiom.</p><p>Many of these false positive defects are time bombs waiting for a future developer to unwittingly make a change elsewhere in the code that turns the reported defect into a real one. A common example of this is a string buffer overflow, where the values placed in the buffer are currently too small in aggregate to overflow the buffer, but if one of these values is made bigger or unlimited in the future, the program then has a real defect.</p><p>Many of the false positives can be prevented by switching to a safer programming idiom; making such a change should take less time than determining whether the reported defect is actually true or false. The uses of sprintf, strcat, and strcpy are prime examples of this.</p></div>
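The idiom switch can be as small as replacing an unbounded sprintf with a bounded snprintf plus a fit check; the function names and sizes below are illustrative, not Condor's code.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Sketch of trading an unbounded call for a bounded one.  The risky
 * version is a time bomb: safe only while the inputs stay short. */
void format_entry_risky(char *out, const char *user, const char *host) {
    sprintf(out, "%s@%s", user, host);   /* overflows if inputs grow */
}

/* Bounded version: returns 1 iff the whole string fit in "out". */
int format_entry_safe(char *out, size_t len, const char *user, const char *host) {
    int n = snprintf(out, len, "%s@%s", user, host);
    return n >= 0 && (size_t)n < len;
}
```

The bounded form both removes the tool warning and converts a silent future overflow into a checkable truncation result.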
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.4">False Negatives</head><p>False negatives are defects in the code that the tool did not report. These defects include the following:</p><p>1. Defects that are high-level design flaws. These are the most difficult defects for a tool to detect, as the tool would have to understand design requirements not present in the code. 2. The dangerous code is not compiled on this platform. The tools only analyze the source code seen when the build information gathering step is run. The tools ignore files that were not compiled and parts of files that were conditionally excluded. A human inspecting the code can easily spot problems that occur in different build configurations. 3. Tainted data becomes untainted. The five vulnerabilities that Fortify found but Coverity did not were caused by Coverity reporting an issue with functions such as execl, popen, and system only if the data is marked as tainted.</p><p>The tainted property of strings propagates only through certain functions, such as strcpy or strcat. For instance, if a substring is copied byte by byte, Coverity does not consider the destination string as tainted. 4. Data flows through a pointer to a heap data structure that the tool cannot track.</p><p>Some of these are defects that a tool will never find, while others will hopefully be found by tools in the future as the quality of their analysis improves.</p></div>
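The taint-laundering case (item 3 above) can be sketched as follows; the helper function and its use are invented for illustration, not taken from Condor.

```c
#include <string.h>
#include <assert.h>

/* Sketch of a byte-by-byte copy.  Tools that propagate taint only
 * through functions such as strcpy or strcat may treat "dst" as clean
 * afterwards, so a later use of it in system() goes unreported even
 * though it carries the same injection risk. */
size_t byte_copy(char *dst, const char *src) {
    size_t i = 0;
    while ((dst[i] = src[i]) != '\0')
        i++;
    return i;                      /* bytes copied, excluding the NUL */
}

void run_request(const char *net_input) {   /* net_input: tainted */
    char a[128], b[128];
    strcpy(a, net_input);          /* taint propagates: uses of "a" reported */
    byte_copy(b, a);               /* taint lost by some analyses */
    /* system(b);  -- same risk as system(a), but may not be flagged */
}
```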
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.5">Improving the Tools' Results</head><p>Both tools allow the analyst to provide more information to the tool to increase the tool's accuracy. This information is provided either by placing annotations in the source code or by importing a simple description of the additional properties into the tool's analysis model.</p><p>A simple addition could be made to Coverity's model to flag all uses of certain system calls as unsafe. This would report all the discovered vulnerabilities that Fortify found, along with all the false positives for these types of defects.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion</head><p>This study demonstrates the need for manual vulnerability assessment performed by a skilled human, as the tools did not have a deep enough understanding of the system to discover all of the known vulnerabilities.</p><p>There were nine vulnerabilities that neither tool discovered. In our analysis of these vulnerabilities, we did not expect a tool to find them, as they are caused by design flaws or were not present in the compiled code.</p><p>Of the remaining six vulnerabilities, Fortify found them all, while Coverity found a subset and should be able to find the others through the addition of a small model.</p><p>We expected a tool, even a simple one, to be able to discover these vulnerabilities, as they were simple implementation bugs.</p><p>The tools are not perfect, but they do provide value beyond what a human alone can achieve for certain implementation bugs and defects such as resource leaks. They still require a skilled operator to determine the correctness of the results, how to fix the problems, and how to make the tool work better.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>D. Chadwick, I. You and H. Chang (Eds.): Proceedings of the 1st International Workshop on Managing Insider Security Threats (MIST2009), Purdue University, West Lafayette, USA, June 16, 2009. *Copyright is held by the author(s)*</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Analysis environment (list partially recovered): … gcc 3.4.6 20060404 (Red Hat 3.4.6-10); 3. Scientific Linux SL release 4.7 (Beryllium) [13]; 4. Fortify SCA 5.1.0016 rule pack 2008.3.0.0007; 5. Coverity Prevent 4.1.0</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>[Chart of Fortify defect categories; recoverable labels include BufferOverflow:Off-by-One, FormatString:ArgumentTypeMismatch, MemoryLeak:Reallocation, OftenMisused:Authentication, OftenMisused:FileSystem, OftenMisused:PrivilegeManagement, TypeMismatch:SignedToUnsigned, UnreleasedResource:Synchronization.]</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Summary of Condor vulnerabilities discovered in 2005 and 2006 and whether Fortify or Coverity discovered the vulnerability.</figDesc><table><row><cell>Vuln. Id</cell><cell cols="3">Fortify Coverity Vulnerability Description</cell><cell cols="2">Tool Discoverable?</cell></row><row><cell>CONDOR-</cell><cell>no</cell><cell>no</cell><cell>A path is formed by concatenating</cell><cell cols="2">Difficult. Would have</cell></row><row><cell>2005-0001</cell><cell></cell><cell></cell><cell>three pieces of user supplied data</cell><cell cols="2">to know path was</cell></row><row><cell></cell><cell></cell><cell></cell><cell>with a base directory path to form</cell><cell cols="2">formed from untrusted</cell></row><row><cell></cell><cell></cell><cell></cell><cell>a path to create, retrieve or re-</cell><cell cols="2">data, not validated</cell></row><row><cell></cell><cell></cell><cell></cell><cell>move a file. This data is used as is</cell><cell>properly, and</cell><cell>that</cell></row><row><cell></cell><cell></cell><cell></cell><cell>from the client which allows a direc-</cell><cell cols="2">a directory traversal</cell></row><row><cell></cell><cell></cell><cell></cell><cell>tory traversal [7] to manipulate arbi-</cell><cell cols="2">could occur. Could</cell></row><row><cell></cell><cell></cell><cell></cell><cell>trary file locations.</cell><cell cols="2">warn about untrusted</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="2">data used in a path.</cell></row><row><cell>CONDOR-</cell><cell>no</cell><cell>no</cell><cell>This vulnerability is a lack of au-</cell><cell cols="2">Difficult. Would have</cell></row><row><cell>2005-0002</cell><cell></cell><cell></cell><cell>thentication and authorization. 
This</cell><cell cols="2">to know that there</cell></row><row><cell></cell><cell></cell><cell></cell><cell>allows impersonators to manipulate</cell><cell cols="2">should be an authen-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>checkpoint files owned by others.</cell><cell cols="2">tication and authoriza-</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="2">tion mechanism, which</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell>is missing.</cell></row><row><cell>CONDOR-</cell><cell>yes</cell><cell>no</cell><cell></cell><cell></cell></row><row><cell>2005-0003</cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 -</head><label>1</label><figDesc>Continued.    </figDesc><table><row><cell>Vuln. Id</cell><cell cols="3">Fortify Coverity Vulnerability Description</cell><cell>Tool Discoverable?</cell></row><row><cell>CONDOR-</cell><cell>no</cell><cell>no</cell><cell>Internally the Condor system will not</cell><cell>Difficult. Tool would</cell></row><row><cell>2005-0006</cell><cell></cell><cell></cell><cell>run user's jobs with the user id of</cell><cell>have to know which ac-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>the root account. There are other ac-</cell><cell>counts should be al-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>counts on machines which should also</cell><cell>lowed to be used for</cell></row><row><cell></cell><cell></cell><cell></cell><cell>be restricted, but there are no mech-</cell><cell>what purposes.</cell></row><row><cell></cell><cell></cell><cell></cell><cell>anisms to support this.</cell><cell></cell></row><row><cell>CONDOR-</cell><cell>yes</cell><cell>no</cell><cell>The stork subcomponent of Condor</cell><cell>Easy. The string used</cell></row><row><cell>2006-0001</cell><cell></cell><cell></cell><cell>takes a URI for a source and destina-</cell><cell>as the parameter to</cell></row><row><cell></cell><cell></cell><cell></cell><cell>tion to move a file. If the destination</cell><cell>system comes fairly di-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>file is local and the directory does</cell><cell>rectly from an un-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>not exist the code uses the system</cell><cell>trusted argv value.</cell></row><row><cell></cell><cell></cell><cell></cell><cell>function to create it without properly</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>quoting the path. 
This allows a com-</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>mand injection to execute arbitrary</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>commands. There are 3 instances of</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>this vulnerability.</cell><cell></cell></row><row><cell>CONDOR-</cell><cell>yes</cell><cell>no</cell><cell>The stork subcomponent of Condor</cell><cell>Easy. The string used</cell></row><row><cell>2006-0002</cell><cell></cell><cell></cell><cell>takes a URI for a source and desti-</cell><cell>as the parameter to</cell></row><row><cell></cell><cell></cell><cell></cell><cell>nation to move a file. Certain com-</cell><cell>popen comes from a</cell></row><row><cell></cell><cell></cell><cell></cell><cell>binations of schemes of the source</cell><cell>substring of an un-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>and destination URIs cause stork to</cell><cell>trusted argv value.</cell></row><row><cell></cell><cell></cell><cell></cell><cell>call helper applications using a string</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>created with the URIs, and without</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>properly quoting them. This string is</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>then passed to popen, which allows</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>a command injection to execute ar-</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>bitrary commands. There are 6 in-</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>stances of this vulnerability.</cell><cell></cell></row><row><cell>CONDOR-</cell><cell>yes</cell><cell>no</cell><cell>Condor class ads allow functions. A</cell><cell>Easy. 
A call to popen</cell></row><row><cell>2006-0003</cell><cell></cell><cell></cell><cell>function that can be enabled, ex-</cell><cell>uses data from an un-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>ecutes an external program whose</cell><cell>trusted source such as</cell></row><row><cell></cell><cell></cell><cell></cell><cell>name and arguments are specified by</cell><cell>the network or a file.</cell></row><row><cell></cell><cell></cell><cell></cell><cell>the user. The output of the program</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>becomes the result of the function.</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>The implementation of the function</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>uses popen without properly quoting</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>the user supplied data.</cell><cell></cell></row><row><cell>CONDOR-</cell><cell>yes</cell><cell>no</cell><cell>Condor class ads allow functions. A</cell><cell>Easy. A call to popen</cell></row><row><cell>2006-0004</cell><cell></cell><cell></cell><cell>function that can be enabled, ex-</cell><cell>uses data from an un-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>ecutes an external program whose</cell><cell>trusted source such as</cell></row><row><cell></cell><cell></cell><cell></cell><cell>name and arguments are specified by</cell><cell>the network or a file. It</cell></row><row><cell></cell><cell></cell><cell></cell><cell>the user. The path of the program to</cell><cell>would be difficult for a</cell></row><row><cell></cell><cell></cell><cell></cell><cell>run is created by concatenating the</cell><cell>tool to determine if an</cell></row><row><cell></cell><cell></cell><cell></cell><cell>script directory path with the name</cell><cell>actual path traversal is</cell></row><row><cell></cell><cell></cell><cell></cell><cell>of the script. 
Nothing in the code</cell><cell>possible.</cell></row><row><cell></cell><cell></cell><cell></cell><cell>checks that the script name cannot</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>contain characters that allow for a</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>directory traversal.</cell><cell></cell></row><row><cell>CONDOR-</cell><cell>no</cell><cell>no</cell><cell>This vulnerability involves user sup-</cell><cell>Difficult. Would have</cell></row><row><cell>2006-0005</cell><cell></cell><cell></cell><cell>plied data being written as records</cell><cell>to deduce the format</cell></row><row><cell></cell><cell></cell><cell></cell><cell>to a file with the file later reread and</cell><cell>of the file and that the</cell></row><row><cell></cell><cell></cell><cell></cell><cell>parsed into records. Records are de-</cell><cell>injection was not pre-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>limited by a new line, but the code</cell><cell>vented.</cell></row><row><cell></cell><cell></cell><cell></cell><cell>does not escape new lines or prevent</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>them in the user supplied data. This</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>allows additional records to be in-</cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>jected into the file.</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1 -</head><label>1</label><figDesc>Continued.    </figDesc><table><row><cell>Vuln. Id</cell><cell cols="3">Fortify Coverity Vulnerability Description</cell><cell cols="3">Tool Discoverable?</cell></row><row><cell>CONDOR-</cell><cell>no</cell><cell>no</cell><cell>This vulnerability involves an au-</cell><cell cols="4">Difficult. Would re-</cell></row><row><cell>2006-0006</cell><cell></cell><cell></cell><cell>thentication mechanism that as-</cell><cell cols="4">quire the tool to</cell></row><row><cell></cell><cell></cell><cell></cell><cell>sumes a file with a particular name</cell><cell cols="4">understand why the</cell></row><row><cell></cell><cell></cell><cell></cell><cell>and owner can be created only by the</cell><cell cols="4">existence and proper-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>owner or the root user. This is not</cell><cell cols="4">ties are being checked</cell></row><row><cell></cell><cell></cell><cell></cell><cell>true as any user can create a hard</cell><cell cols="4">and that they can be</cell></row><row><cell></cell><cell></cell><cell></cell><cell>link, in a directory they write, to any</cell><cell cols="4">attacked in certain</cell></row><row><cell></cell><cell></cell><cell></cell><cell>file and the file will have the permis-</cell><cell cols="3">circumstances.</cell></row><row><cell></cell><cell></cell><cell></cell><cell>sions and owner of the linked file, in-</cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>validating this assumption.[10]</cell><cell></cell><cell></cell><cell></cell></row><row><cell>CONDOR-</cell><cell>no</cell><cell>no</cell><cell>This vulnerability is due to a vulner-</cell><cell cols="2">Difficult.</cell><cell>The</cell><cell>tool</cell></row><row><cell>2006-0007</cell><cell></cell><cell></cell><cell>ability in OpenSSL [6] and requires a</cell><cell cols="4">would have 
to have</cell></row><row><cell></cell><cell></cell><cell></cell><cell>newer version of the library to miti-</cell><cell cols="4">a list of vulnerable</cell></row><row><cell></cell><cell></cell><cell></cell><cell>gate.</cell><cell cols="4">library versions. It</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="4">would also be difficult</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="4">to discover if the tool</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="4">were run on the library</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="4">code as the defect is</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="3">algorithmic.</cell></row><row><cell>CONDOR-</cell><cell>no</cell><cell>no</cell><cell>This vulnerability is caused by using</cell><cell cols="4">Hard. The unsafe func-</cell></row><row><cell>2006-0008</cell><cell></cell><cell></cell><cell>a combination of the functions tmpnam</cell><cell cols="4">tion is only used (com-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>and open to try and create a new file.</cell><cell cols="4">piled) on a small num-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>This allows an attacker to use a clas-</cell><cell cols="4">ber of platforms. This</cell></row><row><cell></cell><cell></cell><cell></cell><cell>sic time of check, time of use (TOC-</cell><cell cols="4">would be easy for a</cell></row><row><cell></cell><cell></cell><cell></cell><cell>TOU) [7] attack against the program</cell><cell cols="4">tool to detect if the</cell></row><row><cell></cell><cell></cell><cell></cell><cell>to trick the program into opening an</cell><cell cols="4">unsafe version is com-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>existing file. On platforms that have</cell><cell cols="4">piled. 
Since the safe</cell></row><row><cell></cell><cell></cell><cell></cell><cell>the function mkstemp, it is safely used</cell><cell cols="4">function mkstemp ex-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>instead.</cell><cell cols="4">isted on the system,</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="4">the unsafe version was</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="4">not seen by the tools.</cell></row><row><cell>CONDOR-</cell><cell>yes</cell><cell>yes</cell><cell>This vulnerability is caused by user</cell><cell cols="4">Easy. No bounds check</cell></row><row><cell>2006-0009</cell><cell></cell><cell></cell><cell>supplied values being placed in a</cell><cell cols="4">is performed when</cell></row><row><cell></cell><cell></cell><cell></cell><cell>fixed sized buffer that lack bounds</cell><cell cols="4">writing to a fixed</cell></row><row><cell></cell><cell></cell><cell></cell><cell>checks. The user can then cause a</cell><cell>sized</cell><cell cols="3">buffer (using</cell></row><row><cell></cell><cell></cell><cell></cell><cell>buffer overflow [16] that can result in</cell><cell cols="4">the dangerous func-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>a crash or stack smashing attack.</cell><cell cols="4">tion strcpy) and the</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="4">data comes from an</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="3">untrusted source.</cell></row><row><cell>Total</cell><cell>6</cell><cell>1</cell><cell>out of 15 total vulnerabilities</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2 .</head><label>2</label><figDesc>Defect counts reported by Fortify by type and severity level.</figDesc><table><row><cell>Vuln Type</cell><cell>Total</cell><cell>Critical</cell><cell>Hot</cell><cell>Warning</cell><cell>Info</cell></row><row><cell>Buffer Overflow</cell><cell>2903</cell><cell>0</cell><cell>1151</cell><cell>391</cell><cell>1361</cell></row><row><cell>Buffer Overflow: Format String</cell><cell>1460</cell><cell>0</cell><cell>995</cell><cell>465</cell><cell>0</cell></row><row><cell>Buffer Overflow: Format String (%f/%F)</cell><cell>75</cell><cell>0</cell><cell>42</cell><cell>33</cell><cell>0</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 3 .</head><label>3</label><figDesc>Defect counts reported by Coverity by type.</figDesc><table><row><cell>Total Vulnerability Type</cell><cell>Total Vulnerability Type</cell></row><row><cell>2 ARRAY VS SINGLETON</cell><cell>38 REVERSE INULL</cell></row><row><cell>1 ATOMICITY</cell><cell>0 REVERSE NEGATIVE</cell></row><row><cell cols="2">0 BAD ALLOC ARITHMETIC 842 SECURE CODING</cell></row><row><cell>0 BAD ALLOC STRLEN</cell><cell>4 SECURE TEMP</cell></row><row><cell>0 BAD COMPARE</cell><cell>2 SIZECHECK</cell></row><row><cell>0 BAD FREE</cell><cell>0 SLEEP</cell></row><row><cell>1 BAD OVERRIDE</cell><cell>378 STACK USE</cell></row><row><cell>1 BUFFER SIZE</cell><cell>1 STREAM FORMAT STATE</cell></row><row><cell>32 BUFFER SIZE WARNING</cell><cell>2 STRING NULL</cell></row><row><cell>5 CHAR IO</cell><cell>147 STRING OVERFLOW</cell></row><row><cell>82 CHECKED RETURN</cell><cell>10 STRING SIZE</cell></row><row><cell>0 CHROOT</cell><cell>6 TAINTED SCALAR</cell></row><row><cell>2 CTOR DTOR LEAK</cell><cell>43 TAINTED STRING</cell></row><row><cell>29 DEADCODE</cell><cell>26 TOCTOU</cell></row><row><cell>5 DELETE ARRAY</cell><cell>0 UNCAUGHT EXCEPT</cell></row><row><cell>0 DELETE VOID</cell><cell>330 UNINIT</cell></row><row><cell>0 EVALUATION ORDER</cell><cell>96 UNINIT CTOR</cell></row><row><cell>40 FORWARD NULL</cell><cell>9 UNREACHABLE</cell></row><row><cell>2 INFINITE LOOP</cell><cell>31 UNUSED VALUE</cell></row><row><cell>0 INTEGER OVERFLOW</cell><cell>12 USE AFTER FREE</cell></row><row><cell>0 INVALIDATE ITERATOR</cell><cell>5 VARARGS</cell></row><row><cell>0 LOCK</cell><cell>0 WRAPPER ESCAPE</cell></row><row><cell>0 LOCK FINDER</cell><cell>1 PW.BAD MACRO REDEF</cell></row><row><cell>3 MISSING LOCK</cell><cell>5 PW.BAD PRINTF FORMAT STRING</cell></row><row><cell>17 MISSING RETURN</cell><cell>56 PW.IMPLICIT FUNC DECL</cell></row><row><cell>17 NEGATIVE RETURNS</cell><cell>1 PW.IMPLICIT 
INT ON MAIN</cell></row><row><cell>18 NO EFFECT</cell><cell>18 PW.INCLUDE RECURSION</cell></row><row><cell>32 NULL RETURNS</cell><cell>20 PW.MISSING TYPE SPECIFIER</cell></row><row><cell>4 OPEN ARGS</cell><cell>46 PW.NON CONST PRINTF FORMAT STRING</cell></row><row><cell>4 ORDER REVERSAL</cell><cell>2 PW.PARAMETER HIDDEN</cell></row><row><cell>3 OVERRUN DYNAMIC</cell><cell>20 PW.PRINTF ARG MISMATCH</cell></row><row><cell>30 OVERRUN STATIC</cell><cell>10 PW.QUALIFIER IN MEMBER DECLARATION</cell></row><row><cell>3 PASS BY VALUE</cell><cell>2 PW.TOO FEW PRINTF ARGS</cell></row><row><cell>1 READLINK</cell><cell>7 PW.TOO MANY PRINTF ARGS</cell></row><row><cell>150 RESOURCE LEAK</cell><cell>11 PW.UNRECOGNIZED CHAR ESCAPE</cell></row><row><cell>0 RETURN LOCAL</cell><cell>21 PW.USELESS TYPE QUALIFIER ON-</cell></row><row><cell></cell><cell>RETURN TYPE</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Acknowledgments</head><p>This research was funded in part by National Science Foundation grants OCI-0844219, CNS-0627501, and CNS-0716460.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Secure Programming with Static Analysis</title>
		<author>
			<persName><forename type="first">Brian</forename><surname>Chess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jacob</forename><surname>West</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>Addison-Wesley</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<ptr target="http://www.cs.wisc.edu/condor/security/vulnerabilities" />
		<title level="m">Condor Vulnerability Reports</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="http://www.coverity.com" />
		<title level="m">Prevent</title>
				<imprint/>
		<respStmt>
			<orgName>Coverity Inc.</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities</title>
		<author>
			<persName><forename type="first">Mark</forename><surname>Dowd</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>McDonald</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Justin</forename><surname>Schuh</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>Addison-Wesley</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<ptr target="http://www.fortify.com" />
		<title level="m">Source Code Analyzer (SCA)</title>
				<imprint/>
		<respStmt>
			<orgName>Fortify Software Inc.</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<ptr target="http://gcc.gnu.org" />
		<title level="m">GNU Compiler Collection</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">How to Open a File and Not Get Hacked</title>
		<author>
			<persName><forename type="first">James</forename><forename type="middle">A</forename><surname>Kupsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Barton</forename><forename type="middle">P</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2008 Third International Conference on Availability, Reliability and Security</title>
				<meeting>the 2008 Third International Conference on Availability, Reliability and Security</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="1196" to="1203" />
		</imprint>
	</monogr>
	<note>ARES &apos;08</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Condor -A Hunter of Idle Workstations</title>
		<author>
			<persName><forename type="first">Michael</forename><surname>Litzkow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Miron</forename><surname>Livny</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matthew</forename><surname>Mutka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 8th Intl Conf. on Distributed Computing Systems</title>
				<meeting>8th Intl Conf. on Distributed Computing Systems</meeting>
		<imprint>
			<date type="published" when="1988-06">June 1988</date>
			<biblScope unit="page" from="104" to="111" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Software Security</title>
		<author>
			<persName><forename type="first">Gary</forename><surname>McGraw</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2006">2006</date>
			<publisher>Addison-Wesley</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Threat Modeling</title>
		<author>
			<persName><forename type="first">Frank</forename><surname>Swiderski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Window</forename><surname>Snyder</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Microsoft Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Distributed computing in practice: the condor experience</title>
		<author>
			<persName><forename type="first">Douglas</forename><surname>Thain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Todd</forename><surname>Tannenbaum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Miron</forename><surname>Livny</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Concurrency -Practice and Experience</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">2-4</biblScope>
			<biblScope unit="page" from="323" to="356" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Building Secure Software</title>
		<author>
			<persName><forename type="first">John</forename><surname>Viega</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gary</forename><surname>McGraw</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002</date>
			<publisher>Addison-Wesley</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
