<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Accumulating Evidence of Insider Attacks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Howard</forename><surname>Chivers</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Information Systems</orgName>
								<orgName type="institution">Cranfield University</orgName>
								<address>
									<settlement>Shrivenham</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Philip</forename><surname>Nobles</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Information Systems</orgName>
								<orgName type="institution">Cranfield University</orgName>
								<address>
									<settlement>Shrivenham</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Siraj</forename><forename type="middle">A</forename><surname>Shaikh</surname></persName>
							<email>s.shaikh@cranfield.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Information Systems</orgName>
								<orgName type="institution">Cranfield University</orgName>
								<address>
									<settlement>Shrivenham</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">John</forename><forename type="middle">A</forename><surname>Clark</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of York</orgName>
								<address>
									<settlement>York</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Hao</forename><surname>Chen</surname></persName>
							<email>chenhao@cs.york.ac.uk</email>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of York</orgName>
								<address>
									<settlement>York</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Accumulating Evidence of Insider Attacks</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">C7658C01E6A9AE89CD9D743D2FDFCB3B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T23:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Insider attacks are often subtle and slow, posing the problem of integrating a large volume of event data from multiple sources over a long period. This paper proposes a scalable solution to combining evidence from multiple sources: rather than retaining event data for post-facto analysis, we maintain, for each node in the system, a long-term estimate of the probability that the node is subverted. These estimates are then used as triggers for more detailed investigation. We identify essential attributes of event data, allowing the use of a wide range of sensors, and show how to apply Bayesian statistics to maintain incremental node estimates without global updating or normalization. The paper provides a theoretical account of the process, a worked example, and a discussion of its practical implications.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><note place="foot">D. Chadwick, I. You and H. Chang (Eds.): Proceedings of the 1st International Workshop on Managing Insider Security Threats (MIST2009), Purdue University, West Lafayette, USA, June 16, 2009. Copyright is held by the author(s).</note><p>Insider attacks pose a particular threat because of the knowledge, access, and authority of their perpetrators <ref type="bibr" target="#b11">[12]</ref>. Such attacks often involve violations of physical or operational security, or the misuse of authority; they may also involve electronic attacks, in which case the 'electronic insider' is as big a threat as a person. It may be safer for a sophisticated external attacker to subvert an electronic system, often via social engineering, than to subvert an employee directly. Such attackers may use technical means to camouflage an attack, such as indirection or address spoofing <ref type="bibr" target="#b0">[1]</ref>; however, their most potent weapon in avoiding detection is patience - the world's largest credit card fraud was achieved with a subverted internal system that avoided discovery for over 17 months <ref type="bibr" target="#b8">[9]</ref>.</p><p>Subtle attackers are unlikely to launch large-scale scans, or use known exploits; they will seek to avoid any action that can be immediately identified as an attack. However, they are likely to cause minor security events: an attacker may test known passwords, probe for services, or test new exploits, expecting to hide within the background of user errors, mistakes and other 'noise'. The problem of detecting such an attacker is therefore one of accumulating relatively weak evidence over a long period. This issue is one of the 'grand challenges' of the internal attacker problem: "to combine events from one or more sensors, possibly of various types" while "reduc[ing] data without adversely impacting detection" <ref type="bibr" target="#b2">[3]</ref>. 
This paper provides a solution to this critical problem.</p><p>The work presented here is couched in terms of networks and systems, and the identification of a subverted node, which is part of a system that is used by a corrupt insider, or is acting as an electronic insider for some other party. However, the approach to characterizing and combining diverse sources of weak evidence is equally applicable to other problems in the insider space, such as identifying criminal or espionage threats from behavioral indicators.</p><p>This paper provides a process for combining evidence from various sources based on the application of Bayesian statistics, identifies attributes that must be available to allow the combination of evidence from different types of sensor, and demonstrates the approach with a simulated slow attack on a network.</p><p>The paper is organized as follows: Section 2 is an overview of the proposed approach, section 3 describes related work, and the evidential accumulation process is developed in section 4. Section 5 presents an example to show that this process is well behaved in simple cases, while section 6 simulates a challenging insider detection problem, and contrasts the evidence accumulation process with a common, but naive, alternative approach. Section 7 discusses results and open issues, and the paper is concluded in section 8.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Overview</head><p>Consider how a human investigator might approach the problem of accumulating evidence in the network of Figure <ref type="figure" target="#fig_0">1</ref>. The network consists of nodes (A...J) with interconnectivity as shown. Two minor security events, E1 and E2, are detected; they may originate from an operating system alert, an intrusion detection system, or another form of event detection (see section 3). Given information about event E1 and the traffic in the network at the time, the investigator may determine that the nodes most likely to have originated the event are J, B, A, C or D. Similarly, when E2 occurs, at a much later date, the possible originating nodes are D, F and H. Intersecting these observations suggests node D as a common factor, and this may be sufficient to trigger intensive monitoring to determine if it is behaving maliciously.</p></div>
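The investigator's reasoning amounts to intersecting candidate sets. A toy sketch in Python (node names as in the example above; the candidate sets are illustrative):

```python
# Candidate originating nodes inferred from transient traffic
# data for each of the two events in the example above.
candidates_e1 = {"J", "B", "A", "C", "D"}
candidates_e2 = {"D", "F", "H"}

# Nodes implicated by both events: here only node D survives,
# making it the natural trigger for intensive monitoring.
suspects = candidates_e1 & candidates_e2
```

The statistical scores developed in section 4 make this informal intersection precise, and work even when no single node is common to all events.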
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The data used to identify these security events and their possible sources is necessarily transient; it may not be possible to record sufficient traffic to allow this analysis retrospectively. However, it is initially sufficient just to identify nodes that score differently; in the long, slow game, it is only necessary to 'tip off' a further investigation by identifying one or more nodes whose behaviour may be unusual. It is not essential to record the events, the traffic from which they were identified, or even the graphs that identify possible sources, provided it is possible to somehow accumulate a 'score' for each node in the system.</p><p>This approach solves one of the critical issues in identifying slow attacks: how to maintain long-term state. Systems that try to model the behaviour of individuals, systems or protocols, are forced to retain large amounts of data, which limits their scalability. In the approach described here, the state size is a small multiple of the number of nodes in the network; this state is readily distributed, and its storage is feasible, even for organizations with global networks.</p><p>The 'score' that we propose for each node is the probability that the node is subverted, based on the application of Bayesian statistics. This naturally allows incremental updating, and translation of the problem frame from events, which are related to behaviour, to individual attackers. Simpler schemes, such as the event counting used to introduce this section, can be shown to be inferior, as demonstrated by the network simulation in section 6.</p><p>In summary, we propose that to identify subtle or inside attackers:</p><p>-The primary objective is to identify nodes for further investigation.</p><p>-Long-term state is restricted to an incremental estimate of the probability that each node is an attacker. 
-Node estimates are updated following every security event, taking account of transient network information that may be available at the time of the event.</p><p>This process is complementary to conventional intrusion detection using signatures or heuristics. There is no need to gradually accumulate evidence if the attack is evident; for example, high levels of network activity due to a worm or virus provide compelling evidence of an attack, and in these cases the secondary investigation is concerned with incident management, rather than confirmation. Section 4 describes how node scores can be estimated and maintained, following a brief summary of related work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Related Work</head><p>The use of a tiered approach to insider threat detection, detection followed by a more detailed forensic investigation, is proposed by Bradford et al <ref type="bibr" target="#b3">[4]</ref>. Users are profiled according to their function, and deviation from normal behaviour triggers more intensive data collection. Sequential hypothesis testing is proposed to determine whether a process is anomalous and more intensive data collection should be initiated. However, the authors do not show an implementation of their approach, and remark that it could not be carried out for "every user regardless", but itself requires a "triggering process".</p><p>The problem is the volume of data that must be maintained, and this is also an issue with data mining approaches, which are often proposed as an adjunct to intrusion detection or audit. Research proposals to alleviate the scalability issue include improving the quality of the raw data, by discovering better behavioral indicators <ref type="bibr" target="#b10">[11]</ref> or classifying input features <ref type="bibr" target="#b5">[6]</ref>, the latter using a Bayesian classifier. An alternative approach by Staniford et al <ref type="bibr" target="#b13">[14]</ref> is to selectively retain anomalous network data, with the aim of identifying slow network scans. Anomalous packets are identified based on heuristics developed from real scans. Other approaches include statistical filtering, primarily to reduce false alarm rates and support visualization <ref type="bibr" target="#b6">[7]</ref>. In essence, however, all these approaches require the storage of large volumes of event data for later analysis, and the authors themselves often identify scalability as a problem <ref type="bibr" target="#b10">[11]</ref>.</p><p>Aggregation as a means of detecting slow or stealthy attacks has been proposed by Heberlein <ref type="bibr" target="#b9">[10]</ref>. 
His assumption is that slow attacks are still systematic, and the attacker will eventually repeat the attack many times, possibly against different targets. Alerts are classified, accumulated, and displayed on a visualization grid, and any persistent activity which raises alerts of the same type over a long period can be identified. Although similarly motivated, our work differs by accumulating evidence of attackers, not of incidents, removing the restriction that attackers need to repeat similar attacks. Heberlein's algorithm is also a counting process, which we show to be inferior to statistical reasoning.</p><p>Other work directed toward the insider problem is focussed on characterising an attacker's behaviour. The security indicators ('events') used may range from an individual's buying and travel preferences, to electronic alerts. For example, Burford et al <ref type="bibr" target="#b4">[5]</ref> propose a comprehensive framework of 'observables' that are used to build a model of individuals' behaviour via graph theory. Eberle et al <ref type="bibr" target="#b7">[8]</ref> develop graphs of behavioral events, such as phone calls, to identify subgraphs of normal behaviour, which are used to search for similar but anomalous occurrences. These approaches offer the advantage of modeling the potential attacker, and providing interesting insights into observable behaviour; however, their application may be limited by the computational cost of graph matching over large datasets, as well as by data scalability.</p><p>Most of the work described above is still formative; network intrusion detection, however, is established in the literature and supported by both open-source and proprietary products <ref type="bibr" target="#b1">[2]</ref>. An intrusion detection system (IDS) uses a behavioral model of a system or protocol and detects anomalous events by either recognizing predefined signatures, or by heuristics. 
Both approaches have strengths and weaknesses, but despite the usefulness of IDSs in practice, they are hampered by a lack of scalability, and tend to generate large numbers of false positive alerts <ref type="bibr" target="#b1">[2]</ref>. From the perspective of this paper, IDSs are an effective way of generating events which may indicate an attack, but are unable to maintain sufficient state to identify slow attacks. An IDS is not the only possible source of security events: the behavioral events referenced above, operating system audit trails, and even Honeypots <ref type="bibr" target="#b12">[13]</ref>, which are security traps with no operational functionality, can all generate security events.</p><p>In summary, the challenge of integrating information from many sources in order to identify patient internal attackers is still an important open question <ref type="bibr" target="#b2">[3]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Updating Evidence</head><p>This section develops the detailed theory necessary to achieve the method outlined in section 2: to collapse the problem of attacker identification to updating a single score for each network node, or user. The section first outlines the evidential scenario, and the attributes required to characterize security events. Standard Bayesian updating is summarized, followed by the development of the process for updating evidence of insider attacks. Finally, the practical issue of relating this process to real security events is discussed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definitions</head><p>Node: This paper uses network terminology, without loss of generality to broader types of human or attack behavior. A node is a network component - such as a user's end system or a router - or a user.</p><p>Event: An event is an alert that indicates a possible security violation; it may be an anomalous phone call, a failed connection, or something more certain, such as a known electronic exploit.</p><p>The evidential scenario is presented in Figure <ref type="figure">2</ref>. Node (a) is a network node, and (x) is an event which is detected somewhere in the network; there is some evidence that identifies the nodes that may have originated the event.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 2. Evidential Scenario</head><p>Event (x) may indicate an attack. Some security events are almost certainly attacks; however, there are many more that may be user mistakes, backscatter, or other forms of network 'noise'. For example, an attempt to connect to a nonexistent webserver is often a simple mistake, but could also be an attack probe.</p><p>In addition to uncertainty about the extent to which an event is an attack, there may also be uncertainty about the source of the event. For example, the attacker may be able to spoof its network address, or the event may only be traceable to a subnetwork. In order to accumulate evidence from a wide range of different sources, events must be characterized by uniform parameters that describe these various attributes. We propose that the differences between security events can be characterized by three parameters:</p><p>-P(Attack): the probability that a particular event (x) is actually caused by an intentional attack. -The Causal Node Set: the set of network nodes that could have caused the event, even if it was not an attack. -P(Causal): the probability that the causal node is actually within the node set.</p><p>Given a sequence of events characterized by these parameters, we wish to investigate the hypothesis that a particular node is subverted, or acting as the agent of an attacker. We will first summarize the standard approach to Bayesian updating, then show how it can be applied in this case.</p></div>
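As a concrete sketch, the three parameters could be carried as a small record; the type and field names below are ours, for illustration only, and the example values are invented:

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class SecurityEvent:
    """Uniform characterization of a security event.

    p_attack:     P(Attack) - probability the event is an intentional attack
    causal_nodes: the Causal Node Set - nodes that could have caused the event
    p_causal:     P(Causal) - probability the true origin is in causal_nodes
    """
    p_attack: float
    causal_nodes: FrozenSet[str]
    p_causal: float

# A failed connection, traceable only to a three-node sub-net,
# and only rarely an attack:
event = SecurityEvent(p_attack=0.05,
                      causal_nodes=frozenset({"D", "F", "H"}),
                      p_causal=0.95)
```

Because every sensor reduces its output to these three values, heterogeneous sources (IDS alerts, audit trails, behavioral indicators) can feed the same accumulation process.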
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Bayesian Updating</head><p>Bayesian updating provides an estimate of the probability that hypothesis H is true, given an event, (x).</p><formula xml:id="formula_0">P (H|x) = P (x|H) • P (H) / P (x)<label>(1)</label></formula><p>This theorem uses P(x|H), the probability of event (x) given that the hypothesis is true, to update the initial ('prior') estimate of the probability that the hypothesis is true, P(H). Simple updating of this type is often used in medical diagnosis; given knowledge of the probability of a symptom (the event) given a disease (the hypothesis), it provides a principled estimate of the likelihood of the disease given the symptom. It is essentially this change of reference frame - from symptom to cause - that is needed to identify internal attackers from their behaviour. The denominator, P(x), the probability of the event, is effectively a normalising factor. In many cases, including ours, it is difficult to estimate; however, by making use of P (H|x) + P (¬H|x) = 1, it is straightforward to eliminate P(x) and derive the following standard variant of Bayes' theorem:</p><formula xml:id="formula_1">P (H|x) = P (x|H) • P (H) / [P (x|H) • P (H) + P (x|¬H) • P (¬H)]<label>(2)</label></formula><p>Practical applications of Bayes' theorem often combine several sources of evidence. Achieving a usable update formula for multiple evidential events requires an assumption of conditional independence - that individual security events do not cause each other. The derivation is given in standard texts on Bayesian inference. For example, the formula giving the revised probability of the hypothesis, H, given two items of evidence, (x) and (y), is:</p><formula xml:id="formula_2">P (H|x, y) = P (x|H) • P (y|H) • P (H) / [P (x|H) • P (y|H) • P (H) + P (x|¬H) • P (y|¬H) • P (¬H)]<label>(3)</label></formula><p>This pattern can be extended to cope with multiple items of evidence.</p></div>
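Under conditional independence, applying equation (2) once per event gives the same result as the joint form of equation (3). A small Python check (the likelihood values 0.9 and 0.1 and the prior 0.01 are arbitrary):

```python
def bayes_update(prior, p_x_given_h, p_x_given_not_h):
    """Equation (2): posterior P(H|x) from the prior and the two likelihoods."""
    num = p_x_given_h * prior
    return num / (num + p_x_given_not_h * (1.0 - prior))

# Two conditionally independent events, applied sequentially...
p = 0.01
p = bayes_update(p, 0.9, 0.1)
p = bayes_update(p, 0.9, 0.1)

# ...equal the joint update of equation (3).
joint = (0.9 * 0.9 * 0.01) / (0.9 * 0.9 * 0.01 + 0.1 * 0.1 * 0.99)
assert abs(p - joint) < 1e-12   # both give 0.45
```

This sequential equivalence is what makes the incremental, per-event updating of the next subsection possible: no event history needs to be retained.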
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Combining Evidence from Security Events</head><p>The evidential scenario is described at the start of this section; in detail, we define:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>S: The set of all nodes in the system.</p><p>#S: The total number of nodes in the system.</p><p>a, b, ... ∈ S: Particular network nodes.</p><p>H a : The hypothesis that we wish to update: that a particular node (a) is subverted, or being used to mount an attack within the system.</p><p>E: The set of all security events generated over time.</p><p>x, y, ... ∈ E: Particular events that may provide evidence of an attack.</p><p>P x (Attack): The probability that a particular event (x) actually originates from an intentional attack.</p><p>C x : The causal set of nodes associated with event (x); in other words, the set of nodes that may have originated the event.</p><p>#C x : The number of nodes in set C x .</p><p>P(C x ): The probability that C x includes the node that originated the event; in other words, the accuracy with which C x is estimated.</p><p>The parameters P x (Attack), C x , and P(C x ) were introduced at the start of this section as the attributes needed to characterize an event.</p><p>We wish to update an estimate of the probability of H a , following event (x). In equation (2), above, the prior probabilities P (H), P (¬H) will depend upon the node (e.g. the prior probability of subversion of a server may be significantly different to that of a user client), but will otherwise be constant, so we can write P (¬H a ) = 1 − P (H a ).</p><p>However, to obtain an estimate of P (x|¬H a ) it is necessary to take into account that it may not be possible to attribute an event to a single node (a), but only to identify a set of nodes, C x , from which the event may have originated. Unlike the prior probability, this is dependent on the event, as well as on the node, since different events will be generated by different sensors with different capabilities and views of the network.</p><p>There are three types of node to consider: the node currently being updated, (a); other nodes in the set C x ; and other nodes in the system that are not in C x . As a consequence, for an event (x), there are two alternative hypotheses to H a : R a,x , that node (a) is not an attacker, but is within C x ; and I a,x , that node (a) is not an attacker, and is outside C x .</p><p>Since R a,x and I a,x are disjoint, we can write:</p><formula xml:id="formula_4">P (x|¬H a ) = P (x|R a,x ) • P (R a,x |¬H a ) + P (x|I a,x ) • P (I a,x |¬H a )<label>(4)</label></formula><p>If a node is not an attacker, we can expect the probabilities of the two alternative hypotheses to be a simple function of the numbers in each set. 
Substituting</p><formula xml:id="formula_5">P (R a,x |¬H a ) = #C x /#S and P (I a,x |¬H a ) = (#S − #C x )/#S</formula><p>into equation (4), we obtain:</p><formula xml:id="formula_6">P (x|¬H a ) = P (x|R a,x ) • #C x /#S + P (x|I a,x ) • (#S − #C x )/#S<label>(5)</label></formula><p>Substituting (<ref type="formula" target="#formula_6">5</ref>), and the expression for P (¬H a ) given above, into the normalized version of Bayes' theorem given in equation (<ref type="formula" target="#formula_1">2</ref>), we obtain:</p><formula>P (H a |x) = P (x|H a ) • P (H a ) / {P (x|H a ) • P (H a ) + [P (x|R a,x ) • #C x /#S + P (x|I a,x ) • (#S − #C x )/#S] • [1 − P (H a )]}<label>(6)</label></formula><p>Defining:</p><formula xml:id="formula_7">Δ a,x = P (x|H a ) / [P (x|R a,x ) • #C x /#S + P (x|I a,x ) • (#S − #C x )/#S]<label>(7)</label></formula><p>Rearranging equation (<ref type="formula">6</ref>) and substituting in Δ a,x gives:</p><formula xml:id="formula_8">P (H a |x) = Δ a,x • P (H a ) / [Δ a,x • P (H a ) + [1 − P (H a )]]<label>(8)</label></formula><p>This update formula can be extended to multiple events under the assumption of conditional independence, similar to equation (3); the resulting update formula becomes:</p><formula xml:id="formula_9">P (H a |x, ...y) = Δ a,x • ... • Δ a,y • P (H a ) / [Δ a,x • ... • Δ a,y • P (H a ) + [1 − P (H a )]]<label>(9)</label></formula><p>This is the final update formula. Bayes updating is often presented in this form; the application-specific elements are contained in the definition of Δ a,x , which will now be further developed by substituting in the information that characterizes security events. We will consider each element of equation (7) in turn.</p><p>P(x|H a ) is the probability that an event (x) occurs given that the identified node is actually subverted. Since we update in response to events, P x (Attack) provides a plausible value for each event; this only discounts the possibility that false alarms may have originated from a genuine attacker.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>P(x|R a,x ) is the probability that the node is not subverted, but that it lies within the set of nodes that may have originated the attack. In terms of our model parameters, this corresponds to:</p><formula xml:id="formula_10">[1 − P x (Attack)] • P (C x ) • #C x /#S</formula><p>P(x|I a,x ) is the probability that the node is not subverted, but that it lies outside the set of nodes that may have originated the attack. Of course, we would wish to correctly identify the range of nodes that may originate the attack, in which case this probability would be zero; however, in the real world we must take into account the possibility that C x cannot be enumerated with certainty, giving a value of:</p><formula xml:id="formula_11">[1 − P x (Attack)] • [1 − P (C x )] • (#S − #C x )/#S</formula><p>Substituting these parameters into Δ a,x , we obtain:</p><formula xml:id="formula_12">Δ a,x = P x (Attack) / {[1 − P x (Attack)] • P (C x ) • (#C x /#S)² + [1 − P x (Attack)] • [1 − P (C x )] • ((#S − #C x )/#S)²}<label>(10)</label></formula><p>This ratio is used to update the probability that a node is an attacker.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Updating in Practice</head><p>Equations (<ref type="formula" target="#formula_9">9</ref>) and (<ref type="formula" target="#formula_12">10</ref>) provide the necessary theory to achieve the objective of discarding the details of security events, while retaining a simple score for each node which summarises the evidence that the node is an attacker. A naive algorithm to achieve this would be:</p><p>1. Initialize each node with its prior probability, P (H a ). Values may be suggested by survey data; in large systems a prior assumption that a small number of nodes (e.g. 10 of 10,000) are likely to be subverted is reasonable. This parameter is of most value if different nodes have significantly different prior probabilities, for example the difference between a router and a laptop. 2. For each security event:</p><p>(a) Establish the distinguishing parameters (the probability that it is an attack, the nodes that may have originated the attack, and the probability that the node set contains the attacker). (b) Calculate Δ from equation (<ref type="formula" target="#formula_12">10</ref>). (c) Multiply the node score by Δ for each node in the set, but not for any others in the system. 3. When required, substitute the node score (the product of all Δs) into equation (<ref type="formula" target="#formula_9">9</ref>) to obtain the probability that the node is an attacker.</p><p>A feature of accumulating evidence in this way is that, assuming the evidence collected is generally useful (a true positive is more likely than a false positive), over a long period the probabilities converge towards unity. However, we are only concerned with comparative scores, in order to identify nodes that are distinctive and require further investigation. In practice, then, it is sufficient to use logarithmic scores, simply adding log(Δ) to each node indicated by an event. 
Equation ( <ref type="formula" target="#formula_9">9</ref>) can still be reconstructed from this information, but more usually, the highest node score is chosen for further investigation.</p><p>The reader may be wondering about the value of calculating Δ at all at this stage, since we simply add its logarithm to the score for indicated nodes. However, this differs significantly from a simple counting algorithm, where the score for each node is incremented when it is identified as the possible source of a security event. The update value, Δ, characterizes exactly how much evidence is provided by each event. This important distinction is illustrated in the worked example presented in section 6.</p></div>
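The naive algorithm above can be sketched directly in Python. This is a minimal illustration using the logarithmic scores just described; the data structures and function names are ours, and the event parameters in the example are invented:

```python
import math

def delta(p_attack, c_size, s_size, p_causal):
    """Evidence ratio of equation (10) for one event.

    p_attack: P_x(Attack);  c_size: #C_x;  s_size: #S;  p_causal: P(C_x).
    """
    inside = p_causal * (c_size / s_size) ** 2
    outside = (1.0 - p_causal) * ((s_size - c_size) / s_size) ** 2
    return p_attack / ((1.0 - p_attack) * (inside + outside))

def apply_event(scores, causal_set, p_attack, s_size, p_causal):
    """Step 2: add log(delta) to every node in the causal set, and no others."""
    d = math.log(delta(p_attack, len(causal_set), s_size, p_causal))
    for node in causal_set:
        scores[node] = scores.get(node, 0.0) + d

def posterior(log_score, prior):
    """Step 3 / equation (9): recover P(H_a | events) from the summed score."""
    d = math.exp(log_score)
    return d * prior / (d * prior + (1.0 - prior))

# Two events in a 50-node system: the first traceable to a single
# node, the second only to a pair of nodes. Node D, implicated by
# both, ends with the higher score and the higher posterior.
scores = {}
apply_event(scores, {"D"}, 0.5, 50, 1.0)
apply_event(scores, {"D", "F"}, 0.5, 50, 1.0)
```

Note that the long-term state is just the `scores` dictionary, one number per node; the events themselves are discarded after the update.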
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Simple Example</head><p>Before showing a simulation of a realistically difficult example (see section 6), this section explores whether the evidential accumulation process has intuitively appealing behaviour; in particular, given a single sub-network, in which the sender can be readily identified:</p><p>-Does the evidential process identify an attacker sending at a slightly higher rate than the background of errors from normal nodes? -If the rate of attack increases, is the process stable, and does it enable the attackers to be identified earlier? -Does the process accommodate multiple attackers with different rates of attack (i.e. can one node hide behind another's attack)?</p><p>We assume a single sub-net of 50 nodes, in which the originating node of an event can be identified (i.e. #C x = 1); we assign P (Attack) an arbitrary probability of 0.083. Time is divided into slots (e.g. single minutes) and the average background rate of random innocent events that may be misinterpreted as attacks is 1/50 per node - in other words, one event per minute. Three nodes within the sub-net are designated attackers, and they generate random attacks at rates of 2, 4 and 8 times the total background rate.</p><p>The scores resulting from simulating this scenario are shown in Fig. <ref type="figure" target="#fig_1">3</ref>. All three attack nodes are well distinguished from the background level of events, which is indicated by the 'other nodes' result, which is the score for a typical innocent node. As would be expected, if the attack rate is higher, the discrimination improves. The accumulation of evidence is well behaved, and the higher rate nodes do not interfere with the accumulation of evidence relating to attackers operating at a lower rate.</p></div>
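A scenario of this shape can be reproduced with a short Monte Carlo sketch. Parameters follow the text where given (50 nodes, #C x = 1, P(Attack) = 0.083); for simplicity we take the attacker rates as multiples of the per-node background rate, and the run length and seed are arbitrary, so this is an illustration of the mechanism rather than a reproduction of Fig. 3:

```python
import math
import random

def log_delta(p_attack, c_size, s_size, p_causal):
    """log of the evidence ratio of equation (10)."""
    inside = p_causal * (c_size / s_size) ** 2
    outside = (1.0 - p_causal) * ((s_size - c_size) / s_size) ** 2
    return math.log(p_attack / ((1.0 - p_attack) * (inside + outside)))

random.seed(1)
N = 50                          # nodes in the sub-net
BACKGROUND = 1.0 / N            # innocent event rate per node per time slot
ATTACKERS = {0: 2, 1: 4, 2: 8}  # node -> attack rate, as a multiple of BACKGROUND
ld = log_delta(0.083, 1, N, 1.0)  # constant here, since every event has #C_x = 1

scores = [0.0] * N
for _ in range(5000):           # time slots
    for node in range(N):
        rate = BACKGROUND * (1 + ATTACKERS.get(node, 0))
        if random.random() < rate:
            scores[node] += ld  # the event is attributed to its originating node
```

With these rates the three attack nodes separate clearly from the background, and the faster attackers separate sooner, mirroring the qualitative behaviour reported for Fig. 3.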
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Insider Attack Simulation</head><p>This section presents an example of evidence updating in practice. The example shows that complex network propositions can be accommodated straightforwardly, and contrasts the principled accumulation of evidence with a simple counting scheme. The example includes features that are common in this problem space, including low attack rates, address spoofing, and events that can be attributed only to a sub-network of possible sources.</p><p>An important practical issue is the estimation of the three parameters that characterize a security event; relating these to actual systems and assessing the need for accuracy is the subject of ongoing study. To date it has been possible to achieve realistic results by assigning P(Attack) as a fixed value for a given sensor within a deployment context, and by creating a simple rule-set that maps the network connection associated with an event to a set of nodes, giving C x and P(C x ), depending on the configuration and protocol.</p><p>The network used in this example is given in Fig. <ref type="figure" target="#fig_2">4</ref>. This network has 200 nodes, most of which are user systems located in four separate client sub-networks. Two of these sub-networks have nodes that have been subverted and are attacking the system. The purpose of dividing the clients into several sub-nets (apart from the fact that this is a common configuration) is to contrast the detectability of attackers in different sized sub-networks, given our assumption that in many cases it will be possible to identify only the sub-net from which an attack originated. This arrangement allows us to investigate the scores accrued for an attack node (3 or 53) versus other nodes in the same sub-net, and nodes in a control sub-net of the same size and performance with no attacker.</p><p>Most of the traffic in the system is between the clients and servers, via the core network. 
Router and firewall detail is not shown, and because the object is to investigate evidence accumulation rather than event generation we model two unspecified types of security event: those that can be detected within client sub-networks, and events in the server farm. For example, an event could be an attempt to connect to an exploitable network port.</p><p>Attackers are expected to generate security events at a rate much lower than the background rate of 'mistakes' by normal clients, in order to remain undetected. In the simulation below, time is measured in arbitrary clock ticks (e.g. minutes), and the probability of a normal client generating a security alert in any time slot is 1/150; in other words, the system suffers an average of one false alarm every minute. In contrast, attackers generate events at a rate of 1/25: one event every 25 minutes.</p><p>In addition to their low attack rate, attackers use address spoofing to further avoid detection. Events detected outside the sub-net containing the attacker can only be assigned to the whole sub-net. Only events identified within the sub-net containing the attacker (i.e. directed toward nodes within that sub-net) can be traced to a specific node.</p><p>An outline calculation illustrates the difficulty of this problem. Consider the attacker in sub-net A, which contains 25 nodes. Viewed from outside, the sub-net can be expected to generate innocent background events (false alarms) at a rate of one every 6 minutes (25 × 1/150 = 1/6 per minute). The events generated by the attacker are distributed at random across the network, so of these, 25/200 are directed towards the attacker's own sub-network and 175/200 are visible externally. This results in an externally visible attack roughly every 29 minutes (1/25 × 175/200 = 7/200 per minute), and these events can only be identified with the whole sub-net. Events targeted at random to nodes within the sub-net can be identified with a particular attacker, but these occur at a rate of only one every 200 minutes (1/25 × 25/200 = 1/200 per minute). 
Of course, given this information the reader could devise an ad hoc solution to identify the attacker, but the problem addressed here is how to use all the available information when the location of the attacker and the traffic patterns are unknown in advance.</p><p>In summary, the event parameters used in the simulation are as follows. C x contains all the nodes in the source sub-net, unless the destination of the network message that caused the event is in the same sub-net as the source, in which case C x contains just the source address. P(C x ) is set to unity, since C x includes all the possible source nodes. P(Attack) is set to 0.33 for all locations except the server nodes, for which a value of 0.083 is assigned. (These values are arbitrary, for the sake of demonstration. It seems plausible that an incident at a location to which most of the traffic is directed is less likely to be an attack, but in practice that depends on the actual event. The only care taken in the choice of values is to avoid fractions, such as 25/150, that match the system topology and may produce anomalous results in a small system. Varying these parameters results in different scores, but not at the expense of overall discrimination.)</p><p>A network simulator was used to generate random traffic as outlined above, and the scores for the resulting security events were accumulated as described in section 4. The results, shown in Fig. <ref type="figure" target="#fig_3">5</ref>, demonstrate that insider attacks can be clearly distinguished from background noise in the system. A longer-running simulation, given in Fig. <ref type="figure">6</ref>, provides a view of the asymptotic performance of the process.</p><p>For each size of sub-net the proposed scoring clearly distinguishes both the attacker as an individual and the sub-net containing the attacker from the control sub-net with no attacker. 
The distinction between different sized sub-nets is not fully maintained: both attackers are well distinguished from both control networks, but the smaller sub-net containing an attacker is not well distinguished from the individual attacker in the larger sub-net. In practice, grouping the small sub-net with the two attackers does not present a problem, since it still provides a correct diagnosis of the attacks, which can then be subjected to further investigation. We conjecture that the issue here is not that the scoring method is inherently biased between different sizes of C x (that is, any more than the actual evidential content varies with C x ), but that the larger sub-networks in this system are a substantial fraction of the total system size.</p><p>The effectiveness of this Bayesian approach can be judged by comparison with the counting algorithm used to introduce section 2, and adopted by some researchers. Assuming that the same deductions can be made from the security events (specifically, that C x is the same for each event), the result of using a counting approach, in which node scores are simply incremented whenever the node is identified, is given in Fig. <ref type="figure" target="#fig_4">7</ref>.</p><p>On a realistic problem, the counting approach fails in almost every respect. Attackers are not distinguished from other nodes in their sub-net, and there is little difference between a sub-net containing an attacker and a control sub-net with no attacker. Instead, the primary distinction between nodes is on the basis of network size: the larger sub-nets generate more background traffic, and so receive proportionately higher scores.</p></div>
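The outline calculation for sub-net A above is easy to check mechanically. The sketch below reproduces the three event rates exactly, using rational arithmetic (names are illustrative; the figures come directly from the text: 25 nodes out of 200, a false-alarm rate of 1/150 per client, and an attack rate of 1/25):

```python
from fractions import Fraction as F

SUBNET, TOTAL = 25, 200
P_FALSE, P_ATTACK = F(1, 150), F(1, 25)          # per node / per attacker, per minute

background = SUBNET * P_FALSE                    # innocent events seen from outside
external = P_ATTACK * F(TOTAL - SUBNET, TOTAL)   # spoofed: attributable to sub-net only
internal = P_ATTACK * F(SUBNET, TOTAL)           # traceable to the attacking node

print(1 / background)        # minutes between false alarms: 6
print(float(1 / external))   # ~28.6, i.e. roughly every 29 minutes
print(1 / internal)          # 200
```

The attacker's individually attributable events are thus more than 30 times rarer than the sub-net's false alarms, which is what makes the problem hard for naive schemes.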
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Discussion</head><p>The proposed updating process is effective because it relates event evidence to the hypothesis that a node (or user) is an attacker. This change of reference frame allows event data to be discarded while retaining the weight of evidence for attackers. The process scales linearly with the number of nodes in the system, and is likely to be applicable to a very wide range of systems and circumstances.</p><p>The updating ratio, Δ, can be thought of as the ratio of true positives to false positives. However, Bayes' theorem has been used, rather than the simple probability ratios that information theory would suggest, in order to effect the change of viewpoint from the event to the attacker. Δ takes account of ancillary information, such as the number of nodes indicated by the event and the degree of certainty in their estimation.</p><p>Δ can also be used as a figure of merit for sources of information: if Δ is consistently fractional for a sensor, then the resulting events degrade the quantity of available information rather than improve it.</p><p>The attributes described in section 4 (probability of attack, possible sources, and likelihood that the attacker is in this set) are not specific to any particular type of event generator, and can be applied at different levels of abstraction, if necessary within the same system.</p><p>There are a number of practical considerations that are the subject of ongoing study. The first implementation decision is which real components are regarded as 'nodes': should nodes model all network components, just routing components and endpoints, or only endpoints such as clients, servers or users? 
To date, only endpoint nodes have been considered; this decision is based on the prior probability of network components originating attacks, and on the convenience of associating events with their possible sources.</p><p>A key practical issue is how to determine which nodes are a potential source of any particular event, and to what degree. Ideally this assessment would be evidence-based, using recent network history; although this is feasible in principle, it is an open question whether it can be achieved in practice. However, even simple strategies, such as the one used in section 6, provide demonstrable benefit.</p><p>This research is ongoing, and other open issues include the sensitivity of the assignment of P(Attack) for disparate sensors, and the possibility of decision criteria other than the maximum-score function used above.</p></div>
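The figure-of-merit observation can be made concrete: averaging log Δ over a sensor's recent events gives its mean evidence contribution per event, and a sensor whose Δ values are consistently fractional contributes a negative mean, i.e. it adds confusion rather than information. The Δ values below are hypothetical, chosen purely to illustrate the sign test.

```python
import math

def mean_log_evidence(deltas):
    """Average log(delta) per event for a sensor: positive means the
    sensor is informative on balance, negative means it adds noise."""
    return sum(math.log(d) for d in deltas) / len(deltas)

good = mean_log_evidence([2.0, 4.0, 1.5])  # mostly delta > 1: informative
bad = mean_log_evidence([0.5, 0.8, 1.1])   # consistently fractional: degrades
```

Such a statistic could be used to tune or retire sensors within a deployment, although calibrating it against real sensors is part of the ongoing work described above.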
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8">Conclusion</head><p>This paper provides a solution to a critical problem in insider attacker discovery: how to combine events from multiple sensors, and how to manage the data explosion that is otherwise needed to support the identification of long-running attacks.</p><p>The key concept is to move away from maintaining models and evidence of behaviour, and instead to maintain, for every user or node in the system, an incremental assessment of the hypothesis that it is an attacker. This approach is extremely scalable; the updating algorithm is soundly based in Bayesian statistics, and avoids the need for global updating or normalization. The approach is well behaved, in the sense that higher volumes of attack make detection easier, and in a worked example that includes several of the difficulties faced in practice it significantly outperforms counting algorithms (see section 6).</p><p>In addition, this work identifies the attributes or parameters that need to be standardized for disparate sources of security events to be combined, allowing the use of a wide range of different sensors at different levels of abstraction. The key criterion for a sensor (see section 7) is that it tends to provide information rather than add confusion, and a side effect of the updating process presented here is a criterion for deciding when this is the case.</p><p>Research on this approach is ongoing, both using simulation and relating the work to real sensors; some of the open questions are described in section 7.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Intersecting Evidence</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Simple attacker scenario in a single sub-network</figDesc><graphic coords="11,141.12,65.85,332.52,188.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Test Network</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 5 .</head><label>5</label><figDesc>Fig. 5. Network Simulation Results</figDesc><graphic coords="13,153.12,276.43,309.02,173.51" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 7 .</head><label>7</label><figDesc>Fig. 7. Counting Algorithm Performance</figDesc><graphic coords="15,153.12,66.31,309.02,173.51" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="14,152.64,66.31,309.52,173.51" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m">CERT incident note IN-98-05: Probes with spoofed IP addresses</title>
				<imprint>
			<date type="published" when="1998-11-24">24 November 1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Intrusion detection systems (IDS)</title>
		<author>
			<persName><forename type="first">Rebecca</forename><surname>Bace</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Mell</surname></persName>
		</author>
		<idno>SP 800-31</idno>
		<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
		<respStmt>
			<orgName>National Institute of Standards and Technology (NIST)</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Understanding the insider threat</title>
		<author>
			<persName><forename type="first">Richard</forename><forename type="middle">C</forename><surname>Brackney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robert</forename><forename type="middle">H</forename><surname>Anderson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of March 2004 Workshop</title>
				<meeting>March 2004 Workshop</meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
		<respStmt>
			<orgName>RAND National Security Research Division</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Towards proactive computer-system forensics</title>
		<author>
			<persName><forename type="first">Phillip</forename><forename type="middle">G</forename><surname>Bradford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marcus</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Josh</forename><surname>Perdue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bonnie</forename><surname>Self</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Information Technology: Coding and Computing (ITCC 2004)</title>
				<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="648" to="652" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Insider threat detection using situation-aware MAS</title>
		<author>
			<persName><forename type="first">John</forename><forename type="middle">F</forename><surname>Buford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lundy</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gabriel</forename><surname>Jakobson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">11th International Conference on Information Fusion</title>
				<meeting><address><addrLine>Cologne, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Feature deduction and ensemble design of intrusion detection systems</title>
		<author>
			<persName><forename type="first">Srilatha</forename><surname>Chebrolu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ajith</forename><surname>Abraham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Johnson</forename><forename type="middle">P</forename><surname>Thomas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers and Security</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="295" to="307" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Statistical profiling and visualization for detection of malicious insider attacks on computer networks</title>
		<author>
			<persName><forename type="first">Jeffrey</forename><forename type="middle">B</forename><surname>Colombe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gregory</forename><surname>Stephens</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The 2004 ACM Workshop on Visualization and Data Mining for Computer Security</title>
				<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="138" to="142" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Insider threat detection using graph-based approaches</title>
		<author>
			<persName><forename type="first">William</forename><surname>Eberle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lawrence</forename><surname>Holder</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Cybersecurity Applications &amp; Technology Conference For Homeland Security (CATCH)</title>
				<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="237" to="241" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">TJX breach was twice as big as admitted, banks say</title>
		<author>
			<persName><forename type="first">Dan</forename><surname>Goodin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Register</title>
		<imprint>
			<date type="published" when="2007-10-24">24 October 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Tactical operations and intelligence: Sensor purpose and placement</title>
		<author>
			<persName><forename type="first">Todd</forename><surname>Heberlein</surname></persName>
		</author>
		<idno>TR-2002-04.02</idno>
		<imprint>
			<date type="published" when="2002-09-09">9 September 2002</date>
			<publisher>Net Squared, Inc</publisher>
		</imprint>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Detecting insider threats by monitoring system call activity</title>
		<author>
			<persName><forename type="first">Nam</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Reiher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Geoffrey</forename><forename type="middle">H</forename><surname>Kuenning</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Workshop on Information Assurance</title>
				<meeting><address><addrLine>West Point</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="18" to="20" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">Marisa</forename><surname>Reddy Randazzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dawn</forename><surname>Cappelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michelle</forename><surname>Keeney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><surname>Moore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eileen</forename><surname>Kowalski</surname></persName>
		</author>
		<title level="m">U.S. Secret Service and CERT Coordination Center/SEI insider threat study: Illicit cyber activity in the banking and finance sector</title>
				<imprint>
			<date type="published" when="2004-08">August 2004</date>
		</imprint>
		<respStmt>
			<orgName>Software Engineering Institute, Carnegie Mellon University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Honeypots: Catching the insider threat</title>
		<author>
			<persName><forename type="first">Lance</forename><surname>Spitzner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">19th Annual Computer Security Applications Conference (ACSAC &apos;03)</title>
				<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="170" to="179" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Practical automated detection of stealthy portscans</title>
		<author>
			<persName><forename type="first">Stuart</forename><surname>Staniford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">James</forename><forename type="middle">A</forename><surname>Hoagland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joseph</forename><forename type="middle">M</forename><surname>McAlerney</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Computer Security</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">1/2</biblScope>
			<biblScope unit="page" from="105" to="136" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
