        Developing an Ontology for Individual and
     Organizational Sociotechnical Indicators of Insider
                        Threat Risk
    Frank L. Greitzer1, Muhammad Imran2, Justin Purl3, Elise T. Axelrad4, Yung Mei Leong5, D.E. (Sunny) Becker3,
                                     Kathryn B. Laskey2, and Paul J. Sticha3
                                     1 PsyberAnalytix, Richland, WA, USA
                                     2 George Mason University, Fairfax, VA, USA
                                     3 Human Resources Research Organization, Alexandria, VA, USA
                                     4 Innovative Decisions, Inc., Vienna, VA, USA
                                     5 Independent Consultant, Hyattsville, MD, USA
               Frank@PsyberAnalytix.com, mimran4@gmu.edu, JPurl@humrro.org, eaxelrad@innovativedecisions.com,
                       y.leong03@gmail.com, sbecker@humrro.org, klaskey@gmu.edu, psticha@humrro.org


Abstract—Human behavioral factors are fundamental to understanding, detecting, and mitigating insider threats, but to date they have been insufficiently represented in a formal ontology. We report on the design and development of an ontology that emphasizes individual and organizational sociotechnical factors, and incorporates technical indicators from previous work. We compare our ontology with previous research and describe use cases to demonstrate how the ontology may be applied. Our work advances current efforts toward development of a comprehensive knowledge base to support advanced reasoning for insider threat mitigation.

Keywords—insider threat; sociotechnical indicators ontology; domain knowledge representation; SME knowledge modeling; human behavioral modeling; domain knowledge sharing

Research reported here was supported under IARPA contract 2016-16031400006. The content is solely the responsibility of the authors and does not necessarily represent the official views of the U.S. Government.

                             I.   INTRODUCTION

    Government and corporate organizations alike recognize the serious threat posed by insiders who seek to destroy, steal, or leak confidential information, or act in ways that expose the organization to outside attacks. A widely accepted definition of the insider threat is "a current or former employee, contractor, or other business partner who has or had authorized access to an organization's network, system, or data and who intentionally (or unintentionally) exceeds or misuses that access to negatively affect the confidentiality, integrity, or availability of the organization's information or information systems" [1]. More generally, the insider threat may be defined in terms of internal risks to physical and human assets as well as organizational information. In light of recent government initiatives, Executive Order 13587 [2], and the National Insider Threat Policy that specifies minimum standards for establishing an insider threat program, there is increasing acknowledgment of the need to develop formal frameworks to represent and analyze the vast amounts of data that may be collected by insider threat monitoring and mitigation systems. There is a notable lack of standards within the insider threat domain to assist in developing, describing, testing, and sharing techniques and methods for detecting and preventing insider threats [3]. The present research is directed toward a systematic and comprehensive representation of concepts in the insider threat domain that will support reasoning and threat assessment models.

                             II.   BACKGROUND

    Research on insider threat has sought to develop models and tools to identify individuals who pose increased insider threat risk. Most mitigation approaches focus more narrowly on (a) detecting unauthorized user activity and anomalous activity that may be malicious, and (b) preventing data exfiltration. Typical approaches attempt to prevent unauthorized access through the use of firewalls, passwords, and encryption. That is, they are primarily based on the tools and technology used to thwart external attacks. Unfortunately, these security measures will not prevent authorized access by an insider.

    Because a key element of insider threat is a "trusted" perpetrator with authorized access to organizational assets, monitoring and analysis approaches should not only address suspicious host/network activities (identifying so-called technical indicators) but also seek to identify broader aspects of human behavior, motivation, and intent that may characterize malicious insider threats. Thus, as noted in [4], approaches should seek to identify attack-related behaviors that include deliberate markers, preparatory behaviors, correlated usage patterns, and even verbal behavior and personality traits, all of which can be pieced together to detect potential insider threats. While a number of researchers [5-9] recommend including behavioral indicators that may be accessible to organizations prior to an attack, tools and methods that incorporate formal representations of these human behavioral factors are rare (exceptions are models described in [10-12]). The research and operational security communities require a comprehensive knowledge base of technical and behavioral indicators to stimulate the development of more effective insider threat mitigation systems. Existing ontologies include a knowledge base for technical indicators of insider threat [3][13] and a human factors oriented ontology for cybersecurity risk [14]; our work extends [13] and complements [14] by further specifying individual human and organizational sociotechnical factors.
                             III.   OBJECTIVES

    The objective of this research is to develop a formal representation of our current understanding of factors underlying insider threats, particularly relating to individual behavioral and psychological indicators and constructs reflecting organizational factors. The work to date complements and extends extant insider threat ontology frameworks. First, it adds substantial detail (depth) to existing insider threat ontology frameworks that focus on cyber/technical constructs. Second, it defines formal ontological representations of individual and organizational sociotechnical constructs, which are insufficiently represented in current ontological frameworks. The use of a formal, standardized language (ontology) for expressing knowledge about the insider threat domain facilitates information sharing across the insider threat research community and supports model development. A longer term goal is to inform the development of ontology-based reasoning systems and models to support insider threat detection and mitigation. Adopting and using more comprehensive, formal ontological representations will also facilitate the systematic construction of scenarios that may be used in exercising and validating insider threat detection models.

                             IV.   APPROACH

    Our approach consisted of (a) developing a hierarchical taxonomy for insider threat risk that can be applied generally to all types of organizations; and (b) migrating the taxonomy into a formal ontology for insider threat risk. Care was taken to compare our representation with existing frameworks (particularly the ontology developed by Carnegie Mellon University's Computer Emergency Response Team, CERT [13]) to maximize consistency and interoperability among formulations across the research community. Our approach to ontology development seeks to extend the ontological framework by incorporating probabilistic methods to express and reason with uncertainty, i.e., this work will inform the development of a probabilistic ontology to support reasoning about insider threat risk.

A. Taxonomy Development

    A well-defined taxonomy provides an initial hierarchy of domain concepts as a starting point for our insider threat ontology. The taxonomy is based on a systematic review, analysis, and synthesis of existing research, case studies, and guidelines that have been produced by the insider threat research community. Continually being expanded at the leaf nodes, the current taxonomy is 6-7 levels deep. There are 262 unique factors (leaf nodes) defined across the entire taxonomy: a total of 223 constructs defined for the individual factors and 39 for the organizational factors. Our class structure overall contains more than 350 constructs.

    At the highest level we distinguish individual human factors from organizational factors. Individual human factors reflect behaviors, attitudes, personal issues, sociocultural or ideological factors, and various biographical factors that may indicate increased risk. The individual level also differentiates psychological traits from dynamic states, consistent with findings that these two constructs are reliably distinct despite their admitted overlap (e.g., [15-16]) and with the diverse body of psychological research that hinges on (e.g., [17-19]) or capitalizes on (e.g., [20-21]) that distinction. This detailed branch of the taxonomy reflects a substantial body of work by a diverse set of researchers and practitioners focusing on psychosocial factors underlying insider threats (e.g., [5], [7-9], [22-33]). The constructs that comprise this branch are listed in Table I, which shows the main factors (or classes) in column 1 and sub-classes (in italics) in column 2. Column 2 also includes illustrative descriptions or instances that reflect lower-level constructs (not exhaustive). In column 1 we also indicate a count of the total number of constructs defined at the leaf node level for each class, to provide a sense of the extensiveness of the taxonomy.

    TABLE I.   CONSTRUCTS COMPRISING INDIVIDUAL HUMAN FACTORS

    Class (a) – Sub-Class and Instances

    Concerning Behaviors (140)
      Boundary Violation – Concerning work habits, attendance issues, blurred personal/professional boundaries, threatening/intimidating behaviors, boundary probing, social engineering, minor policy violations, travel policy violations, unauthorized travel, unauthorized foreign travel, change in pattern of foreign travel, security violations
      Job Performance – Cyberloafing, negative evaluation
      Technical/Cyber Violation – Concerns about: authentication/authorization, data access patterns, network patterns, data transfer patterns, command usage, data deletion/modification, suspicious communications

    Life Narrative (34)
      Criminal Record – Court records
      Financial Concerns – Lifestyle incongruities (unexplained affluence, etc.), risky financial profile (bankruptcy, large expenses-to-income ratio, bounced/bad checks, credit problems)
      Personal History – Demographics, employment, education background, major life events, health status, marital history, U.S. Immigration/citizenship status

    Ideology (9)
      Disloyalty – Behaviors or expressions of disloyalty to the organization or to the U.S. government [2, 6]
      Radical Beliefs – Radical political beliefs, radical religious beliefs
      Unusual Contact with Foreign Entity – Unreported contact with foreign nationals

    Dynamic State (14)
      Affect – Excessive anger/hostility, disengagement, mood swings
      Attitude – Lack of motivation, overly competitive, expresses feelings of disgruntlement with job, overly critical, resentful, defensive

    Static Trait (25)
      Personality Dimensions – Neuroticism, disagreeableness, low conscientiousness, excitement seeking, honesty-humility on six-factor personality scale
      Other Personality Traits – Characteristics associated with maliciousness or vulnerability to exploitation (Machiavellianism, narcissism, psychopathy, sadism, authoritarianism, social dominance orientation)
      Temperament – Various temperament issues that may be observed/reported by coworkers – big ego, callousness, lack of empathy, lack of remorse, manipulativeness, rebelliousness, poor time management, preoccupation with power/grandiosity

    (a) In parentheses is the total # of sub-classes or instances populated to date within the class.
    Organizational factors focus on organizational and management practices, policies, and work setting characteristics that influence worker satisfaction, attitudes, safety, or protection/vulnerabilities of assets. These factors have received much attention by organizations that publish best practices—indicating situations or conditions that contribute to an increased likelihood of insider threats within an organization. Although they may play a role in triggering malicious or unintentional insider threats, these factors have not generally been identified in insider threat ontologies to date. This branch of our taxonomy was constructed by consulting the broad and diverse literature on industrial/organizational psychology and human error research, including [34-36], and relevant discussion of these factors in the context of workplace violence and insider threat (e.g., [37-38]). Table II lists classes and sub-classes defined to date for organizational factors.

    TABLE II.   CONSTRUCTS COMPRISING ORGANIZATIONAL FACTORS

    Class (a) – Sub-Classes

    Security Practices (14) – Communication/training; Policy clarity; Hiring; Monitoring; Organizational justice; Implementation of Security Controls

    Communication Issues (2) – Inadequate procedures/directions; Poor communications

    Work Setting (Management Systems) (6) – Distractions; Insufficient resources; Poor management systems; Job instability; Lack of career advancement; Poor physical work conditions; Organizational changes

    Work Planning and Control (13) – Job pressure/job stress; Time factors/unrealistic time constraints; Task difficulty; Change in routine; Heavy or prolonged workload; Insufficient workload; Conflict of work roles; Work role ambiguity; Lack of autonomy; Lack of decision-making power; Irregular timing of work shifts; Extended working hours; Lack of breaks

    Mitigating Factors (4) – Flexible work schedule; Employee Assistance Plan; Effective staff training and awareness; Reporting mechanism

    (a) In parentheses is the total # of sub-classes or instances populated to date within the class.

B. Ontology Development Approach

    To date, insider threat ontology development has focused primarily on technical factors (e.g., [13]). In contrast, our approach is grounded in an extended problem space that includes methods, motivation, psychology, and circumstances of human behavior. As noted by previous authors (e.g., [13]), behavioral aspects of insider threat can be an extraordinarily complex domain to model. There are many overlapping concepts (e.g., state and trait anger), many providing little meaning in isolation (e.g., surfing the web vs. surfing the web instead of working). Our task has been to contextualize behaviors with related concepts (e.g., underlying motivations and personality traits) that allow the cataloging of information pertaining to both the insider threat incident and the insider. Through this catalogue of information, researchers and organizations can index cases and gain further insight into common attack vectors driven by human behavior. Our ontology extends previous work [3][13][14] in two ways: (a) adding more detail to the technical indicator branch of the ontology and (b) adding material focusing on individual behavioral and organizational factors.

    Our approach is to migrate our taxonomy into a formal ontology expressed in the popular OWL-DL ontology language. OWL-DL balances expressiveness (the ability to represent many kinds of domain entities and relationships), computational properties (conclusions are guaranteed to be computable in finite time), and functionality for drawing inferences from asserted facts. Enumeration of (potentially hundreds of) Competency Questions (CQs) for our ontology serves as a requirements specification as well as a means of testing the ontology implementation. An example of a simple CQ is "What are the components of class Attitude?" A more complex CQ is "What factors are associated with the observables attendance problems, unauthorized personal use of work computer, and hostile?" The CQs may be evaluated using SPARQL queries. Our OWL-DL implementation will enable automated inferences about class relationships. For example, from the assertion that an individual belongs to class Aggressive and class Manipulative, the reasoning engine can infer that the individual fulfills the membership conditions of class Threat.
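    As an illustration of how such CQs might be evaluated, the sketch below expresses the simple CQ above as a SPARQL query over an RDF/OWL serialization of the ontology, using the Python rdflib library. The file name and namespace IRI are illustrative assumptions, not the ontology's actual identifiers.

        from rdflib import Graph

        g = Graph()
        # e.g., an RDF/XML export of the ontology from Protege (hypothetical file name)
        g.parse("insider_threat_ontology.owl", format="xml")

        # CQ: "What are the components of class Attitude?"
        # Interpreted here as: which classes are asserted subclasses of it:Attitude?
        cq = """
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            PREFIX it:   <http://example.org/insider-threat#>
            SELECT ?component WHERE { ?component rdfs:subClassOf it:Attitude . }
        """
        for row in g.query(cq):
            print(row.component)

    A DL reasoner (e.g., one of the reasoners available in Protégé) would additionally return inferred subclass and membership relations, such as the Aggressive-and-Manipulative-implies-Threat example above, which a plain SPARQL query over asserted triples alone would not surface.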
                             V.   ONTOLOGY IMPLEMENTATION

A. Ontology Methods

    Following widely recognized guidelines for ontology development [39], we used the Methontology ontology engineering methodology [40], which enables construction at the conceptual level and allows for development, re-use, or re-engineering of existing ontologies. In the Specification phase we defined the purpose of the ontology, its intended uses, and its end users. In the Conceptualization phase we structured the domain knowledge into meaningful graphical models. In the Formalization phase we represented our conceptual models as a formal or semi-computable model. The Implementation phase supports the ontology development in the Web Ontology Language (OWL). Updates and corrections take place in the Maintenance phase. Our development also included the supporting Methontology activities of Knowledge Acquisition, Evaluation (verification and validation that the ontology represents the domain), Integration (reuse of other available ontologies), Documentation, and Configuration Management. We also adopted IDEF5 methods in the conceptualization and formalization phases to acquire knowledge and develop graphical knowledge representation models. We implemented our taxonomy using an off-the-shelf ontology development tool (Protégé).

    By default, the Protégé tool does not assume that classes are mutually exclusive. This is useful when concepts are most meaningful in combination. For example, high absenteeism, a weak indicator by itself, is made stronger in association with other concerning factors [32], but the risk is mitigated when associated with documented illness, vacation, or maternity leave. As another example, relaxation of the assumption of mutual exclusivity is especially useful when considering various correlated psychological or personality characteristics such as those defined in the Five Factor Model (FFM) of personality traits [41]. There are numerous well-supported relationships between dimensions of personality and various types of counterproductive work behavior [28].

B. Description of the Ontology Classes

    We began by formalizing the hierarchy of concepts provided by the taxonomy discussed in Section IV-A, and translating the hierarchy into parent-child relationships of classes in our ontology. Classes represent objects with similar structure and properties. Classes are arranged hierarchically; those without further subcategories are termed leaf nodes. Individuals in the ontology represent instances of classes. Class relationships other than parent-child are derived from the research literature, available material on insider threat cases, and the experience and judgment of subject-matter experts within the development team. As reuse of previous knowledge models is a key advantage of ontologies and an encouraged practice in ontology engineering, we included as much information from previous work as possible, especially the recent ontology developed by CERT [3][13]. In particular, the Actor, Asset, Action, Event, Temporal Thing, and Information class structures are adopted in total. Selected classes from the Unified Cyber Security Ontology [42] were also incorporated into our ontology. For example, the "Consequence" class is adopted by our ontology but renamed the Outcome class, since this terminology is more consistent with the insider threat case scenario template used by CERT. The concepts of Vulnerability (e.g., [6]) and Catalyst/Trigger events (e.g., [43-44]) are also formalized as classes in our ontology. To capture the temporal information involved in insider threat cases, we imported the Temporal Interval class from the CERT ontology.

Fig. 1. Top-Level Classes

    Figs. 1-3 show the hierarchy of classes in our ontology, as implemented in the Protégé tool. The ontology is derived from the extensive taxonomy described in Section IV-A. Due to space constraints we depict only selected classes, with detail restricted to the 4th level of the hierarchy. A comparison of Tables I and II with Figs. 1-3 shows how the class hierarchy in the ontology represents the organization of domain concepts in the taxonomy. Fig. 1 shows how the ontology accounts for both malicious and non-malicious (unintentional) insider threats. Importantly, we distinguish between actions performed by employees (as insiders) and actions performed by organizations (which may, for example, include poor institutional policies and/or security practices as well as inadequate or exacerbating responses to potential threats). At the same root level we also include classes such as Industry, Insider Threat Risk, Effect, Location, and Outcome as attributes of the organization. Industry may account for differences in organizational rules, regulations, and policies across industry sectors. The Effect class captures information about the impact of the insider criminal activity on the organization(s); for example, the action of injecting a virus into an enterprise network can induce a malfunction in other workstations on the network and/or a full network shutdown. The concept of the consequences of an attack is captured by the Outcome class; for example, the shutdown of the network has an outcome of a halt of the organization's operations and thousands of dollars of loss. The Location class encapsulates geographic information about the source of an attack. The Insider Threat Risk class captures the threat level that would be associated with the individuals of the Actor class based on inference performed over the ontology.

    Fig. 2 expands the Human Factor node of Fig. 1, and Fig. 3 expands the Organizational Factor node. Inspection of the human psychosocial factors in Fig. 2 reveals classes (and associated sub-classes) that correspond to elements of the taxonomy. Acknowledging the Capability-Motive-Opportunity (CMO) model (e.g., [4]), which postulates that the perpetrator of an attack must have the capability, motive, and opportunity to commit the attack, we include these constructs as classes in the ontology. Full implementation of CMO constructs is deferred for future efforts to define relationships among these classes.
Fig. 2. Human Factor Classes (lower-level details for the Life Narrative and Ideology classes are not shown due to space constraints)

    The capability to conduct an attack is in part dependent on an individual's knowledge/skills/abilities that are represented in certain human behavioral factors (cf. [14]), particularly the Biographical Data subclass within the Life Narrative factors class. Motive (or motivation) may be represented within the Intention class (and its malicious or non-malicious subclasses) in Fig. 1; it is also related to psychological characteristics or predispositions such as Static Traits, Dynamic States, and Life Narrative factors (e.g., financial or health problems that may act as stressors)—which are sub-classes of the Human Factor class (see Fig. 2)—as well as Organizational Factors (Fig. 3) that may act as stressors or triggers that can motivate an attack.

    The sub-class Concerning Behavior, within the Human Factor class, contains a large set of individual actions that includes the subcategories Job Performance, Boundary Violation, and Cyber Security Violation. These in turn are broken down into more granular, lower-level constructs (shown in boxes); not shown are even lower levels of the hierarchy and individuals representing instances of the classes.
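    To make the parent-child translation described in Section V-B concrete, the sketch below encodes a small slice of the Human Factor branch as OWL classes using the Python owlready2 library. The IRI, the placement of the Aggressive and Manipulative classes, and the defined Threat class (based on the example in Section IV-B) are illustrative assumptions rather than the ontology's actual axioms.

        from owlready2 import get_ontology, Thing

        onto = get_ontology("http://example.org/insider-threat.owl")  # hypothetical IRI

        with onto:
            # Parent-child (subclass) relationships mirroring a slice of the taxonomy
            class HumanFactor(Thing): pass
            class ConcerningBehavior(HumanFactor): pass
            class JobPerformance(ConcerningBehavior): pass
            class BoundaryViolation(ConcerningBehavior): pass
            class CyberSecurityViolation(ConcerningBehavior): pass
            class DynamicState(HumanFactor): pass
            class Aggressive(DynamicState): pass        # placement assumed for illustration
            class StaticTrait(HumanFactor): pass
            class Manipulative(StaticTrait): pass       # placement assumed for illustration

            # Illustrative defined class: anything that is both Aggressive and
            # Manipulative satisfies the membership conditions of Threat.
            class Threat(Thing):
                equivalent_to = [Aggressive & Manipulative]

        # Classes are not declared disjoint, so an individual may belong to several
        # factor classes at once (consistent with the Protege default noted above).
        john = Aggressive("john")
        john.is_a.append(Manipulative)

    Running a DL reasoner over such a model (e.g., sync_reasoner() in owlready2, or HermiT within Protégé) would then add john to the inferred members of Threat.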
    The initial structure of the ontology grew out of the detailed taxonomic structure that we developed based on subject-matter expertise and our analysis/synthesis of research literature and numerous case studies. A more robust and richer representation has been informed by exploring complex relationships among constructs (e.g., classes, sub-classes, instances) spread across multiple branches of the hierarchy. As a simple example, the ontology recognizes that different types of attack are identified from their relationships with certain aspects of the cyber/technical exploit (e.g., exfiltration requires certain actions performed on sensitive information, such as saving to external media, printing, emailing, uploading to the cloud, etc.). A more complex example may be considered in using the CMO model (mentioned above) to reason about insider risk. By incorporating knowledge of relationships among detected behaviors, individual behavioral factors, and organizational factors, the ontology allows reasoning about the risk associated with detected behaviors in the context of possible motives, capabilities, and opportunity. Relationships and gaps (missing elements in classes) were further identified by exercising the knowledge base using known or fictitious use cases.

Fig. 3. Organizational Factor Class

C. Use Case and Application

    Use cases help to verify the comprehensiveness of the knowledge representation and to identify missing or ill-defined classes and relationships. In this section, we demonstrate the application of the ontology to use cases that include human behavioral factors and organizational factors as well as cyber/technical indicators. In the scenarios described, we use [brackets] to identify significant indicators associated with actions described in the scenario.

    Use Case #1 (see small text box) describes a simple cyber-related insider threat incident. Use Case #2 (see large text box), which entirely subsumes the contextual and technical information regarding the insider threat incident described in the first use case, injects additional human behavioral factors.

Use Case #1

John [PERSON: Insider X] is a long-time system administrator [LIFE NARRATIVE: PERS HISTORY] [CAPABILITY] with access to sensitive and classified information [OPPORTUNITY] in a company that performs government-sponsored R&D [ORGANIZATION: VICTIM ORGANIZATION].

John uses his personal web-based email account from his work computer to communicate with prospective employers [DIGITAL ACTION: EMAIL ACTION]. Then he uses his administrative privileges to access some sensitive intellectual property information [BUSINESS INFORMATION: INTELLECTUAL PROPERTY] that will be of interest to a competitor. John saves these files to his computer [COMPUTER ASSET: WORK PC] and copies the files to a thumb drive [CONCERNING BEHAVIOR: TECH/CYBER VIOLATION–DIGITAL ACTION/COPY ACTION] [PHYSICAL ASSET: USB DRIVE], which he then sneaks out of the office with the intention of using the information to leverage a job offer with a competitor [THEFT EVENT: DATA THEFT]. Subsequently John resigns and accepts a job offer from a competitor.

    It is evident that Use Case #1 lacks substantial contextual information described in Use Case #2 regarding possible contributing or mitigating factors, relevant personal predispositions, or concerning behaviors that may be associated with this individual's insider threat risk. Fig. 4 is a concept map depicting Use Case #2, showing all the behavioral and technical concepts and their associated relations.

Use Case #2

John [PERSON: Insider X] is a long-time system administrator [LIFE NARRATIVE: PERS HISTORY] [CAPABILITY] with access to sensitive and classified information [OPPORTUNITY] in a company that performs government-sponsored R&D [ORGANIZATION: VICTIM ORGANIZATION]. The following input was recorded in his personnel file: (1) One colleague states that John discounts the opinions of colleagues and becomes hostile when colleagues discuss and critique his ideas [STATIC TRAIT: TEMPERAMENT: RESISTS CRITICISM] [DYNAMIC STATE: AFFECT—HOSTILE]. (2) A different colleague states that John seeks to control all aspects of a project and often insists on dominating the conversation about project tasks and approach [STATIC TRAIT: OTHER PERSONALITY DIMENSIONS—AUTHORITARIANISM]. (3) His manager corroborates these inputs and adds that John tends to become argumentative and irritated, and defensively cites his superior knowledge of industry best practices when others criticize his rigid protocols [DYNAMIC STATE: AFFECT–HOSTILE] [STATIC TRAIT: TEMPERAMENT—BIG EGO]. Staff development/performance review assessment includes criticism by colleagues that portions of his protocols are idiosyncratic with weak rationale, and that his rigid protocols have impacted company projects [CONCERNING BEHAVIORS: JOB PERF—NEGATIVE PERF EVALUATION].

John was passed over for a promotion to manage a new, prestigious project [LIFE NARRATIVE: PERS HISTORY: EMPLOYMENT–PASSED OVER FOR PROMOTION]. He files a complaint with HR claiming unfair treatment, and his manager, compelled to meet with him, comes away with the impression that John still harbors resentment over not being promoted. John's most recent evaluation cited a decline in performance [CONCERNING BEHAVIORS: JOB PERF—NEGATIVE PERF EVALUATION]; since being denied the promotion his attitude has been increasingly disgruntled [DYNAMIC STATE: ATTITUDE—DISGRUNTLEMENT]; and there were multiple complaints from coworkers about frequent tardiness [CONCERNING BEHAVIORS: BOUNDARY VIOLATION—ATTENDANCE]. The attendance problem led to a formal, written warning [CONCERNING BEHAVIORS: BOUNDARY VIOLATION–POLICY VIOLATION]. After getting the warning, John talks to his manager and loses his cool—storming out of the office [DYNAMIC STATE: AFFECT–HOSTILE]. A colleague hears John's outburst and tells the manager about John's recent marital separation to provide some context to John's behavior [LIFE NARRATIVE: PERS HISTORY—MAJOR LIFE EVENTS/RECENT CHANGE IN MARITAL STATUS (MARITAL SEPARATION)]. The incident prompts the manager to contact the company Security Office. The Security Office checks the local court records to learn that three weeks ago, John was arrested for allegedly driving under the influence (his first contact with the criminal justice system) [LIFE NARRATIVE: CRIMINAL RECORD—DUI].

Faced with these job and personal stressors, John begins to seek work with a competitor. John contacts a competitor to see if they are interested in him and in proprietary information he can provide. To avoid being noticed, John carries out email dialogue with the competitor by logging into his personal Yahoo web mail account from his work computer [CONCERNING BEHAVIORS: JOB PERFORMANCE—CYBERLOAFING]. Next, John carries out the insider threat attack and resigns, as described in the second paragraph of Use Case #1.
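    To suggest how such a scenario exercises the knowledge base, the sketch below asserts a few of the bracketed indicators from Use Case #1 as RDF triples over the ontology using the Python rdflib library. The namespace, property names, and individual names are illustrative assumptions rather than the ontology's actual vocabulary.

        from rdflib import Graph, Namespace, RDF

        IT = Namespace("http://example.org/insider-threat#")  # hypothetical namespace

        g = Graph()
        g.bind("it", IT)

        # Actor and victim organization
        g.add((IT.InsiderX, RDF.type, IT.Person))
        g.add((IT.VictimOrg, RDF.type, IT.Organization))
        g.add((IT.VictimOrg, IT.grantsAccessTo, IT.InsiderX))      # illustrative property

        # Concerning behavior: copying sensitive files to removable media
        g.add((IT.CopyEvent1, RDF.type, IT.CopyAction))
        g.add((IT.InsiderX, IT.performs, IT.CopyEvent1))
        g.add((IT.CopyEvent1, IT.involvesAsset, IT.UsbDrive1))
        g.add((IT.UsbDrive1, RDF.type, IT.PhysicalAsset))
        g.add((IT.CopyEvent1, IT.involvesInformation, IT.IntellectualProperty1))

        # Focal event
        g.add((IT.TheftEvent1, RDF.type, IT.DataTheft))
        g.add((IT.TheftEvent1, IT.perpetratedBy, IT.InsiderX))

        print(g.serialize(format="turtle"))

    Competency questions like those in Section IV-B can then be run against the populated graph, and any classes or relations found to be missing while encoding a scenario point to gaps in the ontology.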
     Fig. 4. Concept map representation of Use Case #2 (Case #1 is within dashed box). Not all concepts and relations are shown due to space limitations.

    The dashed box in Fig. 4 represents Use Case #1 (due to space limitations, not all details are shown). In a real scenario, detecting concerning behaviors or other factors may require multiple factors to meet threshold requirements for alerts—these are not described or represented here due to space constraints. Events depicted in the use case scenarios are numbered chronologically. Shown in the lower right side of the figure is a timeline (spanning several months for illustrative purposes) suggesting that monitoring of sociotechnical factors may help achieve proactive mitigation goals (getting "left of the boom").

                             VI.   COMPARISON WITH RELATED WORK

    The focus of our effort is to express and represent individual and organizational sociotechnical factors in an ontological characterization of insider threat risk (e.g., [13-14]). Our ontology provides a more robust, richer description of not only the nature of the attack but also possible contributing factors that more fully describe the insider threat to the organization. CERT [13] began with a database of insider threat case descriptions. The information framework underlying this database informed the vocabulary in the ontology. Namely, organizations grant access to persons that perpetrate events that harm the organization. Persons and Organizations are the actors in the CERT ontology, and their actions culminate in events (i.e., insider threat incidents). Instead of a focus on events, our ontology focuses on the insider. Our taxonomy and ontology are based on theories and models of insider threat in the literature that incorporate human behavioral as well as technical indicators of threat (e.g., [10-12]). While the current CERT ontology only describes technical/cyber events, our ontology also includes non-technical or sociotechnical constructs that reflect actions and psychosocial indicators of persons of interest. As a specific example, consider the class Concerning Behaviors. A concerning behavior such as "Leaving a classified security container unlocked and unattended" can be described using two concepts in the CERT ontology: an Asset (e.g., Classified file) and an Action (e.g., Unlock). However, this may not be the focal event, or a precipitating event, in a case description, and there may be other related contributing factors. For example, a previous condition (e.g., organizational reduction in force/layoffs) or individual predispositions (e.g., personality traits, personal stress) may lead to actions that reflect a lack of diligence or motivation in an actor who later commits an act of insider threat (these contributing factors are in part identified in the cybersecurity human factors ontology (HUFO) by [14]). The CERT ontology, in particular, does not connect these behavioral constructs to technical/cyber actions that comprise the actual exploit.

    At a basic level, the Factor class, which contains much of the vocabulary in our ontology, can be placed alongside Assets in the CERT ontology. Both are non-temporal classes that a person can possess (i.e., Things). We integrated the two ontologies and eliminated duplications. All CERT ontology classes were incorporated in this way. There are, however, stark differences between the extent and scope of the CERT ontology and our ontology. The CERT ontology contains a standardized and well-defined vocabulary for describing the actions of insider threats. It contains 31 actions (e.g., Copy), along with six action modifiers (e.g., Suspicious), organized under four major classes to describe digital, financial, and job-related insider threat behavior. These actions can be taken on 26 assets (e.g., USB drive) in three major categories (i.e., Physical, Financial, and Digital) and/or 16 types of information (e.g., Password) organized in seven major categories (i.e., National Security, Technology, Financial, Medical, Classified, Business, and Uniquely Identifiable). Eleven focal events are also captured as classes in the ontology (e.g., Theft), for a total of 125 constructs within their class structure. In contrast to the CERT ontology, our framework is broader and deeper. In addition to containing these constructs, our ontology represents a knowledge base that is six to seven layers deep, comprising a total of over 350 constructs. In sum, we have greatly expanded the CERT ontology by adding classes representing human behavioral and organizational factors of insider threat.
    While not specifically addressing insider threat, the cybersecurity HUFO presented by [14], which focuses on trust, is similar to and largely compatible with our ontology; it defines roughly 48 human factors classes that address characteristics such as motivation, integrity, rationality, benevolence, personality, ideology, ethics, and risk posture, as well as knowledge, skills, and abilities. In comparison, our ontology probes several levels deeper than the HUFO ontology. Further work is planned to integrate relevant features of these ontologies.

                             VII.   CONCLUSIONS AND FUTURE WORK

    Our work addresses two major challenges. First, due to the large number of concepts and their complex interrelationships, the insider threat domain is cumbersome to model. Second, there is a need to establish a common terminology and shared understanding of the complex insider threat domain. We used an exhaustive approach that incorporates into our taxonomy most of the concepts we have encountered in the insider threat literature. We then developed a mapping that transforms the taxonomy into an ontology, and added relationships to the ontology to produce a formal representation of concepts and their interrelationships. By synthesizing the contributions of a diverse set of experts, we developed a knowledge representation that more fully characterizes insider threat indicators—from the perspective of human behavior as well as cyber/technical indicators—and that can be made available in a shareable knowledge base to facilitate reuse and collaboration.

    Beyond its immediate use in providing a common, shareable knowledge base of insider threat problem space constructs, the present research will help to advance efforts to model and mitigate insider threats. Informed by extant research on human and organizational factors associated with insider threats, the constructs and indicators represented in the present ontology can be used to develop models to assess individual risk and organizational vulnerability, as well as to inform operational risk management practices. In addition, by specifying a more comprehensive knowledge base, our ontology facilitates the generation of diverse scenarios for use in red teaming and testing of more holistic insider threat models. Finally, the knowledge base provided here may have further operational impact by informing the structure of data to be captured by enterprises for effective insider threat monitoring and analysis.

    A brief discussion of some limitations of the research reported here may be useful in interpreting progress to date as well as motivating future work. First, our choice to define a taxonomy as a foundation for the ontology meant that the initial structure only specified hierarchical parent-child relationships among constructs. Other relationships were then defined as part of the process of transforming the taxonomy into an ontology. Because our primary interest (and recognized need in modeling insider threats) was to incorporate sociotechnical factors that have been suggested in the research literature, there was also an inherent limitation in the ability to specify robust axioms that reflect more complex relationships among constructs. Ultimately this more complete specification will be required to support inferences about classes and individuals. There is a tradeoff between implementing the asserted classes and individuals versus the inferred constructs. While some of the classes in our ontology are defined by certain inference rules and axioms (e.g., the class Capability categorizes instances based on specified rules), much more work is needed to more fully specify relationships that will ultimately be required to support inferences about insider threat risks. A second limitation is that, while the current ontology has captured salient constructs in the literature, there are certainly more constructs that can and should be added to the ontology. Research should continue the process of encapsulating the entirety of constructs related to insider threat. We are continually populating the individual and organizational classes of the ontology with relevant instances (informed by use cases); we plan to further develop the Capabilities and Opportunities classes and associated relationships, building upon recent related work [14]. Future research should also focus on addressing the need to represent temporal relationships among constructs.

    We use the present forum and others to share these results with the research community. We also plan to extend our ontology into a probabilistic ontology by incorporating information about uncertainty in the insider threat domain. The resulting probabilistic ontology will support reasoning under uncertainty [45]. Probabilistic ontologies combine semantically rich representations that support interoperability and automated reasoning with mathematically well-founded uncertainty management. Advancing research and development of probabilistic ontologies for insider threats will facilitate modeling and tool development. Our ontology provides a rich foundation for logical and probabilistic inferences necessary for protection against insider attacks.

                             REFERENCES

[1] D. M. Cappelli, A. P. Moore, and R. F. Trzeciak, The CERT guide to insider threats: How to prevent, detect, and respond to information technology crimes (theft, sabotage, fraud). Addison-Wesley, 2012.
[2] The White House. Executive Order 13587—Structural Reforms to Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information, October 2011. http://www.whitehouse.gov/the-press-office/2011/10/07/executive-order-structural-reforms-improve-security-classified-networks-
[3] D. L. Costa, M. Collins, J. S. Perl, J. M. Albrethsen, J. G. Silowash, and D. Spooner. An Ontology for Insider Threat Indicators. In K. B. Laskey, I. Emmons, and P. C. G. Costa (Eds.), Proceedings of the Ninth Conference on Semantic Technologies for Intelligence, Defense, and Security (STIDS 2014), 2014, 48–53.
[4] E. E. Schultz, "A framework for understanding and predicting insider attacks." Computers & Security, 2002, vol. 21, 526–531.
[5] E. D. Shaw, J. M. Post, and K. G. Ruby, "Inside the mind of the insider." Security Management, 1999, vol. 43(12), 34–42.
[6] M. R. Randazzo, M. M. Keeney, E. F. Kowalski, D. M. Cappelli, and A. P. Moore. Insider threat study: illicit cyber activity in the banking and financial sector. Carnegie-Mellon University, Software Engineering Institute. CMU/SEI-2004-TR-021, 2004.
[7] S. R. Band, D. M. Cappelli, L. F. Fischer, A. P. Moore, E. D. Shaw, and R. F. Trzeciak. Comparing insider IT sabotage and espionage: a model-based analysis. Carnegie-Mellon University, Software Engineering Institute, CERT Coordination Center. CMU/SEI-2006-TR-026, 2006.
[8] F. L. Greitzer, L. J. Kangas, C. F. Noonan, C. R. Brown, and T. Ferryman. Psychosocial modeling of insider threat risk based on behavioral and word use analysis. e-Service Journal, 2013, 9(1), 106–138. http://www.jstor.org/stable/10.2979/eservicej.9.1.106
[9] M. Maasberg, J. Warren, and N. L. Beebe. The dark side of the insider: Detecting the insider threat through examination of dark triad personality traits. 48th Hawaii International Conference on System Sciences, IEEE, 2015, 3518–3526. DOI 10.1109/HICSS.2015.423
[10] F. L. Greitzer and R. E. Hohimer. "Modeling Human Behavior to Anticipate Insider Attacks." Journal of Strategic Security, 2011, 4(2), 25–48. http://scholarcommons.usf.edu/jss/vol4/iss2/
[11] R. E. Hohimer, F. L. Greitzer, C. F. Noonan, and J. D. Strasburg. "CHAMPION: Intelligent Hierarchical Reasoning Agents for Enhanced Decision Support." In Semantic Technology for Intelligence, Defense, and Security (STIDS 2011), 2011, 36–43.
[12] E. T. Axelrad, P. J. Sticha, O. Brdiczka, and J. Shen, "A Bayesian network model for predicting insider threats." IEEE SPW Workshop on Research for Insider Threat (WRIT), San Francisco, CA, 2013, 82–89.
[13] D. L. Costa, M. J. Albrethsen, M. L. Collins, S. J. Perl, G. J. Silowash, and D. L. Spooner. An Insider Threat Indicator Ontology. Technical Report CMU/SEI-2016-TR-007. Pittsburgh, PA: SEI, 2016.
[14] A. Oltramari, D. H. Henshel, M. Cains, and B. Hoffman. "Towards a human factors ontology for cyber security." In Semantic Technology for Intelligence, Defense, and Security (STIDS 2015), 2015, 26–33.
[15] W. E. Chaplin, O. P. John, and L. R. Goldberg. "Conceptions of states and traits: Dimensional attributes with ideals as prototypes." Journal of Personality and Social Psychology, 1988, 54(4), 541–557.
[16] R. Steyer, A. Mayer, C. Geiser, and D. A. Cole. "A theory of states and traits—Revised." Annual Review of Clinical Psychology, 2015, 11, 71–98.
[17] S. C. Roesch, A. A. Aldridge, S. N. Stocking, F. Villodas, Q. Leung, C. E. Bartley, and L. J. Black. "Multilevel factor analysis and structural equation modeling of daily diary coping data: Modeling trait and state variation." Multivariate Behavioral Research, 2010, 45(5), 767–789.
[18] L. Van Gelder and R. E. De Vries. "Traits and states at work: Lure, risk and personality as predictors of occupational crime." Psychology, Crime & Law, 2016, 22(7), 701–720. DOI 10.1080/1068316X.2016.1174863
[19] D. F. Grös, L. J. Simms, M. M. Antony, and R. E. McCabe. "Psychometric properties of the State–Trait Inventory for Cognitive and Somatic Anxiety (STICSA): Comparison to the State–Trait Anxiety Inventory (STAI)." Psychological Assessment, 2007, 19(4), 369–381.
[20] K. S. Douglas, S. D. Hart, C. D. Webster, and H. Belfrage. HCR-20V3: Assessing risk of violence – User guide. Burnaby, Canada: Mental Health, Law, and Policy Institute, Simon Fraser University, 2013.
[21] J. R. Meloy, S. G. White, and S. Hart. "Workplace assessment of targeted violence risk: The development and reliability of the WAVR-21." Journal of Forensic Sciences, 2013, 58(5), 1353–1358.
[22] E. D. Shaw and L. F. Fischer. Ten Tales of Betrayal: The Threat to Corporate Infrastructures by Information Technology Insiders. Report 1—Overview and General Observations. Technical Report 05-04, April 2005. Monterey, CA: Defense Personnel Security Research Center.
[23] M. Gelles. Exploring the mind of the spy. In Online Employees’ Guide to Security Responsibilities: Treason 101, 2005. Retrieved from Texas A&M University Research Foundation website: http://www.dss.mil/search-dir/training/csg/security/Treason/Mind.htm
[24] J. L. Krofcheck and M. G. Gelles. Behavioral Consultation in Personnel Security: Training and Reference Manual for Personnel Security Professionals. Yarrow and Associates, 2005.
[25] D. Bulling, M. Scalora, R. Borum, J. Panuzio, and A. Donica. Behavioral science guidelines for assessing insider threats. Publications of the University of Nebraska Public Policy Center, Paper 37, 2008. http://digitalcommons.unl.edu/publicpolicypublications/37
[26] D. B. Parker. Fighting computer crime: A new framework for protecting information. New York, NY: John Wiley & Sons, Inc., 1998.
[27] F. L. Greitzer, A. P. Moore, D. M. Cappelli, D. H. Andrews, L. A. Carroll, and T. D. Hull. Combating the insider threat. IEEE Security & Privacy, January/February 2008, 61–64.
[28] E. D. Shaw, L. F. Fischer, and A. E. Rose. Insider risk evaluation and audit (No. TR-09-02). Monterey, CA: Defense Personnel Security Research Center, 2009.
[29] B. Zadeh and F. L. Greitzer. "Motivation and Capability Modeling for Threat Anticipation." OSD Human Social Culture Behavior (HSCB) Modeling Program Conference, Chantilly, VA, 5-7 August 2009.
[30] F. L. Greitzer and D. A. Frincke. "Combining traditional cyber security audit data with psychosocial data: towards predictive modeling for insider threat," in Insider Threats in Cyber Security, vol. 49, C. W. Probst et al., Eds., Springer US, 2010, 85–114.
[31] F. L. Greitzer, L. J. Kangas, C. F. Noonan, and A. Dalton. Identifying at-risk employees: A behavioral model for predicting potential insider threats. PNNL-19665, Richland, WA: Pacific Northwest National Laboratory, 2010. http://www.pnl.gov/main/publications/external/technical_reports/PNNL-19665.pdf
[32] F. L. Greitzer, L. J. Kangas, C. F. Noonan, A. Dalton, and R. E. Hohimer. "Identifying at-risk employees: a behavioral model for predicting potential insider threats." Hawaii International Conference on System Sciences, Maui, HI, Jan 4-7, 2012.
[33] Software Engineering Institute (SEI). Analytic approaches to detect insider threats. White Paper, SEI, December 9, 2015. http://resources.sei.cmu.edu/asset_files/WhitePaper/2015_019_001_451069.pdf
[34] S. Dekker. The field guide to human error investigations. Burlington, VT: Ashgate, 2002.
[35] D. J. Pond and K. R. Leifheit. "End of an error." Security Management, 2003, 47(5), 113–117.
[36] D. J. Pond and F. L. Greitzer. "Error-based accidents and security incidents in nuclear materials management." Institute of Nuclear Materials Management 46th Annual Meeting, Phoenix, AZ, 2005. http://www.osti.gov/scitech/biblio/966022
[37] R. Baron and J. Neuman. Workplace violence and workplace aggression: Evidence on their relative frequency and potential causes. Aggressive Behavior, 1996, vol. 22, no. 3, 161–173.
[38] F. L. Greitzer, J. Strozer, S. Cohen, J. Bergey, J. Cowley, A. Moore, and D. Mundie. "Unintentional insider threat: contributing factors, observables, and mitigation strategies." 47th Hawaii International Conference on System Sciences (HICSS-47), Big Island, Hawaii, 2014.
[39] N. F. Noy and D. L. McGuinness. Ontology Development 101: A Guide to Creating Your First Ontology. SMI-2001-0880 (also available as KSL Technical Report KSL-01-05), 2001.
[40] M. Fernández-López and A. Gómez-Pérez. Overview and analysis of methodologies for building ontologies. The Knowledge Engineering Review, 2002, 17(2), 129–156.
[41] L. R. Goldberg, "The structure of phenotypic personality traits." American Psychologist, 1993, vol. 48, 26–34.
[42] Z. Syed, A. Padia, T. Finin, L. Mathews, and A. Joshi. UCO: A Unified Cybersecurity Ontology (Tech.). Baltimore, MD, 2016.
[43] W. R. Claycomb, C. L. Huth, L. Flynn, D. M. McIntire, and T. B. Lewellen. Chronological Examination of Insider Threat Sabotage: Preliminary Observations. JoWUA, 2012, 3(4), 4–20.
[44] J. R. C. Nurse, O. Buckley, P. A. Legg, M. Goldsmith, S. Creese, G. R. T. Wright, and M. Whitty. Understanding Insider Threat: A Framework for Characterising Attacks, 2014.
[45] R. N. Carvalho, K. B. Laskey, and P. C. Costa. "Uncertainty modeling process for semantic technology." PeerJ Computer Science, 2016, 2:e77. https://doi.org/10.7717/peerj-cs.77