<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Recognizing and Countering Biases in Intelligence Analysis with TIACRITIS</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author"><name><surname>Tecuci</surname><given-names>Gheorghe</given-names></name></contrib>
        <contrib contrib-type="author"><name><surname>Schum</surname><given-names>David</given-names></name></contrib>
        <contrib contrib-type="author"><name><surname>Marcu</surname><given-names>Dorin</given-names></name></contrib>
        <contrib contrib-type="author"><name><surname>Boicu</surname><given-names>Mihai</given-names></name></contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Learning Agents Center, Volgenau School of Engineering, George Mason University</institution>
          ,
          <addr-line>Fairfax, VA 22030</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2013</year>
      </pub-date>
      <fpage>5</fpage>
      <lpage>12</lpage>
      <abstract>
<p>This paper discusses different biases which have been identified in intelligence analysis and how TIACRITIS, a knowledge-based cognitive assistant for evidence-based hypothesis analysis, can help recognize and partially counter them. After reviewing the architecture of TIACRITIS, the paper shows how it helps recognize and counter many of the analysts' biases in the evaluation of evidence, in the perception of cause and effect, in the estimation of probabilities, and in the retrospective evaluation of intelligence reports. Then the paper introduces three other types of bias that are rarely discussed: biases of the sources of testimonial evidence, biases in the chain of custody of evidence, and biases of the consumers of intelligence, which can also be recognized and countered with TIACRITIS.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Keywords</title>
      <p>Bias, cognitive assistant, intelligence analysis, evidence-based
reasoning, argumentation, symbolic probabilities.</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>
        Intelligence analysts face the difficult task of drawing
defensible and persuasive conclusions from masses of
evidence, requiring the development of often stunningly
complex arguments that establish and defend the three major
credentials of evidence: relevance, believability, and inferential
force [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This highly complex task is affected by various
biases, which are inclinations or preferences that interfere with
impartial judgment. Some of the biases are due to our
simplified information processing strategies that lead to
consistent and predictable mental errors. These errors remain
compelling even when one is fully aware of their nature, and
are therefore exceedingly difficult to overcome [2, pp.111-112].
      </p>
      <p>
        In this paper we propose an approach to the identification
and countering of the biases in intelligence analysis. The
approach is based on the observation that the best protection
against biases comes from the collaborative effort of teams of
analysts, who become skilled in the evidential and
argumentational elements of their tasks, and who are willing to
share their insights with colleagues, who are also willing to
listen. As we discuss in this paper, this could be achieved by
employing an intelligent analytic tool like TIACRITIS [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
which helps the analyst perform a rigorous evidence-based
hypothesis analysis that makes explicit all the reasoning steps,
probabilistic assessments, and assumptions, so that they can be
critically analyzed and debated. The name TIACRITIS is an
abbreviation of Teaching Intelligence Analysts Critical
Thinking Skills, which was the initial motivation of developing
this system. The system was later extended to also support its
use for regular analysis.
      </p>
      <p>In the next section we introduce the architecture of the
TIACRITIS cognitive assistant, which is based on semantic
technologies for knowledge representation, reasoning, and
learning. Then, in Section III, we address the analysts’ biases
discussed by Heuer [2, pp.111-171]: biases in the evaluation of
evidence, in the perception of cause and effect, in the
estimation of probabilities, and in the retrospective evaluation
of intelligence reports. After that we address three other origins
of bias that are rarely discussed, even though they may be at
least as important on occasion as any analysts’ biases.</p>
      <p>This research was partially supported by the Department of Defense and by George Mason University. The views and conclusions contained in this document are
those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Department
of Defense or the U.S. Government.</p>
    </sec>
    <sec id="sec-3">
      <title>THE TIACRITIS COGNITIVE ASSISTANT</title>
      <p>
        TIACRITIS is a knowledge-based system that supports an
intelligence analyst in performing evidence-based hypothesis
analysis in the framework of the scientific method. It guides the
analyst to view intelligence analysis as ceaseless discovery of
evidence, hypotheses, and arguments in a non-stationary world,
involving collaborative processes of evidence in search of
hypotheses, hypotheses in search of evidence, and evidentiary
testing of hypotheses [
        <xref ref-type="bibr" rid="ref1 ref3">1, 3</xref>
        ]. Fig.1 is an abstract illustration of
this astonishingly complex process. First we search for possible
hypotheses that would explain a surprising observation E* (see
the left side of Fig.1): It is possible that F might be true.
Therefore G might be true. Therefore H, a hypothesis of high
interest, might be true. The problem with drawing this
conclusion, however, is that there are other hypotheses that also
explain E*, such as F’, G’, and H’. To conclude H we would
need to assess all the competing hypotheses, showing that F, G,
and H are more likely than their competitors.
      </p>
      <p>Fig. 1. Scientific method framework of TIACRITIS: evidence in search
of hypotheses, hypotheses in search of evidence, and evidentiary tests of
hypotheses.</p>
      <p>Let us assume that we have shown that F and G are more
likely than their corresponding competing hypotheses. Next we
have to assess H, H’, … . To assess H we need additional
evidence which is obtained by successively decomposing H
into simpler and simpler hypotheses, as shown by the blue tree
in the right part of Fig.1. H would be true if G and M were
true. Then M would be true if N, Q, and S were true. And if
N were true, then we would expect to observe evidence En*.
So we look for En* and we may or may not find it. This is the
process of hypotheses in search of evidence that guides the
evidence collection task. Now some of the newly discovered
items of evidence (e.g. En*) may trigger new hypotheses, or the
refinement of the current hypotheses. Therefore, as indicated at
the bottom part of Fig.1, the processes of evidence in search of
hypotheses and hypotheses in search of evidence take place at
the same time, and in response to one another.</p>
      <p>
        Then we use all the collected evidence to assess the
hypothesis H. This assessment is probabilistic in nature
because the evidence is always incomplete, usually
inconclusive, frequently ambiguous, commonly dissonant, and
has various degrees of believability [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In the computational
theory of intelligence analysis we have developed [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
hypotheses assessment is based on a combination of ideas from
the Baconian probability system [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and the Fuzzy probability
system [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and uses a symbolic probability scale. In particular,
in the latest version of TIACRITIS, the likeliness of a
hypothesis may have one of the following ordered values:
no support &lt; likely &lt; very likely &lt; almost certain &lt; certain
In this scale, “no support” means that our evidence does not
support the conclusion that the hypothesis is true. This may,
however, change if new evidence favoring the hypothesis is
later discovered. The likeliness of an upper-level hypothesis
(e.g., H) is obtained from the likeliness of its sub-hypotheses
(i.e., G and M) by using min or max Baconian and Fuzzy
combination functions, depending on whether the
subhypotheses G and M represent necessary and sufficient
conditions for the hypothesis H, sufficient conditions, or just
indicators. Competing hypotheses (e.g., H’) are assessed in a
similar way and the most likely hypothesis is selected. But if no
hypothesis is more likely than all its competitors, then the
processes of hypotheses in search of evidence, and evidence in
search of hypotheses have to be resumed.
      </p>
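<p>As an illustration of the ordered scale and the min/max combination
functions just described, here is a minimal Python sketch. It is not the
TIACRITIS implementation, and the hypothesis values are hypothetical:</p>

```python
# Ordered symbolic probability scale used for the likeliness of hypotheses.
SCALE = ["no support", "likely", "very likely", "almost certain", "certain"]

def combine_min(*likelinesses):
    """Sub-hypotheses that are jointly necessary and sufficient:
    the parent is only as likely as its weakest child (min)."""
    return min(likelinesses, key=SCALE.index)

def combine_max(*likelinesses):
    """Independently sufficient conditions or mere indicators:
    one strong child is enough (max)."""
    return max(likelinesses, key=SCALE.index)

# As in Fig. 1, if H reduces to the necessary sub-hypotheses G ("likely")
# and M ("very likely"), then H is assessed as "likely".
print(combine_min("likely", "very likely"))  # -> likely
```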
      <p>
        TIACRITIS was developed by first customizing the
Disciple learning agent shell (a general agent building tool [
        <xref ref-type="bibr" rid="ref6 ref7">6,
7</xref>
        ]) into a learning agent shell for intelligence analysis, and then
by training it with analysis
knowledge from several
domains [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The overall
architecture of the
Disciple learning agent
shell for intelligence
analysis is shown in Fig.
2. It contains integrated
modules for ontology
development, rule
learning, problem solving
and evidence-based
reasoning,
mixed-initiative interaction, and
tutoring, as well as a
hierarchically organized
repository of knowledge
bases (KB). At the top
level of this repository is
the general knowledge
base for intelligence
analysis (IA KB) which
contains knowledge applicable to the evidence-based analysis
of any type of intelligence hypothesis, from any domain. Under
it, and inheriting from it, are domain-specific knowledge bases.
Each such Domain KB contains knowledge specific to a
particular type of IA problems, such as predictive analysis
related to energy sources, or assessments related to the current
production of weapons of mass destruction by various actors.
Under each Domain KB there are several Scenario KBs, each
corresponding to an instance of a problem pattern from that
domain, such as, “Assess whether the United States will be a world
leader in wind power within the next decade.” This particular
Scenario KB contains specific knowledge about the United
States, as well as items of evidence to make the corresponding
analysis. The actual analysis is done by using this knowledge
as well as more general knowledge inherited from the
corresponding Domain KB and from the IA KB.
      </p>
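<p>The inheritance among the Scenario, Domain, and IA knowledge bases
described above can be sketched as a chain of lookups. This is a
hypothetical Python illustration; the keys shown are invented placeholders,
not the actual KB contents:</p>

```python
from collections import ChainMap

# Top-level knowledge applicable to any evidence-based analysis.
ia_kb = {"rule:assess-believability": "...",
         "ontology:testimonial evidence": "..."}

# A Domain KB inherits from the IA KB and adds domain-specific patterns.
domain_kb = ChainMap({"pattern:energy-leadership-assessment": "..."}, ia_kb)

# A Scenario KB inherits from its Domain KB and adds case-specific evidence.
scenario_kb = ChainMap({"evidence:E1* (US wind-capacity report)": "..."},
                       domain_kb)

# A lookup tries the Scenario KB first, then the Domain KB, then the IA KB.
assert "rule:assess-believability" in scenario_kb  # inherited from the IA KB
```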
      <p>Fig. 2. Architecture of the Disciple learning agent shell for intelligence
analysis: a graphical user interface with tutoring and testing, problem
solving, and evidence-based reasoning modules; ontology development,
multistrategy learning, and mixed-initiative interaction modules; and
repository and knowledge management over the hierarchy of the IA KB,
Domain KBs, and Scenario KBs.</p>
      <p>
        Each of these knowledge bases is structured into an
ontology of concepts and a set of general problem solving rules
expressed with these concepts. The rules are learned from
specific examples of reasoning steps, by using the ontology as
a generalization hierarchy [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The learning agent shell for
intelligence analysis was obtained by training the Disciple
learning agent shell with general intelligence analysis
knowledge resulting in the development of the IA KB. The IA KB
contains both a general ontology and a set of general reasoning
rules which are necessary for any Disciple agent for
intelligence analysis, as we will briefly present in the
following. For example, Fig. 3 shows a general ontology of
evidence. It includes both basic types (e.g., testimonial
evidence and tangible evidence), as well as evidence mixtures
(e.g., testimonial evidence about tangible evidence). The
ontology language of Disciple is an extension of RDFS [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
with additional features to facilitate learning [
        <xref ref-type="bibr" rid="ref10 ref6 ref7">6, 7, 10</xref>
        ].
      </p>
      <p>Learned general rules from the IA KB include those for
directly assessing a hypothesis based on evidence. These rules
automatically reduce the assessment of a leaf hypothesis, such
as Q in Fig.1, to assessments based on favoring and disfavoring
evidence and, further down, to the assessment of the relevance
and the believability of each item of evidence with respect to
Q. Once these assessments are made, they are combined, from
bottom-up, to obtain the inferential force of all the items of
evidence on Q, which results in the likeliness of Q.</p>
      <p>
        An example of a learned rule is shown in Fig. 4. It is an
if-then problem reduction rule that expresses how and under what
conditions a generic hypothesis can be reduced to simpler
generic hypotheses. The conditions are represented as
first-order logical expressions [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In particular, this rule states that,
in order to assess the believability of unequivocal testimonial
evidence obtained at second hand, one needs to assess both the
believability of our source, and the believability of the source
of our source. It is by the application of such rules that an agent
can generate the reduction part of the trees in Fig.1 and Fig.5.
      </p>
      <p>The ontology and the rules from the knowledge repository
of TIACRITIS allow it to support the analyst in formulating
hypotheses, developing arguments that reduce complex
hypotheses to simpler and simpler ones (as discussed above),
collecting evidence relevant to the simplest hypotheses, and
finally assessing the relevance, the believability, and the
inferential force of evidence, and the likeliness of the
hypotheses. Additionally, TIACRITIS continuously learns
from the performed analyses.</p>
      <p>As discussed in the rest of this paper, TIACRITIS has one
additional important capability. It supports the analysts in
recognizing and countering many of their biases. Because
Heuer has made a detailed and very well-known analysis of
biases in intelligence analysis [2, pp.111-171], we follow his
classification and the characteristics of biases he identified to show
how TIACRITIS helps recognize and counter many of
them.</p>
      <p>III. BIASES OF THE ANALYST</p>
      <p>A. Biases in the Evaluation of Evidence</p>
      <p>Heuer first mentions vividness of evidence as a necessary
criterion for establishing its force. Analysts, like other persons,
have preferences for certain kinds of evidence and these
preferences can induce biases. In particular, analysts can have a
distinct preference for vivid or concrete evidence when less
vivid or concrete evidence may be more inferentially valuable.
In addition, their personal observations may be over-valued.</p>
      <p>First, as discussed in the previous section, the hypotheses in
search of evidence phase of the analysis helps identify a wide
range of evidentiary needs. For example, the argumentation in
Fig. 1 shows that we need evidence relevant to N, evidence
relevant to Q, evidence relevant to S, etc. It is unlikely that we
would have vivid evidence for each basic hypothesis. So we
would be forced to use less vivid evidence as well.</p>
      <p>Second, as illustrated by the abstract analysis example in
Fig. 5 and discussed in the following, TIACRITIS guides us to
assess a simple hypothesis Q by performing a uniform,
detailed, and systematic evaluation of each item of evidence,
regardless of its “vividness”, helping us be more objective in
the evaluation of the force of evidence.</p>
      <p>Let us first consider how to assess the probability of Q
based only on one item of favoring evidence Ek* (see the
bottom of Fig. 5). First notice that we call this likeliness of Q,
and not likelihood, because in classic probability theory
likelihood is P(Ek*|Q), while here we are interested in
P(Q|Ek*), the posterior probability of Q given Ek*. With
TIACRITIS, to assess Q based only on Ek*, we have three
judgments to make by answering three questions:</p>
      <p>The relevance question is: How likely is Q, based only on
Ek* and assuming that Ek* is true? If Ek* favors Q, then our
answer should be one of the values from “likely” to “certain.”
If Ek* is not relevant to Q then our answer should be “no
support” because Ek* provides no support for the truthfulness
of Q. If, however, Ek* disfavors Q, then it favors the negation
(or complement) of Q, and it should be moved under Qc.</p>
      <p>The believability question is: How likely is it that Ek* is
true? Here the answer should be one of the values from “no
support” to “certain.” “Certain” means that we are sure that the
event Ek reported in Ek* did indeed happen. “No support”
means that Ek* provides us no reason to believe that the event
Ek reported in Ek* did happen. For example, we believe that
the source of Ek* has lied to us.</p>
      <p>The inferential force question is: How likely is Q based
only on Ek*? TIACRITIS automatically computes this answer
as the minimum of the relevance and believability answers.
Indeed, to believe that Q is true based only on Ek*, Ek* should
be both relevant to Q and believable.</p>
      <p>When we assess a hypothesis Q we may have several items
of evidence, some favoring it and some disfavoring it. The
favoring evidence is used to assess the likeliness of Q and the
disfavoring evidence to assess the likeliness of Qc. Because
disfavoring evidence for Q is favoring evidence for Qc, the
assessment process for Qc is similar to the assessment for Q.</p>
      <p>When we have several items of favoring evidence, we
evaluate Q based on each of them (as was explained above),
and then we compose the obtained results. This is illustrated in
Fig.5 where the assessment of Q based only on Ei* (almost
certain) is composed with the assessment of Q based only on
Ek* (likely), through the maximum function, to obtain the
assessment of Q based only on favoring evidence (almost
certain). In this case the use of the maximum function is
justified because it is enough to have one item of evidence that
is both very relevant and very believable to make us believe
that the hypothesis is true.</p>
      <p>Let us now assume that Qc based only on disfavoring
evidence is “likely.” How should we combine this with the
assessment of Q based only on favoring evidence? As shown at
the top of Fig.5, TIACRITIS uses an on balance judgment:
Because Q is “almost certain” and Qc is “likely,” it concludes
that, based on all available evidence, Q is “very likely.”</p>
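<p>The three judgments and the two composition steps just described can be
put together in a short Python sketch. It is not the TIACRITIS
implementation; the relevance and believability values are hypothetical, and
the arithmetic in on_balance is only an assumed reading that reproduces the
example above (the disfavoring side lowers Q by as many scale steps as its
own strength):</p>

```python
# Ordered symbolic probability scale (illustrative sketch).
SCALE = ["no support", "likely", "very likely", "almost certain", "certain"]

def inferential_force(relevance, believability):
    # An item of evidence supports Q only to the extent that it is
    # both relevant and believable (min).
    return min(relevance, believability, key=SCALE.index)

def favoring(item_assessments):
    # One very relevant and very believable item is enough (max).
    return max(item_assessments, key=SCALE.index)

def on_balance(q, q_complement):
    # Assumed reading of the on-balance judgment, consistent with the
    # example above: Q is lowered by the strength of Qc.
    return SCALE[max(0, SCALE.index(q) - SCALE.index(q_complement))]

q_fav = favoring([
    inferential_force("almost certain", "certain"),  # Ei*: almost certain
    inferential_force("likely", "very likely"),      # Ek*: likely
])
print(on_balance(q_fav, "likely"))  # Qc is "likely" -> Q is "very likely"
```

<p>Running the sketch yields “very likely” for Q, matching the on-balance
conclusion above.</p>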
      <p>Heuer also mentions the absence of evidence as another
origin of bias. The bias here concerns a failure to consider the
degree of completeness of available evidence. Consider again
the argumentation from Fig. 1 which decomposes complex
hypotheses into simpler sub-hypotheses that are assessed based
on evidence. This argumentation structure makes very clear
that S is not supported by any evidence. Thus the analyst
should lower her confidence in the final conclusion, countering
the absence of evidence bias.</p>
      <p>
        The next source of bias mentioned by Heuer is a related
one: oversensitivity to evidence consistency, and not enough
concern about the amount of evidence we have. This kind of
bias can easily manifest when using an analytic tool like
Heuer’s ACH [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] where the analyst judges alternative
hypotheses based on evidence, without building any argumentation.
With TIACRITIS, the argumentation will reveal if most of the
evidence is only relevant to a small fraction of sub-hypotheses,
while many other sub-hypotheses have no evidentiary support.
For example, the argumentation from Fig. 1 shows that most of
the evidence is related to hypothesis Q.
      </p>
      <p>According to Heuer [2, pp. 121-122]: “When working with
a small but consistent body of evidence, analysts need to
consider how representative that evidence is of the total body
of potentially available information.” The argumentation from
Fig. 1 makes very clear that the available evidence is not
representative of all the potentially available information. We
have no evidence relevant to S. If we later found
evidence indicating “no support” for S, then the
considered argumentation would provide “no support” for the
top-level hypothesis H. When faced with sub-hypotheses for
which there is no evidence (e.g., S in Fig. 1), TIACRITIS
allows the analyst to consider various what-if scenarios,
making alternative assumptions with respect to the likeliness of
S, and determining their influence on the likeliness of H. This
should inform the analyst on how to adjust her confidence in
the analytic conclusion, to counter the oversensitivity to
evidence consistency bias.</p>
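<p>Such what-if scenarios can be sketched as follows. This is a hypothetical
Python illustration: the leaf values for N and Q are invented, and
min-combination is assumed throughout the reduction tree:</p>

```python
# Ordered symbolic scale and min-combination (illustrative sketch).
SCALE = ["no support", "likely", "very likely", "almost certain", "certain"]

def lmin(*values):
    return min(values, key=SCALE.index)

def assess_h(s_assumption):
    # Reduction tree of Fig. 1 with hypothetical leaf values: H reduces
    # to G and M; M reduces to N, Q, and S. S has no evidence, so its
    # likeliness is supplied as a what-if assumption.
    m = lmin("likely", "almost certain", s_assumption)  # N, Q, S
    return lmin("likely", m)                            # G, M

for what_if in SCALE:
    print(f"if S is {what_if!r} then H is {assess_h(what_if)!r}")
# With min-combination, assuming "no support" for S drives H down to
# "no support", so the conclusion hinges heavily on this assumption.
```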
      <p>
        Finally, Heuer lists the persistence of impressions based on
discredited evidence as an origin of bias. If Heuer had written
his book in 2003, he might have used the case of Curveball as a
very good example [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In this case, Curveball’s evidence was
discredited on a number of grounds but was still believed and
taken seriously by some analysts as well as many others.
      </p>
      <p>TIACRITIS helps counter this bias by incorporating into
the argumentation an explicit analysis of the believability of
evidence, especially for key evidence that has a direct influence
on the analytic conclusion. When such an evidence item is
discredited, specific elements of its analysis are updated, and
this leads to the automatic updating of the likeliness of each
hypothesis to which it is relevant. For example, as shown in the
left hand side of Fig. 6, the believability of the observations
performed by a source (such as Curveball) depends on the source’s
competence and credibility. Moreover, competence depends on
access and understandability. Credibility depends on veracity,
objectivity, and observational sensitivity under the conditions
of observation. Thus, the bias that would result from the
persistence of impressions based on discredited evidence is
countered in TIACRITIS with a rigorous, detailed and explicit
believability analysis.</p>
      <p>Fig. 6. Believability analysis of a source S: competence (access and
understandability) and credibility (veracity, objectivity, and observational
sensitivity), combined through min.</p>
      <p>But there are additional biases in the evaluation of evidence
that Heuer does not mention, particularly with respect to
establishing the credentials of evidence: relevance,
believability, and inferential force or weight. An analyst may
confuse the competence of a HUMINT source with his/her
credibility. Or, the analyst may focus on the veracity of the
source and ignore the source’s objectivity and observational
sensitivity. Analysts may fail to recognize possible synergisms
in convergent evidence, as happened in the 9/11/2001 disaster.
Analysts may even overlook evidence having significant
inferential force.</p>
      <sec id="sec-3-1">
        <title>B. Biases in the Perception of Cause and Effect</title>
        <p>As noted by Heuer, analysts seek explanations for the
occurrence of events and phenomena. These explanations
involve assessments of causes and effects. But biases arise
when analysts assign causal relations to those that are actually
accidental or random in nature. One related consequence is that
analysts often overestimate their ability to predict future events
from past events, because there is no causal association
between them. One major reason for these biases is that
analysts may not have the requisite level of understanding of
the kinds and amount of information necessary to infer a
dependable causal relationship.</p>
        <p>According to Heuer, when feasible, the “increased use of
scientific procedures in political, economic, and strategic
research is much to be encouraged”, to counter these biases [2,
p.128]. Because TIACRITIS makes all the judgments explicit,
they can be examined by other analysts to determine whether
they contain any mistakes or are incomplete. Because different
people have different biases, comparing and debating analyses
of the same hypothesis made by different analysts can also help
identify individual biases. Finally, as a learning system,
TIACRITIS can acquire correct reasoning patterns from expert
analysts which can then be used to analyze similar hypotheses.</p>
        <p>Now, here is something that can occur in any analysis
concerning chains of reasoning. It is always possible that an
analyst’s judgment will be termed biased or fallacious, on
structural grounds if it is observed that this analyst frequently
leaves out important links in his/her chains of reasoning. This
is actually a common occurrence since, in fact, there is no such
thing as a uniquely correct or perfect argument. Someone can
always find alternative arguments to the same hypothesis; what
this says is that there may be entirely different inferential routes
to the same hypothesis. Another possibility is that someone
may find arguments based on the same evidence that lead to
different hypotheses. This is precisely why there are trials at
law; the prosecution and defense will find different arguments,
and tell different stories, from the same body of evidence.</p>
      </sec>
      <sec id="sec-3-2">
        <title>C. Biases in Estimating Probabilities</title>
        <p>
          There are different views among probabilists on how to
assess the force of evidence [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. The view of probability that
Heuer assumes is the conventional view of probability which
might be best called the Kolmogorov view of probability since
the Russian mathematician was the first one to put this view of
probability on an axiomatic basis [
          <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
          ]. This is also the only
view of probability considered by Heuer’s sources of inspiration
on biases: Daniel Kahneman, Amos Tversky, and their many
colleagues in psychology [
          <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
          ]. In his writings, Kolmogorov
makes it abundantly clear that his axioms apply only to
instances in which we can determine probabilities by counting.
But Heuer also notes that intelligence analysis usually deals
with one-of-a-kind situations for which there are never any
statistics. In such cases, analysts resort to subjective or personal
numerical probability expressions. He discusses several reasons
why verbal assessments of probability are frequently criticized
for their ambiguity and the misunderstandings they cause. In his discussion he
recalls Sherman Kent’s advice that verbal assessments should
always be accompanied by numerical probabilities [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
        </p>
        <p>Since Heuer only considers numerical probabilities
conforming to the Kolmogorov axioms, any biases associated
with them (e.g., using the availability rule, the anchoring
strategy, expressions of uncertainty, assessing the probability
of a scenario) are either irrelevant or not directly applicable to a
type of analysis that is based on different probability systems,
such as the one performed with TIACRITIS, which is based on
the Baconian and Fuzzy probability systems. Indeed, analysts
using TIACRITIS never assess any numerical probabilities.</p>
        <p>Heuer [2, p.122] mentions coping with evidence of
uncertain accuracy as an origin of bias: “The human mind has
difficulty coping with complicated probabilistic relationships,
so people tend to employ simple rules of thumb that reduce the
burden of processing such information. In processing
information of uncertain accuracy or reliability, analysts tend to
make a simple yes or no decision. If they reject the evidence,
they tend to reject it fully, so it plays no further role in their
mental calculations. If they accept the evidence, they tend to
accept it wholly, ignoring the probabilistic nature of the
accuracy or reliability judgment.” He then further notes [2,
p.123]: “Analysts must consider many items of evidence with
different degrees of accuracy and reliability that are related in
complex ways with varying degrees of probability to several
potential outcomes. Clearly, one cannot make neat
mathematical calculations that take all of these probabilistic
relationships into account. In making intuitive judgments, we
unconsciously seek shortcuts for sorting through this maze, and
these shortcuts involve some degree of ignoring the uncertainty
inherent in less-than-perfectly-reliable information. There
seems to be little an analyst can do about this, short of breaking
the analytical problem down in a way that permits assigning
probabilities to individual items of information, and then using
a mathematical formula to integrate these separate probability
judgments.”</p>
        <p>First, as discussed in the previous section, concerning the
believability of evidence, there is more than just its accuracy to
consider. Second, as discussed above, Heuer only considers the
conventional view of probability which, indeed, involves
complex probability computations. With TIACRITIS, the
analyst does precisely what Heuer imagined could be done
to counter this bias: breaking a hypothesis into simpler
hypotheses (see Fig.1) and assessing the simpler hypotheses
based on evidence (see Fig.5). Also, TIACRITIS allows the
analyst to express probabilities in words rather than numbers,
and to employ simple min/max strategies for assessing the
probability of interim and final hypotheses that do not involve
any full-scale and precise Bayesian or other methods that
would require very large numbers of probability assessments.</p>
        <p>
          There are many places to begin a defense of verbal or fuzzy
probability statements. The most obvious one is law. All of the
forensic standards of proof are given verbally: “beyond
reasonable doubt,” “clear and convincing evidence,” “balance
of probabilities,” “sufficient evidence,” and “probable cause.”
Over the centuries attempts have been made to supply
numerical probability values and ranges for each of these
standards, but none of them have been successful. The reason,
of course, is that every case is unique and rests upon many
subjective and imprecise judgments. Wigmore [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] understood
completely that the catenated inferences in his Wigmorean
networks were probabilistic in nature. Each of the arrows in the
chain of reasoning describes the force of one hypothesis on the
next one, e.g., E → F. Wigmore graded the force of such
linkages verbally using such terms as “strong force”, “weak
force”, “provisional force”, etc. Toulmin [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] also used fuzzy
qualifiers in the probability statements of his system which
grounds Rationale [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. There are many other examples of
situations in which it is difficult or impossible for people to
find numerical equivalents for verbal probabilities they assess.
Intelligence analysis often supplies very good examples, in
spite of what Sherman Kent said some years ago.
        </p>
        <p>
          We conclude this discussion by recalling what the
well-known probabilist Professor Glenn Shafer said years ago [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]:
Probability is more about structuring arguments than it is
about numbers. All probabilities rest upon arguments. If the
arguments are faulty, the probabilities, however determined,
will make no sense. In TIACRITIS, the structure of the
bottom-up argument is given by the logical top-down decomposition,
and the conclusions are hedged by employing rigorous
Baconian operations with fuzzy qualifiers, leading to a
defensible and persuasive argument.
        </p>
        <p>D. Hindsight Biases in Evaluating Intelligence Reporting</p>
        <p>As Heuer notes, analysts often overestimate the accuracy of
their past judgments; customers often underestimate how much
they have learned from an intelligence report; and persons who
conduct post-mortem analysis of an intelligence failure will
judge that events were more readily foreseeable than was in
fact the case. “The analyst, consumer, and overseer evaluating
analytical performance all have one thing in common. They are
exercising hindsight. They take their current state of knowledge
and compare it with what they or others did or could or should
have known before the current knowledge was received. This is
in sharp contrast with intelligence estimation, which is an
exercise in foresight, and it is the difference between these two
modes of thought—hindsight and foresight—that seems to be a
source of bias. … After a view has been restructured to
assimilate the new information, there is virtually no way to
accurately reconstruct the pre-existing mental set.” [2, p.162]</p>
        <p>Apparently Heuer did not envision the use of a system like
TIACRITIS that keeps track of the performed analysis: what
evidence we had, what assumptions we made and their
justifications, and what the actual logic of our
analytic conclusion was. We can now add additional evidence and
use our hindsight knowledge to restructure the argumentation
and re-evaluate our hypotheses, and we can compare the
hindsight analysis with the foresight one. But we will not
confuse them. As indicated by Heuer [2, pp.166-167]: “A
fundamental question posed in any postmortem investigation of
intelligence failure is this: Given the information that was
available at the time, should analysts have been able to foresee
what was going to happen? Unbiased evaluation of intelligence
performance depends upon the ability to provide an unbiased
answer to this question.” We suggest that this may be
accomplished with a system like TIACRITIS.</p>
        <p>IV. SOME FREQUENTLY OVERLOOKED ORIGINS OF BIAS</p>
        <p>So much of the discussion of bias in intelligence analysis is
directed at intelligence analysts themselves. But we have
identified three other origins of bias that are rarely discussed,
even though they may occasionally be at least as important as
any analysts’ alleged biases. The three other origins of bias we
will consider are: (1) persons who provide testimonial evidence
about events of interest (i.e., HUMINT sources); (2) other
intelligence professionals of varying capabilities who
serve as links in what we term “chains of custody”, linking the
evidence itself, as well as its sources, with the users of
evidence (i.e., the analysts); and (3) the “consumers” of
intelligence analyses (government and military officials who
make policy and decisions regarding national security).</p>
        <p>A. HUMINT Sources</p>
        <p>
          Our concern here is with persons who supply us with
testimonial evidence consisting of reports of events about
matters of interest to us. Heuer [2, p.122] does mention the
“bias on the part of the ultimate source,” but he does not
analyze it. In our work on evidence in a variety of contexts, we
have always been concerned about establishing the
believability of its sources, particularly when they are human
witnesses, sources, or informants [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. In doing so, we have
made use of the 600 year-old legacy of experience and
scholarship in the Anglo-American adversarial trial system
concerning witness believability assessments. We have
identified the three major attributes of the credibility of
ordinary witnesses: veracity, objectivity, and observational
sensitivity (see Fig. 6). We will show how there are distinct and
important possible biases associated with each such
believability attribute. These biases are recognized in the
MACE system (Method for Assessing the Credibility of
Evidence), developed for the IC [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. This system incorporates
both Baconian and Bayesian methods for combining evidence
about our source.
        </p>
        <p>As discussed above, assessing the credibility of a human
source S involves assessing S’s veracity, objectivity, and
observational sensitivity. We have to consider that source S can be
biased concerning any of these attributes. On veracity, S might
prefer to tell us that event E occurred, whether S believed E
occurred or not. As an example, an analyst evaluating S’s
evidence E* might have evidence about S suggesting that S
would tell us that E occurred because S wishes to be the bearer
of what S believes we will regard as good news that event E
occurred. On objectivity, S might choose to believe that E
occurred because it would somehow be in S’s best interests if E
did occur. On observational sensitivity, there are various ways
that S’s senses could be biased in favor of recording event E;
clever forms of deception supply examples.</p>
        <p>These three species of bias possible for HUMINT sources
must be considered by analysts attempting to assess the
credibility of source S and how much weight or force S’s
evidence E* should have in the analyst’s inference about
whether or not event E did happen. The existence of any of
these three biases would have an effect on an analyst’s
assessment of the weight or force of S’s report E*. As we
know, all assessments of the credibility of evidence rest upon
available evidence about its sources. In the case of HUMINT
we need ancillary evidence about the veracity, objectivity, and
observational sensitivity of its sources. In the process, we have
to see whether any such evidence reveals any of the three
biases just considered. TIACRITIS supports the analyst in this
determination by guiding her to answer specific questions
based on ancillary evidence. For instance, the veracity
questions considered are shown in Table 1.</p>
        <p>3. Exploitation potential? Is this source subject to any significant
exploitation by other persons or organizations to provide us this information?
4. Any contradictory or divergent evidence? Is there any evidence that
contradicts or conflicts with what the source has reported to us?
5. Any corroborative or confirming evidence? Is there any other evidence
that corroborates or confirms this source's report?
6. Veracity concerning collateral details? Are there any contradictions or
conflicts in the collateral details provided by this source that reflect the
possibility of this source's dishonesty?
7. Source's character? What evidence do we have about this source's
character and honesty that bears upon this source's veracity?
8. Reporting record? What does the record show about the truthfulness of
this source's previous reports to us?
9. Source expectations about us? Is there any evidence that this source
may be reporting events he/she believes we will wish to hear or see?
10. Interview behavior? If this source reported these events to us, what
was this source's demeanor and bearing while giving us this report?</p>
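<p>The way such a question list can drive the analyst through ancillary evidence may be sketched as follows (our own illustrative Python, not the TIACRITIS interface; the question keys and the flagging logic are assumptions):</p>

```python
# Illustrative sketch: a subset of the Table 1 veracity questions as a
# checklist of yes/no analyst judgments; each answer either raises or
# clears a veracity concern. Keys and logic are our assumptions.

def flag_veracity_concerns(answers):
    """answers maps question keys to True/False analyst judgments;
    returns the keys whose answers suggest a veracity problem."""
    concerning_if_true = {"exploitation_potential", "contradictory_evidence"}
    concerning_if_false = {"corroborative_evidence", "reporting_record"}
    return [q for q, a in answers.items()
            if (a and q in concerning_if_true)
            or (not a and q in concerning_if_false)]

answers = {
    "exploitation_potential": False,  # no significant exploitation known
    "contradictory_evidence": True,   # some evidence conflicts with the report
    "corroborative_evidence": True,   # other evidence confirms the report
    "reporting_record": True,         # previous reports were truthful
}
print(flag_veracity_concerns(answers))  # -> ['contradictory_evidence']
```

<p>Each flagged question points the analyst to the specific ancillary evidence about the source that needs closer examination.</p>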
        <sec id="sec-3-2-1">
          <title>B. Persons in Chains of Custody of Evidence</title>
          <p>Unfortunately, there are other persons, apart from
HUMINT sources, whose possible biases need to be carefully
considered. We know that analysts make use of an enormous
variety of evidence that is not testimonial or HUMINT, but is
tangible in nature. Examples include objects, images, sensor
records of various sorts, documents, maps, diagrams, charts,
and tabled information of various kinds.</p>
          <p>But intelligence analysts only rarely have immediate,
first-hand access to HUMINT assets or informants, and they are
only rarely the first to encounter an item of tangible
evidence. Instead, there are usually several persons who
have access to evidence between the times the evidence is first
acquired and when the analysts first receive it. These persons
may do a variety of different things to the initial evidence
during the time they have access to it. In law, these persons
constitute what is termed a “chain of custody” for evidence.</p>
          <p>Heuer [2, p.122] mentions the “distortion in the reporting
chain from subsource through source, case officer, reports
officer, to analyst” but he does not analyze it. In criminal cases
in law, there are persons identified as “evidence custodians”,
who keep careful track of who discovered an item of evidence,
who then had access to it and for how long, and what if
anything they did to the evidence when they had access to it.</p>
          <p>
            These chains of custody add three major additional sources
of uncertainty, associated with the competence and credibility of
the persons in the chain, that intelligence
analysts must consider. The first and
most important question involves authenticity: is the evidence
received by an analyst exactly what the initial evidence said,
and is it complete? The other questions involve assessing the
reliability and accuracy of the processes used to produce the
evidence, if it is tangible in nature (see the right side of Fig. 6),
or to take various actions on the evidence in a chain
of custody, whether the evidence is tangible or testimonial. As
an illustration, consider an item of testimonial HUMINT
coming from a foreign national whose code name is
“Wallflower”, who does not speak English [
            <xref ref-type="bibr" rid="ref23">23</xref>
            ]. Wallflower
gives his report to case officer Bob. This report is recorded by
Bob and then translated by Husam. Then, Wallflower’s
translated report is transmitted to a reports officer, Marsha, who
edits it and transmits it to the analyst Clyde, who evaluates it
and assesses its weight or force.
          </p>
          <p>Now, here is where forms of bias can enter that can be
associated with the persons involved in these chains of custody.
The case officer Bob might have intentionally overlooked
details in his recording of Wallflower’s report. The translator
Husam may have intentionally altered or deleted parts of this
report. The reports officer Marsha might have altered or
deleted parts of the translated report of Wallflower’s testimony
in her editing of it. The result of these actions is that the analyst
Clyde receiving this evidence almost certainly did not receive
an authentic and complete account of it, nor did he receive a
good account of its reliability and accuracy. What he received
was the transmitted, edited, translated, recorded testimony of
Wallflower. Fig. 7 shows how TIACRITIS may determine the
believability of the evidence received by the analyst. Even when
the information needed for such an analysis is not available,
the analyst should adjust the confidence in his conclusion in
recognition of these possible biases.</p>
          <p>[Fig. 7 diagram: the believability of the transmitted, edited,
translated, recorded testimony of Wallflower is computed as the min of the
believability of Wallflower, of the recording by Bob, of the translation by
Husam, of the editing by Marsha, and of the transmission by Marsha.]</p>
          <p>Fig. 7. Chain of custody of Wallflower’s testimony.</p>
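<p>The computation depicted in Fig. 7 can be sketched as follows (an illustrative Python fragment; the min strategy over the chain follows the figure, while the verbal scale and function names are our assumptions):</p>

```python
# Chain-of-custody believability: the evidence received by the analyst
# is no more believable than the weakest link in the chain (min strategy,
# as in Fig. 7). Scale and names are illustrative assumptions.

SCALE = ["no support", "likely", "very likely", "almost certain", "certain"]

def chain_believability(links):
    """links: list of (description, verbal assessment) pairs.
    Returns the weakest link, which bounds the overall believability."""
    return min(links, key=lambda link: SCALE.index(link[1]))

chain = [
    ("Wallflower's testimony", "very likely"),
    ("recording by Bob", "almost certain"),
    ("translation by Husam", "likely"),
    ("editing by Marsha", "almost certain"),
    ("transmission by Marsha", "certain"),
]
desc, value = chain_believability(chain)
print(f"believability: {value} (limited by {desc})")
# -> believability: likely (limited by translation by Husam)
```

<p>Identifying the weakest link also tells the analyst where ancillary evidence (here, about the translation) would most improve the overall assessment.</p>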
        </sec>
        <sec id="sec-3-2-2">
          <title>C. Consumers of Intelligence Analyses</title>
          <p>
            The policy-making consumers or customers of intelligence
analysts are also subject to a variety of inferential and
decisional biases that may influence the reported analytic
conclusions. As is well known, the relationships between
intelligence analysts and governmental policy makers are much
discussed and involve considerable controversy [
            <xref ref-type="bibr" rid="ref24 ref25">24, 25</xref>
            ]. On
the one hand we hear intelligence professionals say that they do
not make policies but only try to help policy makers be as
informed as they can be when they do form policies and make
decisions in the nation’s best interests. But we also learn facts
about the intelligence process that complicate matters. An
intelligence analysis is usually a hierarchical process involving
many intelligence officers, at various grade levels, who become
involved in producing an intelligence “product”. At the most
basic level of this hierarchy are the so-called “desk analysts”
who are known and respected experts in the specific subject
matter of the analysis at hand. An analysis produced by one or
more desk analysts is then passed “upward” through many
administrative levels, at each of which persons at these higher
levels can comment on the desk analysts’ report. It is often
recognized that the higher an editor is in this hierarchy, the
more political his/her views and actions become, which may affect
the content and conclusions of the analysis at hand. As this
“upward” process continues, the analysis that results may be
quite different from the one produced by the desk analysts,
reflecting the biases of those who have successively edited it.
In some cases, these editing biases are the direct result of the
biases of consumers who may wish to receive a certain analytic
conclusion. Using a system like TIACRITIS that shows very
clearly how the analytic conclusion is rooted in evidence would
significantly help in reducing the above biases.
          </p>
          <p>V. CONCLUSIONS</p>
          <p>A wide variety of biases affect the correctness of
intelligence analyses. In this paper we have shown how the use
of TIACRITIS, a knowledge-based cognitive assistant, helps
analysts recognize and counter many of them. TIACRITIS
integrates several semantic technologies (knowledge
representation through ontologies and rules, evidence-based
reasoning, machine learning and knowledge acquisition). It
can run in a browser as a web-based system, or it can be
installed locally, and has been used in many civilian, military,
and intelligence organizations.</p>
          <p>There are two complementary ways by which TIACRITIS
helps mitigate biases. First, as a cognitive assistant, it helps
automate many parts of the analysis process, making this task
much easier for the analyst. Thus it alleviates one of the main
causes of biases, which is the employment of simplified
information processing strategies on the part of the analyst.
Second, TIACRITIS performs a rigorous evidence-based
hypothesis analysis that makes explicit all the reasoning steps,
evidence, probabilistic assessments, and assumptions, so that
they can be critically analyzed and debated. Indeed, the best
protection against biases comes from the collaborative effort of
teams of analysts, who become skilled in solving their analytic
tasks through the development of sound evidence-based
arguments, and who are willing to share their insights with
colleagues, who are also willing to listen. TIACRITIS makes
all this possible.</p>
          <p>Finally, this paper adds a strong argument in favor of using
structured analytic methods, in the debate on how to
significantly improve intelligence analysis [26].</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Schum</surname>
            <given-names>D.A.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <source>The Evidential Foundations of Probabilistic Reasoning</source>
          , Northwestern University Press.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Heuer</surname>
            <given-names>R.J.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>Psychology of Intelligence Analysis, Center for the Study of Intelligence</article-title>
          , Central Intelligence Agency, Washington, DC.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Tecuci</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marcu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boicu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schum</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Russell</surname>
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Computational Theory and Cognitive Assistant for Intelligence Analysis</article-title>
          ,
          <source>in Proc. 6th Int. Conf. on Semantic Technologies for Intelligence</source>
          , Defense, and Security, pp.
          <fpage>68</fpage>
          -
          <lpage>75</lpage>
          , Fairfax, VA,
          <fpage>16</fpage>
          -
          <lpage>18</lpage>
          November.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Cohen</surname>
            <given-names>L.J.</given-names>
          </string-name>
          (
          <year>1977</year>
          ).
          <article-title>The Probable and the Provable</article-title>
          , Clarendon Press, Oxford.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Zadeh</surname>
            <given-names>L.</given-names>
          </string-name>
          (
          <year>1983</year>
          ).
          <article-title>The Role of Fuzzy Logic in the Management of Uncertainty in Expert Systems</article-title>
          ,
          <source>Fuzzy Sets and Systems</source>
          , vol.
          <volume>11</volume>
          , pp.
          <fpage>199</fpage>
          -
          <lpage>227</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Tecuci</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>1998</year>
          ).
          <source>Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies</source>
          , San Diego: Academic Press.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Tecuci</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boicu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boicu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marcu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stanescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barbulescu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <source>The Disciple-RKF Learning and Reasoning Agent, Computational Intelligence</source>
          , Vol.
          <volume>21</volume>
          , No.
          <issue>4</issue>
          , pp.
          <fpage>462</fpage>
          -
          <lpage>479</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Tecuci</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boicu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marcu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schum</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>How Learning Enables Intelligence Analysts to Rapidly Develop Practical Cognitive Assistants</article-title>
          ,
          <source>in Proc. 12th International Conference on Machine Learning and Applications (ICMLA'13)</source>
          , Miami, Florida, December 4-7.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <issue>W3C</issue>
          (
          <year>2004</year>
          ). http://www.w3.org/TR/rdf-schema/, accessed 10/11/13.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Boicu</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tecuci</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schum</surname>
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Intelligence Analysis Ontology for Cognitive Assistants</article-title>
          ,
          <source>in Proc. of Conf. “Ontology for the Intelligence Community”</source>
          , Fairfax, VA,
          <fpage>3</fpage>
          -
          <lpage>4</lpage>
          December.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Heuer</surname>
            <given-names>R.J.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Computer-Aided Analysis of Competing Hypotheses</article-title>
          , in George R.Z., Bruce J.B., eds., Analyzing Intelligence: Origins, Obstacles, and Innovations, Georgetown Univ. Press, Washington, DC.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Drogin</surname>
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>CURVEBALL: Spies, Lies, and the Con Man Who Caused a War</article-title>
          .
          <source>Random House</source>
          , New York, NY.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Kolmogorov</surname>
            <given-names>A.N.</given-names>
          </string-name>
          (
          <year>1933</year>
          ).
          <article-title>Foundations of a Theory of Probability (1933), 2nd English edition</article-title>
          , Chelsea Publishing, New York, NY.,
          <year>1956</year>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Kolmogorov</surname>
            <given-names>A.N.</given-names>
          </string-name>
          (
          <year>1969</year>
          ).
          <source>The Theory of Probability</source>
          . In: Aleksandrov,
          <string-name>
            <given-names>A. D.</given-names>
            ,
            <surname>Kolmogorov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            ,
            <surname>Lavrentiev</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. A</surname>
          </string-name>
          . (eds) Mathematics: Its Content, Methods, and Meaning. MIT Press, Cambridge, MA.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Kahneman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tversky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>1974</year>
          ).
          <article-title>Judgment under Uncertainty: Heuristics and Biases</article-title>
          .
          <source>Science</source>
          , Vol.
          <volume>185</volume>
          ,
          <fpage>1124</fpage>
          -
          <lpage>1131</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Kahneman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Slovic</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tversky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>1982</year>
          ).
          <article-title>Judgment under Uncertainty: Heuristics and Biases</article-title>
          . Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Kent</surname>
            <given-names>S.</given-names>
          </string-name>
          (
          <year>1994</year>
          ).
          <article-title>Words of Estimated Probability</article-title>
          , in Steury D.P., ed.,
          <source>Sherman Kent and the Board of National Estimates: Collected Essays</source>
          ,
          <article-title>Center for the Study of Intelligence</article-title>
          , CIA, Washington, DC.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Wigmore</surname>
            <given-names>J.H.</given-names>
          </string-name>
          (
          <year>1937</year>
          ).
          <source>The Science of Judicial Proof</source>
          . Boston, MA: Little, Brown &amp; Co.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Toulmin</surname>
            <given-names>S.E.</given-names>
          </string-name>
          (
          <year>1963</year>
          ).
          <source>The Uses of Argument</source>
          . Cambridge Univ. Press.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>van Gelder</surname>
            <given-names>T.J.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>The Rationale for Rationale, Law, Probability</article-title>
          and Risk,
          <volume>6</volume>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Shafer</surname>
            <given-names>G.</given-names>
          </string-name>
          (
          <year>1988</year>
          ).
          <article-title>Combining AI and OR</article-title>
          . University of Kansas School of Business Working Paper No.
          <volume>195</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Schum</surname>
            <given-names>D.A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Morris</surname>
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Assessing the Competence and Credibility of Human Sources of Evidence: Contributions from Law and Probability</article-title>
          , Law,
          <source>Probability and Risk</source>
          , Vol.
          <volume>6</volume>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>274</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Schum</surname>
            <given-names>D.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tecuci</surname>
            <given-names>G</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Boicu</surname>
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Analyzing Evidence and its Chain of Custody: A Mixed-Initiative Computational Approach, Int</article-title>
          .
          <source>Journal of Intelligence and Counterintelligence</source>
          , Vol.
          <volume>22</volume>
          , pp.
          <fpage>298</fpage>
          -
          <lpage>319</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>George</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bruce</surname>
            ,
            <given-names>J</given-names>
          </string-name>
          . (eds) (
          <year>2008</year>
          ). Analyzing Intelligence: Origins, Obstacles, and Innovations. Georgetown Univ. Press, Washington, DC.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Johnston</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Analytic Culture in the U.S. Intelligence Community</article-title>
          . Central Intelligence Agency, Washington, DC.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>