<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Experimental Procedure for Evaluating User-Centered Methods for Rapid Bayesian Network Construction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michael Farry</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jonathan Pfautz</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zach Cox</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ann Bisantz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Richard Stone</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emilie Roth</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Industrial and Systems Engineering, University at Buffalo</institution>
          ,
          <addr-line>Amherst, NY 14260</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Roth Cognitive Engineering</institution>
          ,
          <addr-line>Brookline, MA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Charles River Analytics, Inc.</institution>
          ,
          <addr-line>625 Mount Auburn St., Cambridge, MA 02138</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Bayesian networks (BNs) are excellent tools for reasoning about uncertainty and capturing detailed domain knowledge. However, the complexity of BN structures can pose a challenge to domain experts without a background in artificial intelligence or probability when they construct or analyze BN models. Several canonical models have been developed to reduce the complexity of BN structures, but there is little research on the accessibility and usability of these canonical models, their associated user interfaces, and the contents of the models, including their probabilistic relationships. In this paper, we present an experimental procedure to evaluate our novel Causal Influence Model structure by measuring users' ability to construct new models from scratch, and their ability to comprehend previously constructed models. [Results of our experiment will be presented at the workshop.]</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION AND MOTIVATION</title>
      <p>
        A Bayesian network (BN)
        <xref ref-type="bibr" rid="ref11 ref17">(Jensen, 2001; Pearl, 1988)</xref>
        is a
probabilistic model used to reason under uncertainty.
Successful efforts in applying Bayesian modeling to a
variety of domains (e.g., computer vision
        <xref ref-type="bibr" rid="ref19">(Rimey &amp;
Brown, 1994)</xref>
        , social networks
        <xref ref-type="bibr" rid="ref12">(Koelle et al., 2006)</xref>
        ,
human cognition
        <xref ref-type="bibr" rid="ref6 ref7">(Guarino et al., 2006; Glymour, 2001)</xref>
        ,
and disease detection
        <xref ref-type="bibr" rid="ref16">(Pang et al., 2004)</xref>
        ) have inspired
knowledge engineers to use BNs to capture domain
knowledge from experts. However, expressing an expert’s
domain knowledge in a BN is cumbersome due to the
complex, tedious, and mathematical nature of conditional
probability table (CPT) construction. Adding states and
parents to a node quickly results in an exponential
explosion in the number of CPT entries required
        <xref ref-type="bibr" rid="ref18 ref2">(Pfautz
et al., 2007)</xref>
        . Canonical models such as Noisy-OR
        <xref ref-type="bibr" rid="ref17 ref9">(Henrion, 1989; Pearl, 1988)</xref>
        , Noisy-MAX
        <xref ref-type="bibr" rid="ref3 ref5 ref9">(Diez &amp;
Galan, 2003; Diez, 1993; Henrion, 1989)</xref>
        , Qualitative
Probabilistic Networks
        <xref ref-type="bibr" rid="ref25">(Wellman, 1990)</xref>
        and Influence
Networks (IN)
        <xref ref-type="bibr" rid="ref10 ref20 ref20 ref21 ref21">(Jensen, 1996; Rosen &amp; Smith, 1996a;
Rosen &amp; Smith, 1996b)</xref>
        have been developed to mitigate
this problem. In response to some issues raised by those
models, and to simplify the Bayesian modeling process
through novel user interface techniques, we developed a
new canonical model, the Causal Influence Model (CIM)
        <xref ref-type="bibr" rid="ref18 ref18 ref2 ref2">(Cox &amp; Pfautz, 2007; Pfautz et al., 2007)</xref>
        . The CIM
paradigm was inspired by anecdotal evidence gained by
developing systems for domain experts interacting with
BNs and by an analysis of other canonical models to
determine the constraints that limit their generalizability
and applicability.
      </p>
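The combinatorial explosion described above can be made concrete with a short sketch. The snippet below (illustrative only; the function names are our own) counts the conditional probability table entries a full BN requires for one child node as Boolean parents are added, against the best-case linear parameter count a canonical model needs.

```python
# Illustrative sketch: CPT entries for one child node versus the best-case
# parameter count of a canonical model (one parameter per parent plus a
# baseline), as discussed in the text. Function names are our own.

def cpt_entries(n_parents, states_per_node=2):
    """Full CPT: one row per combination of parent states,
    one column per child state."""
    return states_per_node ** n_parents * states_per_node

def canonical_parameters(n_parents):
    """Best case for a canonical model: a single parameter
    per parent plus a baseline for the child."""
    return n_parents + 1

for n in range(1, 9):
    print(n, cpt_entries(n), canonical_parameters(n))
```

With eight Boolean parents the full CPT already needs 512 entries, while the canonical model needs nine parameters.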
      <p>There have been few user-centered evaluation efforts to
assess how (and if) canonical models help domain experts
elicit their knowledge and understanding of models
presented to them, or how graphical interfaces and their
features and properties impact the way people create,
interpret, reason with, or base actions on Bayesian
networks. The purpose of our study is to provide baseline
information on how people construct and describe CIMs
presented and created within a graphical user interface.</p>
      <sec id="sec-1-1">
        <title>1.1 BACKGROUND</title>
        <p>
          A canonical model
          <xref ref-type="bibr" rid="ref15 ref4">(Diez &amp; Druzdzel, 2001)</xref>
          is a
modeling pattern that allows probabilistic relationships
between nodes to be specified by a reduced set of
parameters (i.e., without completing every cell in a CPT).
By assuming that the reduced parameters can still
accurately represent the domain being modeled, users can
quickly build a complex BN that would otherwise take a
large amount of time. Most canonical models achieve
their reduced parameters by assuming the independent
effects of parents. This assumption allows a linear number
of parameters to quantify an entire CPT; in the best-case
scenario, only a single parameter per parent is needed.
Canonical models can also serve as a “front-end” tool for
the initial model-building effort, since the CPTs can
always be refined by hand or with data at a later time.
Some of the simplified patterns followed by canonical
models have been motivated by the process followed
when eliciting key factors and probabilistic relationships
from domain experts
          <xref ref-type="bibr" rid="ref14 ref8">(O'Hagan et al., 2006; Hastie &amp;
Dawes, 2001)</xref>
          .
        </p>
        <p>A review of canonical models sheds light on the
advantages and drawbacks of each model. The Influence
Network (IN) model can only be used with Boolean
nodes. It assumes that the child node has a baseline
probability of occurring independently of any parent
effects and that each parent independently influences the
child to be more or less likely to be true. Since a single
baseline probability for the child and a single change in
probability for each parent are simple parameters for users
to specify, the IN represents a powerful mechanism for
capturing domain knowledge. However, since only
Boolean nodes are allowed in the IN model, model
flexibility is significantly reduced. BNs commonly
contain nodes that represent concepts other than the
occurrence or non-occurrence of events, and INs cannot
be used to simplify these BNs without considerably
rearchitecting the model.</p>
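One illustrative reading of the IN mechanism just described, a baseline probability for the Boolean child plus an additive per-parent change when that parent is true, can be sketched as follows. This is a toy simplification for intuition only, not the published IN mathematics (Rosen &amp; Smith, 1996a); the clamping behavior is our assumption.

```python
# Toy reading of the Influence Network idea: a Boolean child has a baseline
# probability of being true, and each true parent shifts that probability
# by a per-parent delta. An illustrative simplification, not the published
# IN equations; clamping to [0, 1] is our assumption.

def in_style_probability(baseline, deltas, parent_states):
    """baseline: P(child=true) absent any parent effects.
    deltas: per-parent probability change applied when that parent is true.
    parent_states: list of booleans, one per parent."""
    p = baseline + sum(d for d, s in zip(deltas, parent_states) if s)
    return min(1.0, max(0.0, p))  # keep the result a valid probability

# One positive and one negative influence on a 0.5 baseline:
print(in_style_probability(0.5, [0.3, -0.2], [True, True]))
```

The appeal noted in the text is visible here: two parents are quantified by three numbers rather than a four-row CPT.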
        <p>
          The Noisy-OR model is also used only with Boolean
nodes and assumes that a true state in any parent can
cause the child to be true independently of the other
parents, with some uncertainty. Similar to INs, the main
drawback of the Noisy-OR is its limitation to only
Boolean nodes. The Noisy-MAX model generalizes the
Noisy-OR and allows ordinal nodes at the expense of
increasing the complexity of parameters. Although
Noisy-MAX does work with ordinal nodes, it cannot be used
with more general discrete nodes that do not have ordered
states. These nodes, referred to as categorical nodes, have
an arbitrary number of unordered states and usually
represent the category or type of something. Qualitative
Probabilistic Networks (QPNs) allow for the construction
of purely qualitative relationships between nodes in a
network, to abstract from the highly quantitative and
numerical nature of typical Bayesian models. QPNs
consider the “signs” inherent in probabilistic relationships
between nodes, and consider the additive synergies
between nodes to capture more complicated probabilistic
relationships between them (i.e., if A and B both have a
positive influence on node C, their influences may be
synergistic in nature: if A and B are both true, their
cumulative influence upon C may be greater than just the
sum of their individual influences.) QPNs allow for more
qualitative model elicitation and may therefore be
appropriate for interactions with non-technical experts,
but they are limited in their ability to provide hard,
numerical estimates of the likelihood of events.
        </p>
        <p>
          The Causal Influence Model (CIM) is a canonical model
that retains the desirable properties of the IN while
providing solutions to its problems. The CIM assumes
that each node is discrete and has an arbitrary number of
states with arbitrary meaning. Each node has a baseline
probability distribution, independent of any parent effects.
Each parent independently influences these baseline
probabilities to be more or less likely. The CIM also
introduces simplifications that govern the generation of
conditional probability relationships, enabling Boolean,
ordinal, and categorical nodes to be included. A full
description of the mathematical formulas that govern
CIMs, including formulas to translate CIM link strengths
into conditional probability tables, is provided in
          <xref ref-type="bibr" rid="ref18 ref2">(Cox et
al., 2007)</xref>
          .
        </p>
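The independence-of-causes assumption behind Noisy-OR, discussed above, is compact enough to show directly: each true parent independently causes the child with its own probability, so the child is false only when every active cause fails. The sketch below generates the full CPT from one parameter per parent; it is the standard Noisy-OR formula without a leak term, and the names are our own.

```python
from itertools import product

# Minimal sketch of the Noisy-OR model: parent i, when true, independently
# causes the child with probability p_i. The child is false only if every
# active cause fails. Standard Noisy-OR without a leak term; names are ours.

def noisy_or_cpt(cause_probs):
    """Return {tuple of parent states: P(child=true)} for Boolean parents."""
    cpt = {}
    for states in product([False, True], repeat=len(cause_probs)):
        p_false = 1.0
        for p, s in zip(cause_probs, states):
            if s:
                p_false *= (1.0 - p)  # this active cause fails to fire
        cpt[states] = 1.0 - p_false
    return cpt

cpt = noisy_or_cpt([0.8, 0.6])
print(cpt[(True, True)])  # = 1 - (1 - 0.8) * (1 - 0.6)
```

Note that two parameters determine all eight CPT entries, which is exactly the parameter reduction the text attributes to canonical models.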
        <p>
          Studies have been conducted to analyze and mitigate
complexities that arise in the construction of Bayesian
models as a result of knowledge elicitation
<xref ref-type="bibr" rid="ref15 ref4">(Onisko, Druzdzel, &amp; Wasyluk, 2001)</xref>
          , but no studies to date have
assessed the accessibility and usability of various
canonical models and associated user interfaces when
provided directly to domain experts. The following study
investigates how users interpret and create CIMs within a
particular user interface.
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>METHOD</title>
      <sec id="sec-2-1">
        <title>2.1 PARTICIPANTS</title>
        <p>Up to twenty participants are recruited from the university
community to perform the study. After providing
informed consent, participants are given the Ishihara Test
for color blindness. Participants who pass this screening
continue with the study.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2 EXPERIMENTAL SYSTEM</title>
        <p>
          We have developed a CIM-enabled version of our
BNet.Builder product to allow us to experiment with
graphical interfaces for Bayesian network modeling
          <xref ref-type="bibr" rid="ref18 ref2">(Pfautz et al., 2007)</xref>
          . Using a simple point-and-click
interface, users can create, label, connect, and move nodes
in the model. Users can also create and modify causal
links to represent positive or negative influences between
nodes and the strength of those relationships. Users can
also post or remove evidence to any node and view the
effects of posted evidence on the belief states of other
nodes. Link strengths are converted using CPTs based on
algorithms provided in
          <xref ref-type="bibr" rid="ref18 ref18 ref2 ref2">(Cox et al., 2007; Pfautz et al.,
2007)</xref>
          . The positivity or negativity of a causal link and the
link strength are represented visually by the color and
thickness of the link, respectively.
        </p>
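As a purely hypothetical illustration of the interface behavior described above, the sketch below maps an integer link strength on the 11-step scale (negative 5 to positive 5, as used in the experimental interface) to a shift in the child's belief. The actual conversion algorithms are given in Cox &amp; Pfautz (2007); the linear mapping and the maximum shift value here are assumptions for illustration only.

```python
# Hypothetical sketch only: a linear mapping from an integer link strength
# (-5..+5, matching the experimental interface) to a probability shift on
# the 0.5 baseline used in the experiment. The real CIM conversion is
# defined in Cox & Pfautz (2007); max_shift and linearity are assumptions.

def strength_to_shift(strength, max_shift=0.45):
    """Map strength in [-5, 5] to a shift in P(child=true)
    in [-max_shift, +max_shift]."""
    if not -5 <= strength <= 5:
        raise ValueError("strength must be in [-5, 5]")
    return (strength / 5.0) * max_shift

baseline = 0.5
print(baseline + strength_to_shift(5))   # strongest positive influence
print(baseline + strength_to_shift(0))   # neutral: baseline unchanged
print(baseline + strength_to_shift(-5))  # strongest negative influence
```

A nonlinear or normalized mapping would fit this signature equally well, which is one of the granularity questions raised in the discussion section.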
        <p>To simplify model construction for this particular
experiment, the CIM interface has been constrained so
that all nodes are Boolean; initial beliefs are set to 0.5 for
all nodes and cannot be changed directly by the user (but
can change based on evidence or link strengths); and only
“hard” evidence can be posted (e.g., evidence that the
node was either fully true, or fully false). This represents
a set of simplifications we have found useful in other
work, particularly among users less familiar with
Bayesian modeling techniques. Our main goal in this
study is to determine whether participants can reason
about previously constructed CIMs and construct models
to match a given situation. Since these are specific, novel,
and fundamental questions with little previous research
behind them, we have started with a simple case. The
inclusion of additional node types, in particular, is useful
for future work in comparing CIMs to other canonical
models such as INs, Noisy-OR, and Noisy-MAX.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3 EXPERIMENTAL TASKS</title>
        <p>Participants will be asked to provide descriptions of and
answer questions about a series of CIMs shown in the
BNet.Builder interface. In the first task, participants will
be shown a model and asked questions about the structure
and nature of relationships in the model (specifically,
questions asking them to describe elements of the model,
and questions related to abductive and deductive
reasoning using the model). For instance, given the
following example model (Figure 1), participants would
be asked:
</p>
        <p>Description: This picture shows a model of part of a
car. Describe what causes headlights to be dim, or not
dim.</p>
        <p>Abductive Reasoning: If the headlights are dim, what
does that mean about the other parts of the car?</p>
        <p>Deductive Reasoning: The alternator is working.
What does that suggest about the headlights? The
battery is old. What does that suggest about the
headlights? What if the battery is new and the
alternator is failing?</p>
        <p>In the second task, participants can manipulate the causal
links and post evidence to see how changing the strength
and directionality of the links between the nodes, and
evidence about the state of the nodes, affects beliefs about
whether the nodes are true or false. They will respond to
similar sets of questions as provided in the first task.
Finally, in the third task, participants will be asked to
construct models from scratch using the interface based
on several different vignettes, such as the following:
The headlight system on a car is dependent on two
components: a battery, which stores energy to power
the lights, and an alternator, which converts
mechanical energy from the car’s engine into stored
energy in the battery. When the car is running, the
alternator “recharges” the battery. This process only
works if the alternator is working, and the battery is
new.</p>
        <p>Four models/vignettes have been constructed for each
task (a total of 12). Each model has the following
relationships: 1 child/1 parent, 2 children/1 parent, 1
child/2 parents, 2 children/2 parents. In all cases, all
children are linked to all parents. Also, in all but the 1
child/1 parent case, one parent-child link is negative. This
simplification provides the basis for the initial study. We
expect to expand upon this simple representation with
later empirical work.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4 INDEPENDENT VARIABLE</title>
        <p>Two stimuli sets are created based on the 12 models.
Either the nodes in the models (or phrases in the vignette)
are phrased positively, or they include at least one node
that uses negative phrasing (e.g., “battery is not new”).
This difference allows us to investigate how semantic
properties of the model or situation affect task
performance. This condition has been inspired by our
experience in domain expert interaction with CIM
modeling interfaces, where we observed the articulation
of variable names as a source of common confusion. The
use of negatives in the variable name (e.g., “not raining”)
or logical antonyms (e.g., “happy” and “sad”) tends to
lead to later confusion in expressing causal relationships
(e.g., “if it is not not-raining, then it is unlikely that
Rakesh will not bring his umbrella”). By including this
specific independent variable, we will be able to assess
which specific patterns of reasoning are most difficult for
users. Participants are randomly assigned to one of the
two stimuli sets (up to 10 participants per condition). This
sample size is consistent with those used in usability-type
tests, and will allow us to analyze verbal protocols of
participants to look for patterns across conditions.</p>
      </sec>
      <sec id="sec-2-5">
        <title>2.5 DEPENDENT MEASURES AND ANALYSIS</title>
        <p>Throughout all three tasks, participants are asked to “talk
aloud” while performing the task to describe how they are
thinking about or creating the models. Screen capture
software is used to record participants’ interaction with
and construction of models. Participants are also fitted
with a view point eye tracker (lightweight glasses that
have an attached camera that tracks the corneal
movements of the participant’s eye to assess gaze relative
to the computer screen they are working on). The eye
tracking system is used to record aspects of gaze position
and dwell time at a screen location. Time to complete the
tasks is also being recorded.</p>
        <p>
          Data from the audio, eye track, and screen capture
processes is combined to create a “process trace” of each
participant’s behavior describing and creating CIMs
          <xref ref-type="bibr" rid="ref24">(Woods, 1993)</xref>
          . Verbalizations and actions are coded and
analyzed
          <xref ref-type="bibr" rid="ref22 ref24">(Bainbridge &amp; Sanderson, 1995; Sanderson &amp;
Fisher, 1994; Woods, 1993)</xref>
          to identify the correctness
and completeness of the descriptions and answers
provided by participants in the first task, the processes
with which participants constructed the models in the
second task, and the form and content of the models
produced in the third task.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>ANTICIPATED RESULTS AND DISCUSSION</title>
      <p>The purpose of this study is to provide baseline
information regarding how people construct and describe
CIM models presented and created within the
BNet.Builder interface. There is continued interest in
simplifying the manner in which domain expertise is
elicited, and the creation and presentation of Bayesian
network models through direct manipulation and
visualization. However, information on how these tools
are used by practitioners, how they affect the models that
people produce, and how they affect the way that people
interpret models or predict outcomes is missing. We
anticipate that users will have more difficulty explaining
and constructing models with more parent-child
connections. We also anticipate users having more
difficulty explaining and constructing models when there
are more nodes with negative causal links because of the
increase in complexity of the models.</p>
      <p>In this study, we intend to measure reasoning patterns
involving negative quantities that give users the most
trouble. We anticipate that users will have the most
difficulty interpreting and creating models when nodes
are presented with “negatively phrased” labels (e.g.,
assessing the influence of a node labeled “battery is not
new” on a node labeled “headlights are dim”). If this is
the case, it suggests a need for developers of CIMs (and
BNs in general) to encourage users to employ certain
modeling patterns, possibly by constraining the
description of nodes. These constraints, in turn, can be
accomplished through prior training or interface wizards,
or through intelligent, automatic processing of user
entries, and provision of suggested alternatives (e.g.,
popup suggestions). These interventions could be tested in
further studies.</p>
      <p>The primary contribution of this paper will be
process- and product-oriented descriptions of how this graphical
tool is used to interpret and create CIMs. Future research
could compare how models created within the CIM
framework compare to those using more traditional BN
structures, from the point of view of the user. This study
uses simple Bayesian models, with constrained
parameters and interaction capabilities, and only
Boolean nodes. Future studies, guided by these initial
findings, can be conducted using more complex models, a
greater variety of node types (e.g., categorical, ordinal),
and allow subjects greater flexibility in manipulating
CPTs and posting evidence. Other issues for investigation
include measuring and mitigating user tendencies to
confuse “evidence” and “belief” (both as terms, and in the
values these terms represent), measuring tendencies to
disregard parental independence when constructing CIMs,
and further observation of user reaction to non-intuitive
but correct behavior (e.g., becoming confused when
particular variables appear overly sensitive or insensitive
to posted evidence).</p>
      <p>The CIM interface provides a user-friendly way to
express causal influences between nodes, vastly
decreasing the number of parameters needed to construct
causal models and providing the capability for a much
broader base of users to perform Bayesian modeling.
Within the experimental interface, participants express
relative degrees of influence over a range of 11 steps
(from positive to negative 5, with a neutral intermediate
value). Additional studies are necessary to clarify the
appropriate level of granularity of influence assignment
(e.g., 3 steps? 11 steps? 51 steps?) as well as whether other
methods of assigning strengths across sets of links (e.g.,
normalized strengths, rank ordered strengths) have merit.
Finally, detailed studies with real-world models,
situations, and domain experts are required.</p>
      <sec id="sec-4-1">
        <title>Acknowledgements</title>
        <p>We would like to thank David Koelle, Geoffrey Catto,
Joseph Campolongo, Sam Mahoney, Sean Guarino, and
Eric Carlson for their contributions in the development of
the CIM and identifying hypotheses to investigate. We
also extend our deepest gratitude to Greg Zacharias for
his continued funding and support of our work with
Bayesian networks.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Bainbridge</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sanderson</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Verbal protocol analysis</article-title>
          .
          In
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Wilson</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>E. N.</given-names>
            <surname>Corlett</surname>
          </string-name>
          (Eds.),
          <source>Evaluation of Human Work</source>
          (pp.
          <fpage>159</fpage>
          -
          <lpage>184</lpage>
          ). Boca Raton: Taylor and Francis.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Cox</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Pfautz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Causal Influence Models: A Method for Simplifying Construction of Bayesian Networks</article-title>
          .
          <source>(Rep. No. R-BN07-01)</source>
          . Cambridge, MA: Charles River Analytics Inc.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Diez</surname>
            ,
            <given-names>F. J.</given-names>
          </string-name>
          (
          <year>1993</year>
          ).
          <article-title>Parameter Adjustment in Bayes Networks: The Generalized Noisy OR-Gate</article-title>
          .
          <source>In Proceedings of the 9th Conference of Uncertainty in Artificial Intelligence</source>
          , (pp.
          <fpage>99</fpage>
          -
          <lpage>105</lpage>
          ). San Mateo, CA: Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Diez</surname>
            ,
            <given-names>F. J.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Druzdzel</surname>
            ,
            <given-names>M. J.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Fundamentals of Canonical Models</article-title>
          .
          <source>In Proceedings of IX Conferencia de la Asociacion Espanola para la Inteligencia Artificial (CAEPIA-TTIA 2001)</source>
          , (pp.
          <fpage>1125</fpage>
          -
          <lpage>1134</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Diez</surname>
            ,
            <given-names>F. J.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Galan</surname>
            ,
            <given-names>S. F.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>An Efficient Factorization for the Noisy MAX</article-title>
          .
          <source>International Journal of Intelligent Systems</source>
          ,
          <volume>18</volume>
          ,
          <fpage>165</fpage>
          -
          <lpage>177</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Glymour</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>The Mind's Arrows: Bayes Nets and Graphical Causal Models in Psychology</article-title>
          . Cambridge, MA: The MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Guarino</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pfautz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cox</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Roth</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Modeling Human Reasoning About MetaInformation</article-title>
          .
          <source>In Proceedings of 4th Bayesian Modeling Applications Workshop at the 22nd Annual Conference on Uncertainty in AI: UAI '06</source>
          . Cambridge, Massachusetts.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Hastie</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Dawes</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Rational Choice in an Uncertain World: The Psychology of Judgment and Decision-Making</article-title>
          . London, UK: Sage Publications.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Henrion</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>1989</year>
          ).
          <article-title>Some Practical Issues in Constructing Belief Networks</article-title>
          . In L. Kanal,
          <string-name>
            <given-names>T.</given-names>
            <surname>Levitt</surname>
          </string-name>
          , &amp; J.
          <string-name>
            <surname>Lemmer</surname>
          </string-name>
          (Eds.),
          <source>Uncertainty in Artificial Intelligence</source>
          <volume>3</volume>
          (pp.
          <fpage>161</fpage>
          -
          <lpage>173</lpage>
          ). North Holland: Elsevier Science Publishers.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Jensen</surname>
            ,
            <given-names>F. V.</given-names>
          </string-name>
          (
          <year>1996</year>
          ).
          <article-title>An Introduction to Bayesian Networks</article-title>
          . London: University College London Press.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Jensen</surname>
            ,
            <given-names>F. V.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Bayesian Networks and Decision Graphs</article-title>
          . New York: Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Koelle</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pfautz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Farry</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cox</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Catto</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Campolongo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Applications of Bayesian Belief Networks in Social Network Analysis</article-title>
          .
          <source>In Proceedings of 4th Bayesian Modeling Applications Workshop at the 22nd Annual Conference on Uncertainty in AI: UAI '06</source>
          . Cambridge, Massachusetts.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Kraaijeveld</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Druzdzel</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Onisko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wasyluk</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>GeNIeRate: An Interactive Generator of Diagnostic Bayesian Network Models</article-title>
          .
          <source>In Proceedings of Working Notes of the 16th International Workshop on Principles of Diagnosis (DX-05)</source>
          , (pp.
          <fpage>175</fpage>
          -
          <lpage>180</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>O'Hagan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buck</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Daneshkhah</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eiser</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garthwaite</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jenkinson</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          et al. (
          <year>2006</year>
          ).
          <article-title>Uncertain Judgements: Eliciting Experts' Probabilities</article-title>
          . New York: Wiley &amp; Sons.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Onisko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Druzdzel</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wasyluk</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Learning Bayesian Network Parameters From Small Data Sets: Application of Noisy-OR Gates</article-title>
          .
          <source>International Journal of Approximate Reasoning</source>
          ,
          <volume>27</volume>
          (
          <issue>2</issue>
          ),
          <fpage>165</fpage>
          -
          <lpage>182</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Pang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>Computerized Tongue Diagnosis Based on Bayesian Networks</article-title>
          .
          <source>IEEE Transactions on Biomedical Engineering</source>
          ,
          <volume>51</volume>
          (
          <issue>10</issue>
          ),
          <fpage>1803</fpage>
          -
          <lpage>1810</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Pearl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>1988</year>
          ).
          <article-title>Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference</article-title>
          . San Mateo, CA: Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Pfautz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cox</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koelle</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Catto</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Campolongo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Roth</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>User-Centered Methods for Rapid Creation and Validation of Bayesian Networks</article-title>
          .
          <source>In Proceedings of 5th Bayesian Applications Workshop at Uncertainty in Artificial Intelligence (UAI '07)</source>
          . Vancouver, British Columbia.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Rimey</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Brown</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>1994</year>
          ).
          <article-title>Control of Selective Perception Using Bayes Nets and Decision Theory</article-title>
          .
          <source>International Journal of Computer Vision</source>
          ,
          <volume>12</volume>
          (
          <issue>2-3</issue>
          ),
          <fpage>173</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Rosen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          (
          <year>1996a</year>
          ).
          <article-title>Influence Net Modeling With Causal Strengths: An Evolutionary Approach</article-title>
          .
          <source>In Proceedings of Command and Control Research and Technology Symposium.</source>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Rosen</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>W. L.</given-names>
          </string-name>
          (
          <year>1996b</year>
          ).
          <article-title>Influencing Global Situations: A Collaborative Approach</article-title>
          . US Air Force Air Chronicles.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Sanderson</surname>
            ,
            <given-names>P. M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Fisher</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>1994</year>
          ).
          <article-title>Exploratory sequential data analysis</article-title>
          .
          <source>Human Computer Interaction</source>
          ,
          <volume>9</volume>
          (
          <issue>3</issue>
          ),
          <fpage>251</fpage>
          -
          <lpage>317</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>Van der Gaag</surname>
            ,
            <given-names>L. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geenen</surname>
            ,
            <given-names>P. L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tabachneck-Schijf</surname>
            ,
            <given-names>H. J. M.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Verifying Monotonicity of Bayesian Networks with Domain Experts</article-title>
          .
          <source>In Proceedings of 4th Bayesian Modeling Applications Workshop at the 22nd Annual Conference on Uncertainty in AI: UAI '06</source>
          . Cambridge, Massachusetts.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>Woods</surname>
            ,
            <given-names>D. D.</given-names>
          </string-name>
          (
          <year>1993</year>
          ).
          <article-title>Process tracing methods for the study of cognition outside of the experimental psychology laboratory</article-title>
          . In
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Klein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Orasanu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Calderwood</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Zsambok</surname>
          </string-name>
          (Eds.),
          <source>Decision Making in Action: Models and Methods</source>
          (pp.
          <fpage>228</fpage>
          -
          <lpage>251</lpage>
          ). Norwood, NJ: Ablex Publishers.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>Wellman</surname>
            ,
            <given-names>M. P.</given-names>
          </string-name>
          (
          <year>1990</year>
          ).
          <article-title>Fundamental Concepts of Qualitative Probabilistic Networks</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>44</volume>
          (
          <issue>3</issue>
          ),
          <fpage>257</fpage>
          -
          <lpage>303</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>